Nov 23 06:44:22 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 23 06:44:22 crc restorecon[4470]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 
06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:44:22 crc 
restorecon[4470]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 
06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 
06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc 
restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:44:22 crc restorecon[4470]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 06:44:22 crc restorecon[4470]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 06:44:22 crc restorecon[4470]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 23 06:44:23 crc kubenswrapper[4681]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 06:44:23 crc kubenswrapper[4681]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 23 06:44:23 crc kubenswrapper[4681]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 06:44:23 crc kubenswrapper[4681]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
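The "Flag ... has been deprecated" warnings above come from the kubelet's command-line parsing: Kubernetes components register legacy flags with the spf13/pflag library and mark them deprecated, and pflag then emits exactly this "Flag --x has been deprecated, <message>" line whenever such a flag is still set, while continuing to honor its value. A minimal sketch of that mechanism follows; the flag name is taken from the log, but the endpoint value and the standalone program around it are illustrative, not the kubelet's actual option wiring.

// deprecated_flags.go - a minimal sketch of how "Flag --x has been
// deprecated, ..." warnings like the ones in this log are produced.
// Assumes github.com/spf13/pflag, the flag library Kubernetes components use.
package main

import (
	"fmt"

	"github.com/spf13/pflag"
)

func main() {
	fs := pflag.NewFlagSet("kubelet-sketch", pflag.ContinueOnError)

	// Register a legacy flag, then mark it deprecated. pflag prints the
	// deprecation notice (to stderr by default) whenever the flag is set.
	fs.String("container-runtime-endpoint", "", "endpoint of the container runtime")
	_ = fs.MarkDeprecated("container-runtime-endpoint",
		"This parameter should be set via the config file specified by the Kubelet's --config flag.")

	// Parsing a command line that still uses the flag triggers the warning,
	// but the value remains usable. The socket path here is a placeholder.
	if err := fs.Parse([]string{"--container-runtime-endpoint=unix:///var/run/crio/crio.sock"}); err != nil {
		fmt.Println("parse error:", err)
		return
	}

	endpoint, _ := fs.GetString("container-runtime-endpoint")
	fmt.Println("still usable while deprecated:", endpoint)
}

Because the deprecated value keeps working, the kubelet above starts normally despite the warnings; the messages only nudge the operator toward moving those settings into the KubeletConfiguration file named by --config.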
Nov 23 06:44:23 crc kubenswrapper[4681]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Nov 23 06:44:23 crc kubenswrapper[4681]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.110881 4681 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115439 4681 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115474 4681 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115481 4681 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115486 4681 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115491 4681 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115496 4681 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115501 4681 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115505 4681 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115508 4681 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115511 4681 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115515 4681 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115518 4681 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115521 4681 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115525 4681 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115528 4681 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115532 4681 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115536 4681 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115539 4681 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115542 4681 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115545 4681 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115548 4681 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115552 4681 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115556 4681 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115560 4681 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115564 4681 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115568 4681 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115573 4681 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115576 4681 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115580 4681 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115585 4681 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115589 4681 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115596 4681 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115599 4681 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115603 4681 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115606 4681 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115610 4681 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115613 4681 feature_gate.go:330] unrecognized feature gate: Example
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115616 4681 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115619 4681 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115622 4681 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115625 4681 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115628 4681 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115631 4681 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115635 4681 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115639 4681 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115643 4681 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115646 4681 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115650 4681 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115653 4681 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115656 4681 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115659 4681 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115662 4681 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115665 4681 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115668 4681 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115671 4681 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115675 4681 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115679 4681 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115682 4681 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115686 4681 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115689 4681 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115692 4681 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115695 4681 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115698 4681 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115701 4681 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115705 4681 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115708 4681 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115712 4681 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115716 4681 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115720 4681 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115723 4681 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.115728 4681 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115799 4681 flags.go:64] FLAG: --address="0.0.0.0"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115808 4681 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115815 4681 flags.go:64] FLAG: --anonymous-auth="true"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115820 4681 flags.go:64] FLAG: --application-metrics-count-limit="100"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115825 4681 flags.go:64] FLAG: --authentication-token-webhook="false"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115829 4681 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115835 4681 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115839 4681 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115843 4681 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115847 4681 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115851 4681 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115864 4681 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115869 4681 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115873 4681 flags.go:64] FLAG: --cgroup-root=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115876 4681 flags.go:64] FLAG: --cgroups-per-qos="true"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115880 4681 flags.go:64] FLAG: --client-ca-file=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115884 4681 flags.go:64] FLAG: --cloud-config=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115888 4681 flags.go:64] FLAG: --cloud-provider=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115892 4681 flags.go:64] FLAG: --cluster-dns="[]"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115897 4681 flags.go:64] FLAG: --cluster-domain=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115900 4681 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115904 4681 flags.go:64] FLAG: --config-dir=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115907 4681 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115912 4681 flags.go:64] FLAG: --container-log-max-files="5"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115921 4681 flags.go:64] FLAG: --container-log-max-size="10Mi"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115925 4681 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115929 4681 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115933 4681 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115937 4681 flags.go:64] FLAG: --contention-profiling="false"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115941 4681 flags.go:64] FLAG: --cpu-cfs-quota="true"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115956 4681 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115961 4681 flags.go:64] FLAG: --cpu-manager-policy="none"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115965 4681 flags.go:64] FLAG: --cpu-manager-policy-options=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115971 4681 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115975 4681 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115979 4681 flags.go:64] FLAG: --enable-debugging-handlers="true"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115983 4681 flags.go:64] FLAG: --enable-load-reader="false"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115987 4681 flags.go:64] FLAG: --enable-server="true"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.115991 4681 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116206 4681 flags.go:64] FLAG: --event-burst="100"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116211 4681 flags.go:64] FLAG: --event-qps="50"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116216 4681 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116220 4681 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116225 4681 flags.go:64] FLAG: --eviction-hard=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116230 4681 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116234 4681 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116238 4681 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116243 4681 flags.go:64] FLAG: --eviction-soft=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116247 4681 flags.go:64] FLAG: --eviction-soft-grace-period=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116251 4681 flags.go:64] FLAG: --exit-on-lock-contention="false"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116255 4681 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116259 4681 flags.go:64] FLAG: --experimental-mounter-path=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116263 4681 flags.go:64] FLAG: --fail-cgroupv1="false"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116266 4681 flags.go:64] FLAG: --fail-swap-on="true"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116270 4681 flags.go:64] FLAG: --feature-gates=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116275 4681 flags.go:64] FLAG: --file-check-frequency="20s"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116278 4681 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116282 4681 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116286 4681 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116290 4681 flags.go:64] FLAG: --healthz-port="10248"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116293 4681 flags.go:64] FLAG: --help="false"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116297 4681 flags.go:64] FLAG: --hostname-override=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116302 4681 flags.go:64] FLAG: --housekeeping-interval="10s"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116306 4681 flags.go:64] FLAG: --http-check-frequency="20s"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116310 4681 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116314 4681 flags.go:64] FLAG: --image-credential-provider-config=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116318 4681 flags.go:64] FLAG: --image-gc-high-threshold="85"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116321 4681 flags.go:64] FLAG: --image-gc-low-threshold="80"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116325 4681 flags.go:64] FLAG: --image-service-endpoint=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116329 4681 flags.go:64] FLAG: --kernel-memcg-notification="false"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116333 4681 flags.go:64] FLAG: --kube-api-burst="100"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116337 4681 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116341 4681 flags.go:64] FLAG: --kube-api-qps="50"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116344 4681 flags.go:64] FLAG: --kube-reserved=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116349 4681 flags.go:64] FLAG: --kube-reserved-cgroup=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116352 4681 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116356 4681 flags.go:64] FLAG: --kubelet-cgroups=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116360 4681 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116364 4681 flags.go:64] FLAG: --lock-file=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116367 4681 flags.go:64] FLAG: --log-cadvisor-usage="false"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116373 4681 flags.go:64] FLAG: --log-flush-frequency="5s"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116377 4681 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116389 4681 flags.go:64] FLAG: --log-json-split-stream="false"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116394 4681 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116398 4681 flags.go:64] FLAG: --log-text-split-stream="false"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116401 4681 flags.go:64] FLAG: --logging-format="text"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116405 4681 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116410 4681 flags.go:64] FLAG: --make-iptables-util-chains="true"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116413 4681 flags.go:64] FLAG: --manifest-url=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116417 4681 flags.go:64] FLAG: --manifest-url-header=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116422 4681 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116426 4681 flags.go:64] FLAG: --max-open-files="1000000"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116431 4681 flags.go:64] FLAG: --max-pods="110"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116435 4681 flags.go:64] FLAG: --maximum-dead-containers="-1"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116439 4681 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116442 4681 flags.go:64] FLAG: --memory-manager-policy="None"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116446 4681 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116450 4681 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116453 4681 flags.go:64] FLAG: --node-ip="192.168.126.11"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116474 4681 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116485 4681 flags.go:64] FLAG: --node-status-max-images="50"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116489 4681 flags.go:64] FLAG: --node-status-update-frequency="10s"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116493 4681 flags.go:64] FLAG: --oom-score-adj="-999"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116497 4681 flags.go:64] FLAG: --pod-cidr=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116500 4681 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116507 4681 flags.go:64] FLAG: --pod-manifest-path=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116510 4681 flags.go:64] FLAG: --pod-max-pids="-1"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116514 4681 flags.go:64] FLAG: --pods-per-core="0"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116517 4681 flags.go:64] FLAG: --port="10250"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116521 4681 flags.go:64] FLAG: --protect-kernel-defaults="false"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116525 4681 flags.go:64] FLAG: --provider-id=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116528 4681 flags.go:64] FLAG: --qos-reserved=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116539 4681 flags.go:64] FLAG: --read-only-port="10255"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116543 4681 flags.go:64] FLAG: --register-node="true"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116547 4681 flags.go:64] FLAG: --register-schedulable="true"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116551 4681 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116557 4681 flags.go:64] FLAG: --registry-burst="10"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116561 4681 flags.go:64] FLAG: --registry-qps="5"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116565 4681 flags.go:64] FLAG: --reserved-cpus=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116569 4681 flags.go:64] FLAG: --reserved-memory=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116574 4681 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116578 4681 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116582 4681 flags.go:64] FLAG: --rotate-certificates="false"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116586 4681 flags.go:64] FLAG: --rotate-server-certificates="false"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116590 4681 flags.go:64] FLAG: --runonce="false"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116594 4681 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116598 4681 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116601 4681 flags.go:64] FLAG: --seccomp-default="false"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116605 4681 flags.go:64] FLAG: --serialize-image-pulls="true"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116609 4681 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116614 4681 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116617 4681 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116621 4681 flags.go:64] FLAG: --storage-driver-password="root"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116625 4681 flags.go:64] FLAG: --storage-driver-secure="false"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116628 4681 flags.go:64] FLAG: --storage-driver-table="stats"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116632 4681 flags.go:64] FLAG: --storage-driver-user="root"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116635 4681 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116639 4681 flags.go:64] FLAG: --sync-frequency="1m0s"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116643 4681 flags.go:64] FLAG: --system-cgroups=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116646 4681 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116652 4681 flags.go:64] FLAG: --system-reserved-cgroup=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116656 4681 flags.go:64] FLAG: --tls-cert-file=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116660 4681 flags.go:64] FLAG: --tls-cipher-suites="[]"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116667 4681 flags.go:64] FLAG: --tls-min-version=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116672 4681 flags.go:64] FLAG: --tls-private-key-file=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116676 4681 flags.go:64] FLAG: --topology-manager-policy="none"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116680 4681 flags.go:64] FLAG: --topology-manager-policy-options=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116684 4681 flags.go:64] FLAG: --topology-manager-scope="container"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116688 4681 flags.go:64] FLAG: --v="2"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116693 4681 flags.go:64] FLAG: --version="false"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116697 4681 flags.go:64] FLAG: --vmodule=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116702 4681 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.116706 4681 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116807 4681 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116812 4681 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116816 4681 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116820 4681 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116824 4681 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116827 4681 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116831 4681 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116835 4681 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116838 4681 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116842 4681 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116846 4681 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116849 4681 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116853 4681 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116856 4681 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116860 4681 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116863 4681 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116866 4681 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116870 4681 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116873 4681 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116876 4681 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116880 4681 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116883 4681 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116886 4681 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116891 4681 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116895 4681 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116899 4681 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116902 4681 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116905 4681 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116908 4681 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116912 4681 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116916 4681 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116920 4681 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116924 4681 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116927 4681 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116930 4681 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116934 4681 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116937 4681 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116940 4681 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116953 4681 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116956 4681 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116960 4681 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116964 4681 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116968 4681 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116971 4681 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116975 4681 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116978 4681 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116981 4681 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116985 4681 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116988 4681 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116991 4681 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116994 4681 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.116998 4681 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.117001 4681 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.117004 4681 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.117011 4681 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.117016 4681 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.117019 4681 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.117022 4681 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.117025 4681 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.117028 4681 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.117033 4681 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.117037 4681 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.117041 4681 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.117044 4681 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.117048 4681 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.117051 4681 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.117056 4681 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.117060 4681 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.117064 4681 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.117067 4681 feature_gate.go:330] unrecognized feature gate: Example
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.117071 4681 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.117420 4681 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.123604 4681 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.123633 4681 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123706 4681 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123721 4681 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123725 4681 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123729 4681 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123732 4681 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123736 4681 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123739 4681 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123742 4681 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123746 4681 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123749 4681 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123753 4681 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123758 4681 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123763 4681 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123767 4681 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123771 4681 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123774 4681 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123778 4681 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123781 4681 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123785 4681 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123788 4681 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123793 4681 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123797 4681 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123801 4681 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123805 4681 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123808 4681 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123812 4681 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123816 4681 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123819 4681 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123822 4681 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123828 4681 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123831 4681 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123835 4681 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123838 4681 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123841 4681 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123844 4681 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123848 4681 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123851 4681 feature_gate.go:330] unrecognized feature gate: Example
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123854 4681 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123857 4681 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123860 4681 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123863 4681 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123867 4681 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123870 4681 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123873 4681 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123876 4681 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123879 4681 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123882 4681 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123885 4681 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123889 4681 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123892 4681 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123895 4681 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123898 4681 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123901 4681 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123905 4681 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123909 4681 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123913 4681 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123917 4681 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123920 4681 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123924 4681 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123927 4681 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123930 4681 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123933 4681 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123936 4681 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123940 4681 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123943 4681 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123957 4681 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123960 4681 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123964 4681 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123968 4681 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123973 4681 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.123976 4681 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.123982 4681 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124085 4681 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124091 4681 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124094 4681 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124098 4681 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124102 4681 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124105 4681 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124109 4681 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124113 4681 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124117 4681 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124120 4681 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124124 4681 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124128 4681 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124132 4681 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124136 4681 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124140 4681 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124144 4681 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124147 4681 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124150 4681 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124154 4681 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124157 4681 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124161 4681 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124164 4681 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124167 4681 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124171 4681 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124174 4681 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124177 4681 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124180 4681 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124183 4681 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124187 4681 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124190 4681 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124193 4681 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124197 4681 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124200 4681 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124203 4681 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124207 4681 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124211 4681 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124215 4681 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124219 4681 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124223 4681 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124226 4681 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124229 4681 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124234 4681 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124238 4681 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124242 4681 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124246 4681 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124249 4681 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124252 4681 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124256 4681 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124259 4681 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124262 4681 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124265 4681 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124268 4681 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124271 4681 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124275 4681 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124278 4681 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124281 4681 feature_gate.go:330] unrecognized feature gate: Example
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124284 4681 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124287 4681 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124291 4681 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124294 4681 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124297 4681 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124300 4681 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124303 4681 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124306 4681 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124310 4681 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124315 4681 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124318 4681 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124321 4681 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124325 4681 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124328 4681 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.124331 4681 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.124337 4681 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.124504 4681 server.go:940] "Client rotation is on, will bootstrap in background"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.127005 4681 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.127093 4681 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.128077 4681 server.go:997] "Starting client certificate rotation"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.128102 4681 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.128710 4681 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-17 05:50:26.741012717 +0000 UTC
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.128764 4681 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.139303 4681 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.140681 4681 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 23 06:44:23 crc kubenswrapper[4681]: E1123 06:44:23.141174 4681 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.26.82:6443: connect: connection refused" logger="UnhandledError"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.151647 4681 log.go:25] "Validated CRI v1 runtime API"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.170706 4681 log.go:25] "Validated CRI v1 image API"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.172671 4681 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.175118 4681 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-23-06-40-42-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.175157 4681 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:49 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/containers/storage/overlay-containers/75d81934760b26101869fbd8e4b5954c62b019c1cc3e5a0c9f82ed8de46b3b22/userdata/shm:{mountpoint:/var/lib/containers/storage/overlay-containers/75d81934760b26101869fbd8e4b5954c62b019c1cc3e5a0c9f82ed8de46b3b22/userdata/shm major:0 minor:42 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:50 fsType:tmpfs blockSize:0} overlay_0-43:{mountpoint:/var/lib/containers/storage/overlay/94b752e0a51c0134b00ddef6dc7a933a9d7c1d9bdc88a18dae4192a0d557d623/merged major:0 minor:43 fsType:overlay blockSize:0}]
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.185737 4681 manager.go:217] Machine: {Timestamp:2025-11-23 06:44:23.184360785 +0000 UTC m=+0.253870021 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2445406 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:a4227fe6-6af4-43a0-a77f-7b8ab03d3548 BootID:a407e0b2-9c3a-4221-8e9d-4076c1148487 Filesystems:[{Device:overlay_0-43 DeviceMajor:0 DeviceMinor:43 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:49 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:50 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/var/lib/containers/storage/overlay-containers/75d81934760b26101869fbd8e4b5954c62b019c1cc3e5a0c9f82ed8de46b3b22/userdata/shm DeviceMajor:0 DeviceMinor:42 Capacity:65536000 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:d9:83:ce Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:enp3s0 MacAddress:fa:16:3e:d9:83:ce Speed:-1 Mtu:1500} {Name:enp7s0 MacAddress:fa:16:3e:3e:ae:83 Speed:-1 Mtu:1440} {Name:enp7s0.20 MacAddress:52:54:00:20:4d:0c Speed:-1 Mtu:1436} {Name:enp7s0.21 MacAddress:52:54:00:eb:65:94 Speed:-1 Mtu:1436} {Name:enp7s0.22 MacAddress:52:54:00:90:92:77 Speed:-1 Mtu:1436} {Name:eth10 MacAddress:82:6f:7a:c5:b1:f4 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:3e:89:ab:0f:99:1c Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:65536 Type:Data Level:1} {Id:0 Size:65536 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:65536 Type:Data Level:1} {Id:1 Size:65536 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:65536 Type:Data Level:1} {Id:2 Size:65536 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:65536 Type:Data Level:1} {Id:3 Size:65536 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:65536 Type:Data Level:1} {Id:4 Size:65536 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:65536 Type:Data Level:1} {Id:5 Size:65536 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:65536 Type:Data Level:1} {Id:6 Size:65536 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:65536 Type:Data Level:1} {Id:7 Size:65536 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.185919 4681 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.186026 4681 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.186741 4681 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.186927 4681 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.186960 4681 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.187161 4681 topology_manager.go:138] "Creating topology manager with none policy"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.187172 4681 container_manager_linux.go:303] "Creating device plugin manager"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.187605 4681 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.187638 4681 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.188077 4681 state_mem.go:36] "Initialized new in-memory state store"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.188160 4681 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.189516 4681 kubelet.go:418] "Attempting to sync node with API server"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.189539 4681 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.189561 4681 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.189571 4681 kubelet.go:324] "Adding apiserver pod source"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.189582 4681 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.191401 4681 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.191901 4681 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.192564 4681 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.26.82:6443: connect: connection refused
Nov 23 06:44:23 crc kubenswrapper[4681]: E1123 06:44:23.192645 4681 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 192.168.26.82:6443: connect: connection refused" logger="UnhandledError"
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.192794 4681 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.26.82:6443: connect: connection refused
Nov 23 06:44:23 crc kubenswrapper[4681]: E1123 06:44:23.192848 4681 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.26.82:6443: connect: connection refused" logger="UnhandledError"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.193105 4681 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.194058 4681 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.194081 4681 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.194090 4681 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.194097 4681 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.194109 4681 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.194115 4681 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.194122 4681 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.194135 4681 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.194142 4681 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.194164 4681 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.194183 4681 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.194190 4681 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.194547 4681 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.194900 4681 server.go:1280] "Started kubelet"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.195413 4681 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.195412 4681 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 23 06:44:23 crc systemd[1]: Started Kubernetes Kubelet.
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.197738 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 192.168.26.82:6443: connect: connection refused
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.198695 4681 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.199416 4681 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.199446 4681 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.201063 4681 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 06:12:46.801587388 +0000 UTC
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.201129 4681 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 167h28m23.600459775s for next certificate rotation
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.201590 4681 volume_manager.go:287] "The desired_state_of_world populator starts"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.201615 4681 volume_manager.go:289] "Starting Kubelet Volume Manager"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.202496 4681 factory.go:55] Registering systemd factory
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.202521 4681 factory.go:221] Registration of the systemd container factory successfully
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.203338 4681 factory.go:153] Registering CRI-O factory
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.203422 4681 factory.go:221] Registration of the crio container factory successfully
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.203657 4681 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.203700 4681 factory.go:103] Registering Raw factory
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.203723 4681 manager.go:1196] Started watching for new ooms in manager
Nov 23 06:44:23 crc kubenswrapper[4681]: E1123 06:44:23.203317 4681 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 192.168.26.82:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187a8fc1020da08d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-23 06:44:23.194878093 +0000 UTC m=+0.264387330,LastTimestamp:2025-11-23 06:44:23.194878093 +0000 UTC m=+0.264387330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.208506 4681 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 23 06:44:23 crc kubenswrapper[4681]: E1123 06:44:23.208731 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Nov 23 06:44:23 crc kubenswrapper[4681]: E1123 06:44:23.209045 4681 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.26.82:6443: connect: connection refused" interval="200ms"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.209665 4681 server.go:460] "Adding debug handlers to kubelet server"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.211290 4681 manager.go:319] Starting recovery of all containers
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.212387 4681 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.26.82:6443: connect: connection refused
Nov 23 06:44:23 crc kubenswrapper[4681]: E1123 06:44:23.212575 4681 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.26.82:6443: connect: connection refused" logger="UnhandledError"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216158 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216215 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216229 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216240 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216251 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216261 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216271 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216282 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216293 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216303 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216316 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216326 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216335 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216347 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216355 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216367 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216375 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216385 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216394 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216404 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216413 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216422 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216431 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216440 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216449 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216476 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216490 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216500 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216508 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216517 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216530 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216539 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216550 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216558 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216569 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216579 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216588 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216597 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216606 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216618 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216627 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216636 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216645 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216654 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216663 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216673 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216684 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216694 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216703 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216714 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216726 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216736 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216750 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216759 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216770 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216782 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216791 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216800 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216811 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216819 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216843 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216852 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216861 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216870 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216880 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216889 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216899 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216910 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216921 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216930 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.216939 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.217763 4681 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.217784 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.217794 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.217803 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.217813 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.217822 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.217831 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.217841 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.217860 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.217870 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.217881 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.217892 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.217902 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.217913 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.217934 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.217944 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.217956 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.217965 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.217992 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218004 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218015 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218026 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218037 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218047 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218058 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218067 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218080 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218092 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218103 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218114 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218124 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218136 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218146 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218156 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218179 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218190 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218202 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218215 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218228 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218239 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218255 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218292 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218305 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218317 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218329 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218340 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218351 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218363 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218374 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218385 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218395 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218407 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218417 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218428 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218439 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218451 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218537 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218549 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218561 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218575 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218586 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218599 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218611 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218623 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218633 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218645 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218684 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218696 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218707 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218717 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218729 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218740 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218754 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218766 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218778 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218791 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218804 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218816 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218828 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218843 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218854 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218866 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218878 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218890 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218901 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218913 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218924 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218935 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218949 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218962 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218974 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.218995 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219006 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219016 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219029 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219040 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219051 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219063 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219084 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219098 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219109 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219123 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219134 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219146 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219159 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219170 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219182 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219193 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219204 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219217 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219229 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219241 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219253 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219267 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219279 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219294 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219305 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219317 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219329 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219340 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219352 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219363 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219375 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219386 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219396 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219408 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219420 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219431 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219442 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219454 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219479 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219490 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219502 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219514 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219525 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219536 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219549 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219559 4681 reconstruct.go:97] "Volume reconstruction finished" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.219568 4681 reconciler.go:26] "Reconciler: start to sync state" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.234972 4681 manager.go:324] Recovery completed Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.247368 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.248691 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.248738 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.248752 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.249091 4681 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.249435 4681 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.249453 4681 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.249487 4681 state_mem.go:36] "Initialized new in-memory state store" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.250517 4681 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.250565 4681 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.250609 4681 kubelet.go:2335] "Starting kubelet main sync loop" Nov 23 06:44:23 crc kubenswrapper[4681]: E1123 06:44:23.250661 4681 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.251921 4681 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.26.82:6443: connect: connection refused Nov 23 06:44:23 crc kubenswrapper[4681]: E1123 06:44:23.252030 4681 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.26.82:6443: connect: connection refused" logger="UnhandledError" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.255839 4681 policy_none.go:49] "None policy: Start" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.256858 4681 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.256895 4681 state_mem.go:35] "Initializing new in-memory state store" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.299357 4681 manager.go:334] "Starting Device Plugin manager" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.299415 4681 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.299429 4681 server.go:79] "Starting device plugin registration server" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.299851 4681 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.299875 4681 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.300129 4681 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.300218 4681 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.300233 4681 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 23 06:44:23 crc kubenswrapper[4681]: E1123 06:44:23.307889 4681 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.351098 4681 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.351206 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 
06:44:23.352137 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.352174 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.352186 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.352326 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.352602 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.352657 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.353149 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.353181 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.353192 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.353408 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.353596 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.353635 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.353633 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.353739 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.353757 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.354125 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.354141 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.354150 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.354256 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.354360 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.354390 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.354399 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.354490 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.354525 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.355261 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.355294 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.355305 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.355410 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.355431 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.355441 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.355572 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.355696 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.355727 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.356260 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.356286 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.356297 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.356372 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.356394 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.356404 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.356626 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.356659 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.357525 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.357553 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.357564 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.400405 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.401252 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.401284 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.401293 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.401315 4681 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 23 06:44:23 crc kubenswrapper[4681]: E1123 06:44:23.401665 4681 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.26.82:6443: connect: connection refused" node="crc" Nov 23 06:44:23 crc kubenswrapper[4681]: E1123 06:44:23.409900 4681 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.26.82:6443: connect: connection refused" interval="400ms" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.421167 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.421199 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.421221 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.421250 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.421298 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.421356 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.421382 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.421399 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.421441 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.421475 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.421492 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.421507 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.421543 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.421557 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.421570 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.523535 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.523624 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.523667 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.523686 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.523702 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.523738 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.523758 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 
06:44:23.523771 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.523805 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.523824 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.523843 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.523875 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.523889 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.523905 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.523926 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.524587 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.524653 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.524711 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.524738 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.524774 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.524815 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.524818 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.524842 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.524874 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.524895 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.524873 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.524928 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.524935 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.524965 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.524985 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.602354 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.604077 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.604116 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.604125 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.604142 4681 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 23 06:44:23 crc kubenswrapper[4681]: E1123 06:44:23.604536 4681 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.26.82:6443: connect: connection refused" node="crc"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.682229 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.687058 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.700968 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.712784 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-1d7bd83bdec31008d95d6e8b4fa6d6437d031a69521159e2006c213e52f62b07 WatchSource:0}: Error finding container 1d7bd83bdec31008d95d6e8b4fa6d6437d031a69521159e2006c213e52f62b07: Status 404 returned error can't find the container with id 1d7bd83bdec31008d95d6e8b4fa6d6437d031a69521159e2006c213e52f62b07
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.713291 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-ee0d402daea8c02f7b622b52426f3fdd680e26d084ac4d35c33a146906131bd5 WatchSource:0}: Error finding container ee0d402daea8c02f7b622b52426f3fdd680e26d084ac4d35c33a146906131bd5: Status 404 returned error can't find the container with id ee0d402daea8c02f7b622b52426f3fdd680e26d084ac4d35c33a146906131bd5
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.715088 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-deee5718fc45e033942734c4735b8e42ca6a081e4d25cc7b2bde0e2719764911 WatchSource:0}: Error finding container deee5718fc45e033942734c4735b8e42ca6a081e4d25cc7b2bde0e2719764911: Status 404 returned error can't find the container with id deee5718fc45e033942734c4735b8e42ca6a081e4d25cc7b2bde0e2719764911
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.716654 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 23 06:44:23 crc kubenswrapper[4681]: I1123 06:44:23.721906 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.732863 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-817e951372c2204735b021c7019e26f610279eae586d4e9e76a577d739b4d311 WatchSource:0}: Error finding container 817e951372c2204735b021c7019e26f610279eae586d4e9e76a577d739b4d311: Status 404 returned error can't find the container with id 817e951372c2204735b021c7019e26f610279eae586d4e9e76a577d739b4d311
Nov 23 06:44:23 crc kubenswrapper[4681]: W1123 06:44:23.739197 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-ca11cfbcecfface73c420724d6e053c4d91c95c8ba56250f892663432d79744b WatchSource:0}: Error finding container ca11cfbcecfface73c420724d6e053c4d91c95c8ba56250f892663432d79744b: Status 404 returned error can't find the container with id ca11cfbcecfface73c420724d6e053c4d91c95c8ba56250f892663432d79744b
Nov 23 06:44:23 crc kubenswrapper[4681]: E1123 06:44:23.810688 4681 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.26.82:6443: connect: connection refused" interval="800ms"
Nov 23 06:44:24 crc kubenswrapper[4681]: W1123 06:44:24.004121 4681 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.26.82:6443: connect: connection refused
Nov 23 06:44:24 crc kubenswrapper[4681]: E1123 06:44:24.004247 4681 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.26.82:6443: connect: connection refused" logger="UnhandledError"
Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.005050 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.006243 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.006291 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.006302 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.006338 4681 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 23 06:44:24 crc kubenswrapper[4681]: E1123 06:44:24.006876 4681 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.26.82:6443: connect: connection refused" node="crc"
Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.198401 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 192.168.26.82:6443: connect: connection refused
Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.256364 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13"}
Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.256883 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"817e951372c2204735b021c7019e26f610279eae586d4e9e76a577d739b4d311"}
Nov 23 06:44:24 crc kubenswrapper[4681]: W1123 06:44:24.260198 4681 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.26.82:6443: connect: connection refused
Nov 23 06:44:24 crc kubenswrapper[4681]: E1123 06:44:24.260277 4681 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 192.168.26.82:6443: connect: connection refused" logger="UnhandledError"
Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.260428 4681 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29" exitCode=0
Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.260493 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29"}
Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.260531 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"deee5718fc45e033942734c4735b8e42ca6a081e4d25cc7b2bde0e2719764911"}
Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.260662 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.262361 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.262394 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.262427 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.263028 4681 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d6a7533fad82c3bcbde3e6ac81477adb6f58d7e609d19f33d4f02a843d140025" exitCode=0
Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.263084 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d6a7533fad82c3bcbde3e6ac81477adb6f58d7e609d19f33d4f02a843d140025"}
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d6a7533fad82c3bcbde3e6ac81477adb6f58d7e609d19f33d4f02a843d140025"} Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.263114 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ee0d402daea8c02f7b622b52426f3fdd680e26d084ac4d35c33a146906131bd5"} Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.263196 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.263959 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.263983 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.263993 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.264905 4681 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="4502af61097d8c6788f280066fd38f6a94e6aa9ab63b3086f5e5a8a7daaddd41" exitCode=0 Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.264956 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"4502af61097d8c6788f280066fd38f6a94e6aa9ab63b3086f5e5a8a7daaddd41"} Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.264972 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"1d7bd83bdec31008d95d6e8b4fa6d6437d031a69521159e2006c213e52f62b07"} Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.265017 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.266739 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.269673 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.269718 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.269732 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.269743 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.269773 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.269785 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.270370 4681 generic.go:334] "Generic (PLEG): container finished" 
podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="83eb8cfb97a65f9516f9973a491cd60aacd32bf59681f45f60402f8bbf6b1c95" exitCode=0 Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.270398 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"83eb8cfb97a65f9516f9973a491cd60aacd32bf59681f45f60402f8bbf6b1c95"} Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.270539 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ca11cfbcecfface73c420724d6e053c4d91c95c8ba56250f892663432d79744b"} Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.270556 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.271635 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.271668 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.271686 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:24 crc kubenswrapper[4681]: W1123 06:44:24.446395 4681 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.26.82:6443: connect: connection refused Nov 23 06:44:24 crc kubenswrapper[4681]: E1123 06:44:24.446505 4681 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.26.82:6443: connect: connection refused" logger="UnhandledError" Nov 23 06:44:24 crc kubenswrapper[4681]: E1123 06:44:24.611688 4681 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.26.82:6443: connect: connection refused" interval="1.6s" Nov 23 06:44:24 crc kubenswrapper[4681]: W1123 06:44:24.784123 4681 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.26.82:6443: connect: connection refused Nov 23 06:44:24 crc kubenswrapper[4681]: E1123 06:44:24.784199 4681 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.26.82:6443: connect: connection refused" logger="UnhandledError" Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.807720 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.808763 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.808796 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.808804 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:24 crc kubenswrapper[4681]: I1123 06:44:24.808825 4681 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 23 06:44:24 crc kubenswrapper[4681]: E1123 06:44:24.809263 4681 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.26.82:6443: connect: connection refused" node="crc" Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.274399 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0c007b94529ec5fe2c0606433986e94de3bf63772bd1291e55b4d06080471393"} Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.274453 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d20d891ac3bcc1513a349fc37f6cceedb64e89b41f92dc098ac6c0ffc074e6cf"} Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.274423 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.274486 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4004d43474bcbff07bbc45d42feefffb8f41e26f0d34bcec50b9c17ea8795a6d"} Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.275209 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.275250 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.275260 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.276104 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4"} Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.276161 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6"} Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.276172 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536"} Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.276124 4681 kubelet_node_status.go:401] "Setting node 
Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.276864 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.276899 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.276909 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.279236 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6"}
Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.279262 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6"}
Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.279274 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9"}
Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.279291 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f"}
Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.279299 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f"}
Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.279378 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.280040 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.280064 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.280073 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.280388 4681 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="1e0c8d0174c5324a7cd0185fcbd7b9c71d0d52bafb06f4270ecf0385b3d940b0" exitCode=0
Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.280442 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"1e0c8d0174c5324a7cd0185fcbd7b9c71d0d52bafb06f4270ecf0385b3d940b0"}
Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.280540 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.281084 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.281111 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.281122 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.281514 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"10d803964c3c48bbbb674ce8c9ff214415b7f3cb5f545daf2dbe6463c9191e22"}
Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.281608 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.282144 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.282173 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.282188 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.338990 4681 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.390910 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.910105 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 23 06:44:25 crc kubenswrapper[4681]: I1123 06:44:25.974405 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 23 06:44:26 crc kubenswrapper[4681]: I1123 06:44:26.063762 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 23 06:44:26 crc kubenswrapper[4681]: I1123 06:44:26.284891 4681 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="58f1fb8381eab701a7a330e71f9c8ab8839eed5a29529981f530493e0a161b27" exitCode=0
Nov 23 06:44:26 crc kubenswrapper[4681]: I1123 06:44:26.285008 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:44:26 crc kubenswrapper[4681]: I1123 06:44:26.285401 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:44:26 crc kubenswrapper[4681]: I1123 06:44:26.285417 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"58f1fb8381eab701a7a330e71f9c8ab8839eed5a29529981f530493e0a161b27"}
Nov 23 06:44:26 crc kubenswrapper[4681]: I1123 06:44:26.285488 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:44:26 crc kubenswrapper[4681]: I1123 06:44:26.285592 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:44:26 crc kubenswrapper[4681]: I1123 06:44:26.286052 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:44:26 crc kubenswrapper[4681]: I1123 06:44:26.286078 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:44:26 crc kubenswrapper[4681]: I1123 06:44:26.286087 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:44:26 crc kubenswrapper[4681]: I1123 06:44:26.286158 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:44:26 crc kubenswrapper[4681]: I1123 06:44:26.286182 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:44:26 crc kubenswrapper[4681]: I1123 06:44:26.286193 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:44:26 crc kubenswrapper[4681]: I1123 06:44:26.286527 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:44:26 crc kubenswrapper[4681]: I1123 06:44:26.286550 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:44:26 crc kubenswrapper[4681]: I1123 06:44:26.286558 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:44:26 crc kubenswrapper[4681]: I1123 06:44:26.286533 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:44:26 crc kubenswrapper[4681]: I1123 06:44:26.286589 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:44:26 crc kubenswrapper[4681]: I1123 06:44:26.286597 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:44:26 crc kubenswrapper[4681]: I1123 06:44:26.409363 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:44:26 crc kubenswrapper[4681]: I1123 06:44:26.410213 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:44:26 crc kubenswrapper[4681]: I1123 06:44:26.410282 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:44:26 crc kubenswrapper[4681]: I1123 06:44:26.410295 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:44:26 crc kubenswrapper[4681]: I1123 06:44:26.410378 4681 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 23 06:44:27 crc kubenswrapper[4681]: I1123 06:44:27.291651 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:44:27 crc kubenswrapper[4681]: I1123 06:44:27.291692 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b26dff5498f97d1c5fd38fbb06e98116b4aebef1e01e7cc8219f5971a3be661e"}
Nov 23 06:44:27 crc kubenswrapper[4681]: I1123 06:44:27.291770 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"057db013dba2fc0efcea9f65f573ef19071ae0e073226626ea9d9555385baac8"}
"SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"057db013dba2fc0efcea9f65f573ef19071ae0e073226626ea9d9555385baac8"} Nov 23 06:44:27 crc kubenswrapper[4681]: I1123 06:44:27.291795 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0517023994a8ba8b417264325e435b932011e2cc9ea84e476b91facaad23b076"} Nov 23 06:44:27 crc kubenswrapper[4681]: I1123 06:44:27.291807 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"440a385e359fa7c1a682c0ff621441cd19abd5a73e2562e5c9a70772c0a7560f"} Nov 23 06:44:27 crc kubenswrapper[4681]: I1123 06:44:27.291818 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6868c3db9d3a8a708de57a9fba4073d45b2f2b3705d25c74f4e17c2a11918c7c"} Nov 23 06:44:27 crc kubenswrapper[4681]: I1123 06:44:27.291941 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:44:27 crc kubenswrapper[4681]: I1123 06:44:27.292065 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:44:27 crc kubenswrapper[4681]: I1123 06:44:27.292338 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:44:27 crc kubenswrapper[4681]: I1123 06:44:27.293021 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:27 crc kubenswrapper[4681]: I1123 06:44:27.293048 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:27 crc kubenswrapper[4681]: I1123 06:44:27.293062 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:27 crc kubenswrapper[4681]: I1123 06:44:27.293128 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:27 crc kubenswrapper[4681]: I1123 06:44:27.293149 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:27 crc kubenswrapper[4681]: I1123 06:44:27.293184 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:27 crc kubenswrapper[4681]: I1123 06:44:27.293161 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:27 crc kubenswrapper[4681]: I1123 06:44:27.293206 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:27 crc kubenswrapper[4681]: I1123 06:44:27.293234 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:27 crc kubenswrapper[4681]: I1123 06:44:27.293246 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:27 crc kubenswrapper[4681]: I1123 06:44:27.293256 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:27 crc kubenswrapper[4681]: I1123 06:44:27.293196 
4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:27 crc kubenswrapper[4681]: I1123 06:44:27.442941 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 23 06:44:28 crc kubenswrapper[4681]: I1123 06:44:28.296193 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:44:28 crc kubenswrapper[4681]: I1123 06:44:28.297269 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:28 crc kubenswrapper[4681]: I1123 06:44:28.297302 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:28 crc kubenswrapper[4681]: I1123 06:44:28.297310 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:28 crc kubenswrapper[4681]: I1123 06:44:28.407731 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 06:44:28 crc kubenswrapper[4681]: I1123 06:44:28.407844 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:44:28 crc kubenswrapper[4681]: I1123 06:44:28.408559 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:28 crc kubenswrapper[4681]: I1123 06:44:28.408587 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:28 crc kubenswrapper[4681]: I1123 06:44:28.408596 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:28 crc kubenswrapper[4681]: I1123 06:44:28.992244 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 23 06:44:29 crc kubenswrapper[4681]: I1123 06:44:29.299246 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:44:29 crc kubenswrapper[4681]: I1123 06:44:29.300105 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:29 crc kubenswrapper[4681]: I1123 06:44:29.300149 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:29 crc kubenswrapper[4681]: I1123 06:44:29.300160 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:30 crc kubenswrapper[4681]: I1123 06:44:30.021410 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:44:30 crc kubenswrapper[4681]: I1123 06:44:30.021683 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:44:30 crc kubenswrapper[4681]: I1123 06:44:30.022896 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:30 crc kubenswrapper[4681]: I1123 06:44:30.022925 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:30 crc kubenswrapper[4681]: I1123 06:44:30.022934 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 23 06:44:30 crc kubenswrapper[4681]: I1123 06:44:30.161373 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:44:30 crc kubenswrapper[4681]: I1123 06:44:30.300763 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:44:30 crc kubenswrapper[4681]: I1123 06:44:30.301775 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:30 crc kubenswrapper[4681]: I1123 06:44:30.301810 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:30 crc kubenswrapper[4681]: I1123 06:44:30.301819 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:31 crc kubenswrapper[4681]: I1123 06:44:31.408740 4681 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 23 06:44:31 crc kubenswrapper[4681]: I1123 06:44:31.409051 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 23 06:44:33 crc kubenswrapper[4681]: E1123 06:44:33.308509 4681 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 23 06:44:33 crc kubenswrapper[4681]: I1123 06:44:33.693845 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 06:44:33 crc kubenswrapper[4681]: I1123 06:44:33.694007 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:44:33 crc kubenswrapper[4681]: I1123 06:44:33.695078 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:33 crc kubenswrapper[4681]: I1123 06:44:33.695122 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:33 crc kubenswrapper[4681]: I1123 06:44:33.695132 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:33 crc kubenswrapper[4681]: I1123 06:44:33.698241 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 06:44:34 crc kubenswrapper[4681]: I1123 06:44:34.308432 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:44:34 crc kubenswrapper[4681]: I1123 06:44:34.309169 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:34 crc kubenswrapper[4681]: I1123 06:44:34.309200 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:44:34 crc kubenswrapper[4681]: I1123 06:44:34.309208 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:34 crc kubenswrapper[4681]: I1123 06:44:34.312387 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 06:44:35 crc kubenswrapper[4681]: I1123 06:44:35.199345 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 23 06:44:35 crc kubenswrapper[4681]: I1123 06:44:35.310187 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:44:35 crc kubenswrapper[4681]: I1123 06:44:35.311027 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:35 crc kubenswrapper[4681]: I1123 06:44:35.311053 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:35 crc kubenswrapper[4681]: I1123 06:44:35.311063 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:35 crc kubenswrapper[4681]: E1123 06:44:35.340707 4681 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 23 06:44:36 crc kubenswrapper[4681]: E1123 06:44:36.213347 4681 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Nov 23 06:44:36 crc kubenswrapper[4681]: I1123 06:44:36.316186 4681 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 23 06:44:36 crc kubenswrapper[4681]: I1123 06:44:36.316271 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 23 06:44:36 crc kubenswrapper[4681]: I1123 06:44:36.324604 4681 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 23 06:44:36 crc kubenswrapper[4681]: I1123 06:44:36.324659 4681 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 23 06:44:37 crc kubenswrapper[4681]: I1123 06:44:37.466911 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 23 06:44:37 crc kubenswrapper[4681]: I1123 06:44:37.467103 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:44:37 crc kubenswrapper[4681]: I1123 06:44:37.468410 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:37 crc kubenswrapper[4681]: I1123 06:44:37.468437 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:37 crc kubenswrapper[4681]: I1123 06:44:37.468445 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:37 crc kubenswrapper[4681]: I1123 06:44:37.476694 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 23 06:44:38 crc kubenswrapper[4681]: I1123 06:44:38.318518 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:44:38 crc kubenswrapper[4681]: I1123 06:44:38.319233 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:38 crc kubenswrapper[4681]: I1123 06:44:38.319274 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:38 crc kubenswrapper[4681]: I1123 06:44:38.319283 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:39 crc kubenswrapper[4681]: I1123 06:44:39.717974 4681 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 23 06:44:39 crc kubenswrapper[4681]: I1123 06:44:39.730180 4681 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Nov 23 06:44:40 crc kubenswrapper[4681]: I1123 06:44:40.025772 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:44:40 crc kubenswrapper[4681]: I1123 06:44:40.025939 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:44:40 crc kubenswrapper[4681]: I1123 06:44:40.027122 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:40 crc kubenswrapper[4681]: I1123 06:44:40.027159 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:40 crc kubenswrapper[4681]: I1123 06:44:40.027169 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:40 crc kubenswrapper[4681]: I1123 06:44:40.032418 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:44:40 crc kubenswrapper[4681]: I1123 06:44:40.322779 4681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 06:44:40 crc kubenswrapper[4681]: I1123 06:44:40.322829 4681 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Nov 23 06:44:40 crc kubenswrapper[4681]: I1123 06:44:40.323739 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:40 crc kubenswrapper[4681]: I1123 06:44:40.323787 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:40 crc kubenswrapper[4681]: I1123 06:44:40.323796 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:41 crc kubenswrapper[4681]: I1123 06:44:41.337314 4681 trace.go:236] Trace[1085871234]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Nov-2025 06:44:26.463) (total time: 14873ms): Nov 23 06:44:41 crc kubenswrapper[4681]: Trace[1085871234]: ---"Objects listed" error: 14873ms (06:44:41.337) Nov 23 06:44:41 crc kubenswrapper[4681]: Trace[1085871234]: [14.873826703s] [14.873826703s] END Nov 23 06:44:41 crc kubenswrapper[4681]: I1123 06:44:41.337346 4681 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 23 06:44:41 crc kubenswrapper[4681]: I1123 06:44:41.337428 4681 trace.go:236] Trace[287804988]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Nov-2025 06:44:27.561) (total time: 13775ms): Nov 23 06:44:41 crc kubenswrapper[4681]: Trace[287804988]: ---"Objects listed" error: 13775ms (06:44:41.337) Nov 23 06:44:41 crc kubenswrapper[4681]: Trace[287804988]: [13.775641162s] [13.775641162s] END Nov 23 06:44:41 crc kubenswrapper[4681]: I1123 06:44:41.337448 4681 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 23 06:44:41 crc kubenswrapper[4681]: I1123 06:44:41.338856 4681 trace.go:236] Trace[37463648]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Nov-2025 06:44:26.395) (total time: 14943ms): Nov 23 06:44:41 crc kubenswrapper[4681]: Trace[37463648]: ---"Objects listed" error: 14943ms (06:44:41.338) Nov 23 06:44:41 crc kubenswrapper[4681]: Trace[37463648]: [14.943794259s] [14.943794259s] END Nov 23 06:44:41 crc kubenswrapper[4681]: I1123 06:44:41.338877 4681 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 23 06:44:41 crc kubenswrapper[4681]: I1123 06:44:41.338912 4681 trace.go:236] Trace[522946204]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Nov-2025 06:44:27.204) (total time: 14134ms): Nov 23 06:44:41 crc kubenswrapper[4681]: Trace[522946204]: ---"Objects listed" error: 14134ms (06:44:41.338) Nov 23 06:44:41 crc kubenswrapper[4681]: Trace[522946204]: [14.134235216s] [14.134235216s] END Nov 23 06:44:41 crc kubenswrapper[4681]: I1123 06:44:41.338931 4681 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 23 06:44:41 crc kubenswrapper[4681]: E1123 06:44:41.339932 4681 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 23 06:44:41 crc kubenswrapper[4681]: I1123 06:44:41.340973 4681 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 23 06:44:41 crc kubenswrapper[4681]: I1123 06:44:41.365326 4681 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe 
status=failure output="Get \"https://192.168.126.11:17697/healthz\": EOF" start-of-body= Nov 23 06:44:41 crc kubenswrapper[4681]: I1123 06:44:41.365373 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF" Nov 23 06:44:41 crc kubenswrapper[4681]: I1123 06:44:41.365416 4681 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": EOF" start-of-body= Nov 23 06:44:41 crc kubenswrapper[4681]: I1123 06:44:41.365489 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF" Nov 23 06:44:41 crc kubenswrapper[4681]: I1123 06:44:41.366138 4681 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:42740->192.168.126.11:17697: read: connection reset by peer" start-of-body= Nov 23 06:44:41 crc kubenswrapper[4681]: I1123 06:44:41.366169 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:42740->192.168.126.11:17697: read: connection reset by peer" Nov 23 06:44:41 crc kubenswrapper[4681]: I1123 06:44:41.408381 4681 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 23 06:44:41 crc kubenswrapper[4681]: I1123 06:44:41.408434 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.200119 4681 apiserver.go:52] "Watching apiserver" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.202775 4681 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.203005 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.203314 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.203480 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 06:44:42 crc kubenswrapper[4681]: E1123 06:44:42.203523 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.203539 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.203596 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 23 06:44:42 crc kubenswrapper[4681]: E1123 06:44:42.203631 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.203645 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.203755 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.205614 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.205744 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.205753 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.206836 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.206835 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.207052 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.206957 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.207269 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.208625 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.209641 4681 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.246501 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.246553 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.246572 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.246590 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.246606 4681 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.246623 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.246673 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.246689 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.246707 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.246726 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.246741 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.246760 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.246775 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.246800 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.246818 4681 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.246840 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.246854 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.246872 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.246891 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.246907 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.246921 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.246937 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.246959 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.246972 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.246985 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247009 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247023 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247040 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247056 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247072 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247088 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247104 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247121 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247152 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247169 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247185 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247200 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247229 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247246 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247260 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247274 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247290 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247303 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247316 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247332 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247347 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247379 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247393 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247413 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247426 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247441 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247454 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247481 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247497 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247511 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247525 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247553 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247574 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247588 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247604 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247618 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247633 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247648 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247665 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247680 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247695 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247710 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247727 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247741 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247754 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247769 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247782 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247798 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247819 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247835 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247849 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247865 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247880 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247894 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247908 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247921 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247935 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247950 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247964 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247981 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.247997 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248012 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248027 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248042 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248058 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248072 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248087 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248104 4681 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248122 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248144 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248159 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248174 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248189 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248208 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248234 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248252 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248268 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248284 4681 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248298 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248314 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248330 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248345 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248362 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248377 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248392 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248406 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248421 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248439 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248455 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248482 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248497 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248511 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248527 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248579 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248594 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248609 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248625 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248654 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248671 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248687 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248701 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248716 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248731 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248766 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248781 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248796 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 23 06:44:42 crc kubenswrapper[4681]: 
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248812 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248827 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248843 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248864 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248878 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248893 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248908 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248924 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248939 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248954 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248969 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.248984 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249003 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249019 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249034 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249050 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249066 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249083 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249098 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
\"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249131 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249148 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249165 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249181 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249198 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249213 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249239 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249256 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249274 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249290 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod 
\"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249306 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249321 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249337 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249354 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249370 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249387 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249403 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249419 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249435 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249450 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: 
\"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249477 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249453 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249496 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249745 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249774 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249863 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249885 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249904 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249920 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249937 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249959 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249975 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.249991 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.250007 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.250024 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.250045 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.250062 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.250080 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.250101 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " 
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.250124 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.250144 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.250164 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.250181 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.250251 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.250273 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.250294 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.250312 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.250320 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.250342 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.250362 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.250382 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.250414 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.250434 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.250453 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.250492 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.250517 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.250536 4681 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.250558 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.250656 4681 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.250660 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.251158 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.251282 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.251404 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.251596 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: E1123 06:44:42.251678 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-23 06:44:42.751661101 +0000 UTC m=+19.821170339 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.251822 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.251912 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.252071 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.252321 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.252340 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.252361 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.252541 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.252654 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.252754 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.252840 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.252908 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.252917 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.252932 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.253095 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.253152 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.253078 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.253365 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.253366 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.253519 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.253627 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.253689 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.253772 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.253867 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.253890 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.254037 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.254079 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.254237 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.254268 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.254595 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.254841 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.254903 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.256045 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.256500 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.256747 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.257005 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.257208 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.257624 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.257836 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.257867 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.258202 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.258312 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.258382 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.258696 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.258778 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.258825 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.258854 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.258906 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.259747 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.259058 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.259068 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.259084 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.259100 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.259135 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.259240 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.259411 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.259427 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.259441 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.259579 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.259619 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.259638 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.259659 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.259806 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.259898 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.259908 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.259952 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.259961 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.259970 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.260100 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.260101 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.260282 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.260370 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.260548 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.260562 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.260730 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.260837 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.261028 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.261130 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.261389 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.261523 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.261565 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.261863 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.262367 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). 
InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.262549 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.262671 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.262755 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.262797 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.262815 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.263234 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.263657 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.263758 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.263894 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.264071 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.264146 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.264424 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.264475 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.264707 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.264832 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.264909 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.265002 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.265125 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.265242 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.265584 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.265589 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.265653 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.265784 4681 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.265899 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.265926 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.265930 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.266171 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.266293 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.266326 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.263979 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.269120 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.269326 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.269847 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.269878 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.270032 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.270226 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.270393 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.270641 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.270507 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: E1123 06:44:42.270869 4681 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:44:42 crc kubenswrapper[4681]: E1123 06:44:42.270989 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:44:42.770955975 +0000 UTC m=+19.840465212 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.271187 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.271439 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.271532 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: E1123 06:44:42.272396 4681 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:44:42 crc kubenswrapper[4681]: E1123 06:44:42.272570 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:44:42.772551846 +0000 UTC m=+19.842061083 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.272689 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.273010 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.273097 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.273208 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.275734 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.275770 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.276526 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.275681 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.275811 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.276193 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.276280 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.276449 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.276510 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.276527 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.276940 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.278270 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.278558 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.280076 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.280205 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.281154 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:44:42 crc kubenswrapper[4681]: E1123 06:44:42.281541 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.282668 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.283116 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.283757 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.284026 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.284186 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.284429 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.284655 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.284684 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.284769 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.285240 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.285722 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.285916 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.286090 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: E1123 06:44:42.284175 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:44:42 crc kubenswrapper[4681]: E1123 06:44:42.286139 4681 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.286160 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: E1123 06:44:42.286236 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 06:44:42.786204612 +0000 UTC m=+19.855713849 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.286485 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.286562 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.287267 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.288115 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.288327 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.288499 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.288713 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.289022 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.289024 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.290376 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: E1123 06:44:42.292308 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:44:42 crc kubenswrapper[4681]: E1123 06:44:42.292393 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:44:42 crc kubenswrapper[4681]: E1123 06:44:42.292473 4681 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:44:42 crc kubenswrapper[4681]: E1123 06:44:42.292569 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 06:44:42.792554115 +0000 UTC m=+19.862063352 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.294701 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.295017 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.295238 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.298137 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.298314 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.298806 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.298811 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.299015 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.299061 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.299086 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.299354 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.299454 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.299588 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.300239 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.300383 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.300626 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.300795 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.300837 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.301075 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.301075 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.301498 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.301516 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.301711 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.301720 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.301795 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.302728 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.308243 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.309860 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.315516 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.316815 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.323224 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.325500 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.327154 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.328648 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.331369 4681 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6" exitCode=255 Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.331451 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6"} Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.334774 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.337909 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.345540 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.345656 4681 scope.go:117] "RemoveContainer" containerID="f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.346126 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351370 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351433 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351519 4681 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351539 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 
23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351549 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351559 4681 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351568 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351577 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351586 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351595 4681 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351604 4681 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351613 4681 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351623 4681 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351633 4681 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351645 4681 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351653 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351662 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351671 4681 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351679 4681 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351687 4681 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351695 4681 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351704 4681 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351713 4681 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351722 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351730 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351738 4681 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351746 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351754 4681 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351764 4681 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351773 4681 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351781 4681 
reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351804 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351812 4681 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351826 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351835 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351842 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351851 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351859 4681 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351867 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351874 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351883 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351892 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351906 4681 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 
06:44:42.351914 4681 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351922 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351931 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351939 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351948 4681 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351955 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351963 4681 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351971 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351978 4681 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351988 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.351997 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352005 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352014 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" 
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352023 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352031 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352039 4681 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352047 4681 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352054 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352065 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352072 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352081 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352094 4681 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352102 4681 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352111 4681 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352126 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352134 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352142 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352160 4681 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352168 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352176 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352183 4681 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352191 4681 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352520 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352565 4681 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352576 4681 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352585 4681 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352626 4681 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352635 4681 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352645 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352654 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352664 4681 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352674 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352683 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352692 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352701 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352711 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352726 4681 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352734 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352742 4681 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352751 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352760 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352768 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352776 4681 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352785 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352793 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352802 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352810 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352818 4681 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352826 4681 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352835 4681 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352843 4681 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352852 4681 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352860 4681 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352868 4681 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352876 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352884 4681 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352892 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352900 4681 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352908 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352918 4681 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352926 4681 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352934 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352955 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352965 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352974 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352983 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.352992 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353000 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353009 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353018 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353028 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353036 4681 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353045 4681 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353053 4681 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353062 4681 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353073 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353082 4681 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353093 4681 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353250 4681 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353414 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353508 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353512 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353523 4681 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353555 4681 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353574 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353588 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353598 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353607 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353678 4681 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353697 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353709 4681 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353722 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353734 4681 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353746 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353759 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353770 4681 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353783 4681 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353796 4681 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353807 4681 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353819 4681 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353832 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353845 4681 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353855 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353869 4681 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353882 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353894 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353905 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353915 4681 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353925 4681 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353937 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353951 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353963 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353974 4681 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353985 4681 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.353994 4681 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.354005 4681 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.354016 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.354027 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.354039 4681 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\""
Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.354050 4681 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName:
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.354061 4681 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.354074 4681 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.354087 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.354100 4681 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.354112 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.354126 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.354136 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.354147 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.354161 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.354174 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.354185 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.354197 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.354209 4681 reconciler_common.go:293] "Volume detached for 
volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.354234 4681 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.354246 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.354257 4681 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.354267 4681 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.354278 4681 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.354289 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.354299 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.354309 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.354323 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.354334 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.354345 4681 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.354699 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.361890 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.369630 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.377070 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.384240 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.515211 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.520553 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.525395 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 06:44:42 crc kubenswrapper[4681]: W1123 06:44:42.528527 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-1d499aeabe6db1e902d6a0da4b42c2dd53c72f18d26f9c2a7f5ad51eadc8c303 WatchSource:0}: Error finding container 1d499aeabe6db1e902d6a0da4b42c2dd53c72f18d26f9c2a7f5ad51eadc8c303: Status 404 returned error can't find the container with id 1d499aeabe6db1e902d6a0da4b42c2dd53c72f18d26f9c2a7f5ad51eadc8c303 Nov 23 06:44:42 crc kubenswrapper[4681]: W1123 06:44:42.532592 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-9ce7f5e575df071ff22054a6d7888d60cf95aa0a4aba3e9e73604155b1388d9c WatchSource:0}: Error finding container 9ce7f5e575df071ff22054a6d7888d60cf95aa0a4aba3e9e73604155b1388d9c: Status 404 returned error can't find the container with id 9ce7f5e575df071ff22054a6d7888d60cf95aa0a4aba3e9e73604155b1388d9c Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.756981 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:44:42 crc kubenswrapper[4681]: E1123 06:44:42.757226 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:44:43.757163833 +0000 UTC m=+20.826673070 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.857800 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.857849 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.857874 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:44:42 crc kubenswrapper[4681]: I1123 06:44:42.857896 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:44:42 crc kubenswrapper[4681]: E1123 06:44:42.857977 4681 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:44:42 crc kubenswrapper[4681]: E1123 06:44:42.858041 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:44:43.858025815 +0000 UTC m=+20.927535051 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:44:42 crc kubenswrapper[4681]: E1123 06:44:42.858046 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:44:42 crc kubenswrapper[4681]: E1123 06:44:42.858085 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:44:42 crc kubenswrapper[4681]: E1123 06:44:42.858101 4681 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:44:42 crc kubenswrapper[4681]: E1123 06:44:42.858142 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:44:42 crc kubenswrapper[4681]: E1123 06:44:42.858165 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 06:44:43.858148976 +0000 UTC m=+20.927658213 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:44:42 crc kubenswrapper[4681]: E1123 06:44:42.858170 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:44:42 crc kubenswrapper[4681]: E1123 06:44:42.858052 4681 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:44:42 crc kubenswrapper[4681]: E1123 06:44:42.858190 4681 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:44:42 crc kubenswrapper[4681]: E1123 06:44:42.858205 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:44:43.858199711 +0000 UTC m=+20.927708948 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:44:42 crc kubenswrapper[4681]: E1123 06:44:42.858264 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 06:44:43.858247691 +0000 UTC m=+20.927756928 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.255815 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.256517 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.258450 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.263416 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.264067 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.265280 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.265951 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.266520 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.267592 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.268111 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.268989 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.269061 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.269879 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.271036 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.271653 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 
06:44:43.272664 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.273223 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.274195 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.274630 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.275388 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.276502 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.276975 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.277582 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.278564 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.279230 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.280057 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.280705 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.281702 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.282188 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.283240 4681 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.283786 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.284264 4681 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.284378 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.286443 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.286970 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.287889 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.288483 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.289572 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.290268 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.292226 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.292894 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.293995 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.294540 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.295657 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.296295 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.297399 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.297991 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.298227 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.299105 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.299808 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.300993 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.301513 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.302529 4681 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.303048 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.303964 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.304670 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.305170 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.314047 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.335413 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.336944 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d"} Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.337227 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.338764 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c"} Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.338799 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b"} Nov 23 06:44:43 crc 
kubenswrapper[4681]: I1123 06:44:43.338810 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4288a9d94dcd97018baa3ebdcb1997d329bea78cc5bb6179fbe08abf75beb179"} Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.339571 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"9ce7f5e575df071ff22054a6d7888d60cf95aa0a4aba3e9e73604155b1388d9c"} Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.340768 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08"} Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.340808 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"1d499aeabe6db1e902d6a0da4b42c2dd53c72f18d26f9c2a7f5ad51eadc8c303"} Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.341713 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.351065 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.357923 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.365869 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.373148 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.380296 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.389927 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.401932 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] 
issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.411701 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.420184 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.766382 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:44:43 crc kubenswrapper[4681]: E1123 06:44:43.766639 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:44:45.766600636 +0000 UTC m=+22.836109874 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.867868 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:44:43 crc kubenswrapper[4681]: E1123 06:44:43.868055 4681 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.868069 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:44:43 crc kubenswrapper[4681]: E1123 06:44:43.868159 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:44:45.86813572 +0000 UTC m=+22.937644956 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.868317 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:44:43 crc kubenswrapper[4681]: I1123 06:44:43.868351 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:44:43 crc kubenswrapper[4681]: E1123 06:44:43.868494 4681 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:44:43 crc kubenswrapper[4681]: E1123 06:44:43.868527 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:44:43 crc kubenswrapper[4681]: E1123 06:44:43.868655 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:44:43 crc kubenswrapper[4681]: E1123 06:44:43.868677 4681 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:44:43 crc kubenswrapper[4681]: E1123 06:44:43.868616 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:44:45.868599028 +0000 UTC m=+22.938108265 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:44:43 crc kubenswrapper[4681]: E1123 06:44:43.868740 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-11-23 06:44:45.86872842 +0000 UTC m=+22.938237658 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:44:43 crc kubenswrapper[4681]: E1123 06:44:43.869213 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:44:43 crc kubenswrapper[4681]: E1123 06:44:43.869256 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:44:43 crc kubenswrapper[4681]: E1123 06:44:43.869274 4681 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:44:43 crc kubenswrapper[4681]: E1123 06:44:43.869347 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 06:44:45.869331311 +0000 UTC m=+22.938840548 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.250787 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.250935 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.251068 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:44:44 crc kubenswrapper[4681]: E1123 06:44:44.251068 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:44:44 crc kubenswrapper[4681]: E1123 06:44:44.251189 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:44:44 crc kubenswrapper[4681]: E1123 06:44:44.251274 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.540491 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.541899 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.541929 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.541937 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.541991 4681 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.548329 4681 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.548590 4681 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.549500 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.549529 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.549537 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.549550 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.549560 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:44Z","lastTransitionTime":"2025-11-23T06:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:44 crc kubenswrapper[4681]: E1123 06:44:44.565288 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.571527 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.571585 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.571596 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.571618 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.571630 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:44Z","lastTransitionTime":"2025-11-23T06:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:44 crc kubenswrapper[4681]: E1123 06:44:44.583315 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.586300 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.586347 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
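The webhook failure above is a plain TLS problem: the serving certificate for node.network-node-identity.openshift.io on 127.0.0.1:9743 expired on 2025-08-24, while the node's clock reads 2025-11-23. A minimal Go sketch for confirming this from the node, assuming only that the webhook port is reachable (the address and port are taken from the log; nothing here is the kubelet's own code):

    // certcheck.go - report the validity window of the TLS certificate
    // served on the webhook port seen in the log.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"log"
    	"time"
    )

    func main() {
    	// InsecureSkipVerify lets us complete the handshake and inspect an
    	// expired certificate instead of failing the way the kubelet does.
    	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	cert := conn.ConnectionState().PeerCertificates[0]
    	fmt.Printf("subject:   %s\n", cert.Subject)
    	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
    	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
    	if now := time.Now(); now.After(cert.NotAfter) {
    		// Mirrors the log: "x509: certificate has expired or is not yet valid".
    		fmt.Printf("EXPIRED: current time %s is after %s\n",
    			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
    	}
    }

If the same certificate is still being served, the notAfter printed should match the 2025-08-24T17:21:41Z date embedded in the error string.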
event="NodeHasNoDiskPressure" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.586357 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.586375 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.586385 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:44Z","lastTransitionTime":"2025-11-23T06:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:44 crc kubenswrapper[4681]: E1123 06:44:44.597132 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.599714 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.599747 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
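For readability, the payload the kubelet keeps resending is an ordinary JSON strategic merge patch over .status: it fixes the order of the four conditions via $setElementOrder/conditions, then updates allocatable, capacity, conditions, images, and nodeInfo. A Go sketch of its shape, with field names copied from the log and values abbreviated (illustrative only, not the kubelet's own patch construction):

    // patchsketch.go - shape of the node-status patch from the log.
    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    type condition struct {
    	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
    	LastTransitionTime string `json:"lastTransitionTime"`
    	Message            string `json:"message"`
    	Reason             string `json:"reason"`
    	Status             string `json:"status"`
    	Type               string `json:"type"`
    }

    func main() {
    	patch := map[string]any{
    		"status": map[string]any{
    			// Strategic-merge directive: keep the conditions in this order.
    			"$setElementOrder/conditions": []map[string]string{
    				{"type": "MemoryPressure"}, {"type": "DiskPressure"},
    				{"type": "PIDPressure"}, {"type": "Ready"},
    			},
    			"allocatable": map[string]string{"cpu": "7800m", "memory": "24148068Ki"},
    			"capacity":    map[string]string{"cpu": "8", "memory": "24608868Ki"},
    			"conditions": []condition{{
    				LastHeartbeatTime:  "2025-11-23T06:44:44Z",
    				LastTransitionTime: "2025-11-23T06:44:44Z",
    				Message:            "container runtime network not ready: ...",
    				Reason:             "KubeletNotReady",
    				Status:             "False",
    				Type:               "Ready",
    			}},
    			// The real payload also carries the full images list and nodeInfo.
    		},
    	}
    	b, _ := json.MarshalIndent(patch, "", "  ")
    	fmt.Println(string(b))
    }

The images list is what makes each logged attempt several kilobytes long; the part that actually changes between attempts is only the heartbeat timestamps.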
event="NodeHasNoDiskPressure" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.599757 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.599774 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.599783 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:44Z","lastTransitionTime":"2025-11-23T06:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:44 crc kubenswrapper[4681]: E1123 06:44:44.608612 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.611317 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.611346 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
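At this point the kubelet has burned four of its attempts; the next entry is the last before it gives up. The pattern in the log, five "Error updating node status, will retry" entries followed by "update node status exceeds retry count", is a simple bounded retry. A Go sketch of that control flow, inferred from the log rather than taken from the kubelet source:

    // retrysketch.go - bounded retry matching the pattern in the log:
    // five "will retry" entries, then "exceeds retry count".
    package main

    import (
    	"errors"
    	"fmt"
    )

    const nodeStatusUpdateRetry = 5 // matches the five attempts visible above

    func tryUpdateNodeStatus() error {
    	// Stand-in for the PATCH that the webhook rejects with the expired cert.
    	return errors.New("failed calling webhook: x509: certificate has expired")
    }

    func updateNodeStatus() error {
    	for i := 0; i < nodeStatusUpdateRetry; i++ {
    		if err := tryUpdateNodeStatus(); err != nil {
    			fmt.Printf("Error updating node status, will retry: %v\n", err)
    			continue
    		}
    		return nil
    	}
    	return errors.New("update node status exceeds retry count")
    }

    func main() {
    	if err := updateNodeStatus(); err != nil {
    		fmt.Println(err)
    	}
    }

Because every attempt hits the same expired certificate, the loop can never succeed until the webhook's certificate is rotated or the node's clock agrees with its validity window.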
event="NodeHasNoDiskPressure" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.611354 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.611365 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.611376 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:44Z","lastTransitionTime":"2025-11-23T06:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:44 crc kubenswrapper[4681]: E1123 06:44:44.619808 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:44 crc kubenswrapper[4681]: E1123 06:44:44.619913 4681 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.620961 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.620981 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.620989 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.621002 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.621010 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:44Z","lastTransitionTime":"2025-11-23T06:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.723794 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.723847 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.723856 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.723876 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.723888 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:44Z","lastTransitionTime":"2025-11-23T06:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.825612 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.825650 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.825659 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.825676 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.825685 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:44Z","lastTransitionTime":"2025-11-23T06:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.928033 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.928086 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.928098 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.928119 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:44 crc kubenswrapper[4681]: I1123 06:44:44.928135 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:44Z","lastTransitionTime":"2025-11-23T06:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.030922 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.030965 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.030976 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.030996 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.031009 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:45Z","lastTransitionTime":"2025-11-23T06:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.133129 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.133181 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.133201 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.133220 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.133232 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:45Z","lastTransitionTime":"2025-11-23T06:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.236034 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.236068 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.236080 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.236100 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.236113 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:45Z","lastTransitionTime":"2025-11-23T06:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.338097 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.338148 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.338160 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.338177 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.338201 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:45Z","lastTransitionTime":"2025-11-23T06:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.346153 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923"} Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.376715 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.391245 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.403346 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.413991 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] 
issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.423620 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.433389 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.440404 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.440444 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.440455 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.440488 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.440501 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:45Z","lastTransitionTime":"2025-11-23T06:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.442789 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.543077 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.543125 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.543137 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.543157 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.543170 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:45Z","lastTransitionTime":"2025-11-23T06:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.644951 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.644990 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.644998 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.645012 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.645022 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:45Z","lastTransitionTime":"2025-11-23T06:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.747744 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.747790 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.747800 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.747817 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.747827 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:45Z","lastTransitionTime":"2025-11-23T06:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.784029 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:44:45 crc kubenswrapper[4681]: E1123 06:44:45.784163 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:44:49.784146754 +0000 UTC m=+26.853655991 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.849604 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.849664 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.849674 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.849691 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.849704 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:45Z","lastTransitionTime":"2025-11-23T06:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.884994 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.885041 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.885068 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.885088 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:44:45 crc kubenswrapper[4681]: E1123 06:44:45.885215 4681 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:44:45 crc kubenswrapper[4681]: E1123 06:44:45.885226 4681 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:44:45 crc kubenswrapper[4681]: E1123 06:44:45.885249 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:44:45 crc kubenswrapper[4681]: E1123 06:44:45.885283 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:44:45 crc kubenswrapper[4681]: E1123 06:44:45.885296 4681 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:44:45 crc kubenswrapper[4681]: E1123 06:44:45.885305 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:44:49.88528847 +0000 UTC m=+26.954797708 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:44:45 crc kubenswrapper[4681]: E1123 06:44:45.885346 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 06:44:49.885331521 +0000 UTC m=+26.954840758 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:44:45 crc kubenswrapper[4681]: E1123 06:44:45.885391 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:44:45 crc kubenswrapper[4681]: E1123 06:44:45.885432 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:44:45 crc kubenswrapper[4681]: E1123 06:44:45.885451 4681 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:44:45 crc kubenswrapper[4681]: E1123 06:44:45.885554 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 06:44:49.885530294 +0000 UTC m=+26.955039541 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:44:45 crc kubenswrapper[4681]: E1123 06:44:45.885710 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:44:49.885689282 +0000 UTC m=+26.955198519 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.952024 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.952060 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.952069 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.952084 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:44:45 crc kubenswrapper[4681]: I1123 06:44:45.952093 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:45Z","lastTransitionTime":"2025-11-23T06:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.054486 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.054713 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.054781 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.054844 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.054896 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:46Z","lastTransitionTime":"2025-11-23T06:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.157162 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.157368 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.157432 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.157527 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.157604 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:46Z","lastTransitionTime":"2025-11-23T06:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.251386 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.251414 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 06:44:46 crc kubenswrapper[4681]: E1123 06:44:46.251583 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.251675 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:44:46 crc kubenswrapper[4681]: E1123 06:44:46.251739 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 06:44:46 crc kubenswrapper[4681]: E1123 06:44:46.251903 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.260370 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.260405 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.260418 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.260433 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.260444 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:46Z","lastTransitionTime":"2025-11-23T06:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.362422 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.362473 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.362483 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.362496 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.362506 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:46Z","lastTransitionTime":"2025-11-23T06:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.464711 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.464764 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.464777 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.464793 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.464808 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:46Z","lastTransitionTime":"2025-11-23T06:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.566628 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.566671 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.566681 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.566698 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.566709 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:46Z","lastTransitionTime":"2025-11-23T06:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.669387 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.669432 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.669444 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.669481 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.669492 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:46Z","lastTransitionTime":"2025-11-23T06:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.771985 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.772041 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.772050 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.772069 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.772081 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:46Z","lastTransitionTime":"2025-11-23T06:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.873929 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.873976 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.873987 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.874003 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.874015 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:46Z","lastTransitionTime":"2025-11-23T06:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.976065 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.976095 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.976104 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.976117 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:44:46 crc kubenswrapper[4681]: I1123 06:44:46.976128 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:46Z","lastTransitionTime":"2025-11-23T06:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.077840 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.077865 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.077873 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.077884 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.077892 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:47Z","lastTransitionTime":"2025-11-23T06:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.179636 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.179668 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.179677 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.179688 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.179697 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:47Z","lastTransitionTime":"2025-11-23T06:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.281582 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.281606 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.281631 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.281644 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.281653 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:47Z","lastTransitionTime":"2025-11-23T06:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.383608 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.383641 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.383652 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.383665 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.383674 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:47Z","lastTransitionTime":"2025-11-23T06:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.485848 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.485895 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.485906 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.485923 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.485934 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:47Z","lastTransitionTime":"2025-11-23T06:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.588079 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.588115 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.588124 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.588140 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.588149 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:47Z","lastTransitionTime":"2025-11-23T06:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.691110 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.691140 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.691149 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.691178 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.691190 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:47Z","lastTransitionTime":"2025-11-23T06:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.793944 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.793975 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.793984 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.793999 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.794007 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:47Z","lastTransitionTime":"2025-11-23T06:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.821512 4681 csr.go:261] certificate signing request csr-2c2n5 is approved, waiting to be issued
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.836328 4681 csr.go:257] certificate signing request csr-2c2n5 is issued
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.896863 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.896909 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.896919 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.896941 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.896954 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:47Z","lastTransitionTime":"2025-11-23T06:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.999452 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.999522 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.999532 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.999555 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:44:47 crc kubenswrapper[4681]: I1123 06:44:47.999567 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:47Z","lastTransitionTime":"2025-11-23T06:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.101692 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.101748 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.101758 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.101775 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.101785 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:48Z","lastTransitionTime":"2025-11-23T06:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.204679 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.204723 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.204735 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.204758 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.204770 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:48Z","lastTransitionTime":"2025-11-23T06:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.251355 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.251395 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.251495 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 06:44:48 crc kubenswrapper[4681]: E1123 06:44:48.251526 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 23 06:44:48 crc kubenswrapper[4681]: E1123 06:44:48.251640 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 06:44:48 crc kubenswrapper[4681]: E1123 06:44:48.251717 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.307228 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.307265 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.307275 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.307291 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.307302 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:48Z","lastTransitionTime":"2025-11-23T06:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.409570 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.409609 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.409619 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.409636 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.409647 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:48Z","lastTransitionTime":"2025-11-23T06:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.411889 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.416572 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.425810 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.447543 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.464820 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.503363 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.511251 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.511278 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.511286 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.511300 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.511309 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:48Z","lastTransitionTime":"2025-11-23T06:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.516424 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.519383 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:48 crc kubenswrapper[4681]: 
I1123 06:44:48.534813 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.548341 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] 
issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.563805 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.572351 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.587349 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.598197 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.610927 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.613557 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.613593 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.613604 4681 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.613619 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.613630 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:48Z","lastTransitionTime":"2025-11-23T06:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.639297 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-wh4gt"] Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.639689 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.640783 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-l7wvz"] Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.641394 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-l7wvz" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.641549 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:48 crc kubenswrapper[4681]: W1123 06:44:48.643650 4681 reflector.go:561] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": failed to list *v1.Secret: secrets "node-resolver-dockercfg-kz9s7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Nov 23 06:44:48 crc kubenswrapper[4681]: E1123 06:44:48.643697 4681 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"node-resolver-dockercfg-kz9s7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-resolver-dockercfg-kz9s7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 23 06:44:48 crc kubenswrapper[4681]: W1123 06:44:48.643788 4681 reflector.go:561] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Nov 23 06:44:48 crc kubenswrapper[4681]: E1123 06:44:48.643810 4681 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 23 06:44:48 crc kubenswrapper[4681]: W1123 06:44:48.643794 4681 reflector.go:561] object-"openshift-dns"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Nov 23 06:44:48 crc kubenswrapper[4681]: E1123 06:44:48.643833 4681 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" 
logger="UnhandledError" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.643849 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.643943 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.643999 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.644229 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.644418 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.663173 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.681130 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.698013 4681 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"539dc58c-e752-43c8-bdef-af87528b76f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wh4gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.710682 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpnbz\" (UniqueName: 
\"kubernetes.io/projected/539dc58c-e752-43c8-bdef-af87528b76f3-kube-api-access-jpnbz\") pod \"machine-config-daemon-wh4gt\" (UID: \"539dc58c-e752-43c8-bdef-af87528b76f3\") " pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.710713 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrq5v\" (UniqueName: \"kubernetes.io/projected/095e645f-7b07-4702-87f0-f3b9a6197d9f-kube-api-access-nrq5v\") pod \"node-resolver-l7wvz\" (UID: \"095e645f-7b07-4702-87f0-f3b9a6197d9f\") " pod="openshift-dns/node-resolver-l7wvz" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.710739 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/539dc58c-e752-43c8-bdef-af87528b76f3-rootfs\") pod \"machine-config-daemon-wh4gt\" (UID: \"539dc58c-e752-43c8-bdef-af87528b76f3\") " pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.710766 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/095e645f-7b07-4702-87f0-f3b9a6197d9f-hosts-file\") pod \"node-resolver-l7wvz\" (UID: \"095e645f-7b07-4702-87f0-f3b9a6197d9f\") " pod="openshift-dns/node-resolver-l7wvz" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.710882 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/539dc58c-e752-43c8-bdef-af87528b76f3-proxy-tls\") pod \"machine-config-daemon-wh4gt\" (UID: \"539dc58c-e752-43c8-bdef-af87528b76f3\") " pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.710921 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/539dc58c-e752-43c8-bdef-af87528b76f3-mcd-auth-proxy-config\") pod \"machine-config-daemon-wh4gt\" (UID: \"539dc58c-e752-43c8-bdef-af87528b76f3\") " pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.716229 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.716260 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.716270 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.716284 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.716292 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:48Z","lastTransitionTime":"2025-11-23T06:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.716864 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l7wvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095e645f-7b07-4702-87f0-f3b9a6197d9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrq5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l7wvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.732245 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.753558 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.761948 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.771691 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.782342 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.792848 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] 
issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.801991 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.811233 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpnbz\" (UniqueName: \"kubernetes.io/projected/539dc58c-e752-43c8-bdef-af87528b76f3-kube-api-access-jpnbz\") pod \"machine-config-daemon-wh4gt\" (UID: \"539dc58c-e752-43c8-bdef-af87528b76f3\") " pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.811270 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrq5v\" (UniqueName: \"kubernetes.io/projected/095e645f-7b07-4702-87f0-f3b9a6197d9f-kube-api-access-nrq5v\") pod \"node-resolver-l7wvz\" (UID: \"095e645f-7b07-4702-87f0-f3b9a6197d9f\") " pod="openshift-dns/node-resolver-l7wvz" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.811303 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/539dc58c-e752-43c8-bdef-af87528b76f3-rootfs\") pod \"machine-config-daemon-wh4gt\" (UID: \"539dc58c-e752-43c8-bdef-af87528b76f3\") " pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.811322 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/095e645f-7b07-4702-87f0-f3b9a6197d9f-hosts-file\") pod \"node-resolver-l7wvz\" (UID: \"095e645f-7b07-4702-87f0-f3b9a6197d9f\") " pod="openshift-dns/node-resolver-l7wvz" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.811336 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/539dc58c-e752-43c8-bdef-af87528b76f3-mcd-auth-proxy-config\") pod \"machine-config-daemon-wh4gt\" (UID: \"539dc58c-e752-43c8-bdef-af87528b76f3\") " pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.811357 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/539dc58c-e752-43c8-bdef-af87528b76f3-proxy-tls\") pod \"machine-config-daemon-wh4gt\" (UID: \"539dc58c-e752-43c8-bdef-af87528b76f3\") " pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.811606 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/539dc58c-e752-43c8-bdef-af87528b76f3-rootfs\") pod \"machine-config-daemon-wh4gt\" (UID: \"539dc58c-e752-43c8-bdef-af87528b76f3\") " pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.811626 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/095e645f-7b07-4702-87f0-f3b9a6197d9f-hosts-file\") pod \"node-resolver-l7wvz\" (UID: \"095e645f-7b07-4702-87f0-f3b9a6197d9f\") " pod="openshift-dns/node-resolver-l7wvz" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.811675 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.812201 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/539dc58c-e752-43c8-bdef-af87528b76f3-mcd-auth-proxy-config\") pod \"machine-config-daemon-wh4gt\" (UID: \"539dc58c-e752-43c8-bdef-af87528b76f3\") " pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.816233 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/539dc58c-e752-43c8-bdef-af87528b76f3-proxy-tls\") pod \"machine-config-daemon-wh4gt\" (UID: \"539dc58c-e752-43c8-bdef-af87528b76f3\") " pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.819267 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.819299 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.819310 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.819349 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.819361 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:48Z","lastTransitionTime":"2025-11-23T06:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.837787 4681 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-11-23 06:39:47 +0000 UTC, rotation deadline is 2026-09-02 09:45:44.699965906 +0000 UTC Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.837832 4681 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6795h0m55.862137186s for next certificate rotation Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.920938 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.920988 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.921000 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.921018 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:48 crc kubenswrapper[4681]: I1123 06:44:48.921028 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:48Z","lastTransitionTime":"2025-11-23T06:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.022290 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-l6bqb"] Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.023641 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.028595 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.028664 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.028934 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.028977 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.029085 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.029292 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-2lhx5"] Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.029379 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.029418 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.029432 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.029456 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.029503 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:49Z","lastTransitionTime":"2025-11-23T06:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.032448 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-qgr2n"] Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.032594 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.032835 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.033324 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.033444 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.035719 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.035915 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.036262 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.039556 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.039697 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.039942 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.040732 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.043980 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:49Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.052244 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"539dc58c-e752-43c8-bdef-af87528b76f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wh4gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:49Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.061135 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:49Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.070344 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:49Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.083508 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abfb530-b7ac-4724-8e43-d87ef92f1949\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l6bqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:49Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.091279 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:49Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.098640 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l7wvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095e645f-7b07-4702-87f0-f3b9a6197d9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrq5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l7wvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:49Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.107508 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] 
issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:49Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.113392 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1abfb530-b7ac-4724-8e43-d87ef92f1949-ovn-node-metrics-cert\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.113422 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-cnibin\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.113441 4681 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-node-log\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.113474 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-etc-kubernetes\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.113494 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-cni-netd\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.113510 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/83e4c166-3ace-4773-86cd-fe2bdd216426-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qgr2n\" (UID: \"83e4c166-3ace-4773-86cd-fe2bdd216426\") " pod="openshift-multus/multus-additional-cni-plugins-qgr2n" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.113525 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-host-run-k8s-cni-cncf-io\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.113551 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-run-openvswitch\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.113567 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-cni-bin\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.113583 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-cni-binary-copy\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.113599 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-host-var-lib-cni-multus\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 
06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.113617 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-multus-conf-dir\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.113642 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-run-netns\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.113658 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1abfb530-b7ac-4724-8e43-d87ef92f1949-ovnkube-config\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.113672 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-run-systemd\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.113688 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1abfb530-b7ac-4724-8e43-d87ef92f1949-env-overrides\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.113705 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn4c7\" (UniqueName: \"kubernetes.io/projected/83e4c166-3ace-4773-86cd-fe2bdd216426-kube-api-access-rn4c7\") pod \"multus-additional-cni-plugins-qgr2n\" (UID: \"83e4c166-3ace-4773-86cd-fe2bdd216426\") " pod="openshift-multus/multus-additional-cni-plugins-qgr2n" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.113721 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-multus-daemon-config\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.113737 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/83e4c166-3ace-4773-86cd-fe2bdd216426-cnibin\") pod \"multus-additional-cni-plugins-qgr2n\" (UID: \"83e4c166-3ace-4773-86cd-fe2bdd216426\") " pod="openshift-multus/multus-additional-cni-plugins-qgr2n" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.113754 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/83e4c166-3ace-4773-86cd-fe2bdd216426-cni-binary-copy\") pod \"multus-additional-cni-plugins-qgr2n\" 
(UID: \"83e4c166-3ace-4773-86cd-fe2bdd216426\") " pod="openshift-multus/multus-additional-cni-plugins-qgr2n" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.113767 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-os-release\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.113782 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-slash\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.113799 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-run-ovn\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.113813 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcbfd\" (UniqueName: \"kubernetes.io/projected/1abfb530-b7ac-4724-8e43-d87ef92f1949-kube-api-access-vcbfd\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.113826 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-host-run-multus-certs\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.113842 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8k44\" (UniqueName: \"kubernetes.io/projected/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-kube-api-access-d8k44\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.113879 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-kubelet\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.113892 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-etc-openvswitch\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.113908 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-log-socket\") pod 
\"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.113921 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/83e4c166-3ace-4773-86cd-fe2bdd216426-os-release\") pod \"multus-additional-cni-plugins-qgr2n\" (UID: \"83e4c166-3ace-4773-86cd-fe2bdd216426\") " pod="openshift-multus/multus-additional-cni-plugins-qgr2n" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.113945 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-var-lib-openvswitch\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.113960 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-multus-socket-dir-parent\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.113974 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-host-run-netns\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.113988 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-host-var-lib-cni-bin\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.114001 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/83e4c166-3ace-4773-86cd-fe2bdd216426-system-cni-dir\") pod \"multus-additional-cni-plugins-qgr2n\" (UID: \"83e4c166-3ace-4773-86cd-fe2bdd216426\") " pod="openshift-multus/multus-additional-cni-plugins-qgr2n" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.114016 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1abfb530-b7ac-4724-8e43-d87ef92f1949-ovnkube-script-lib\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.114030 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/83e4c166-3ace-4773-86cd-fe2bdd216426-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qgr2n\" (UID: \"83e4c166-3ace-4773-86cd-fe2bdd216426\") " pod="openshift-multus/multus-additional-cni-plugins-qgr2n" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.114043 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-hostroot\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.114065 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-run-ovn-kubernetes\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.114079 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-system-cni-dir\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.114096 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-multus-cni-dir\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.114111 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-host-var-lib-kubelet\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.114124 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-systemd-units\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.114139 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.116318 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:49Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.124995 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:49Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.134350 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.134390 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.134404 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.134424 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.134435 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:49Z","lastTransitionTime":"2025-11-23T06:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.135291 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:49Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.144178 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:49Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.151836 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l7wvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095e645f-7b07-4702-87f0-f3b9a6197d9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrq5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l7wvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:49Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.164698 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] 
issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:49Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.174629 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:49Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.189005 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:49Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.213488 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:49Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.214731 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-run-netns\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.214768 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1abfb530-b7ac-4724-8e43-d87ef92f1949-ovnkube-config\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.214807 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-multus-daemon-config\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.214859 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-run-systemd\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.214883 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1abfb530-b7ac-4724-8e43-d87ef92f1949-env-overrides\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.215837 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn4c7\" (UniqueName: \"kubernetes.io/projected/83e4c166-3ace-4773-86cd-fe2bdd216426-kube-api-access-rn4c7\") pod \"multus-additional-cni-plugins-qgr2n\" (UID: \"83e4c166-3ace-4773-86cd-fe2bdd216426\") " pod="openshift-multus/multus-additional-cni-plugins-qgr2n" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.215871 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-slash\") pod \"ovnkube-node-l6bqb\" (UID: 
\"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.215896 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/83e4c166-3ace-4773-86cd-fe2bdd216426-cnibin\") pod \"multus-additional-cni-plugins-qgr2n\" (UID: \"83e4c166-3ace-4773-86cd-fe2bdd216426\") " pod="openshift-multus/multus-additional-cni-plugins-qgr2n" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.215921 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/83e4c166-3ace-4773-86cd-fe2bdd216426-cni-binary-copy\") pod \"multus-additional-cni-plugins-qgr2n\" (UID: \"83e4c166-3ace-4773-86cd-fe2bdd216426\") " pod="openshift-multus/multus-additional-cni-plugins-qgr2n" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.215944 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-os-release\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.215964 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8k44\" (UniqueName: \"kubernetes.io/projected/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-kube-api-access-d8k44\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216009 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-run-ovn\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216031 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcbfd\" (UniqueName: \"kubernetes.io/projected/1abfb530-b7ac-4724-8e43-d87ef92f1949-kube-api-access-vcbfd\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216052 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-host-run-multus-certs\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216071 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/83e4c166-3ace-4773-86cd-fe2bdd216426-os-release\") pod \"multus-additional-cni-plugins-qgr2n\" (UID: \"83e4c166-3ace-4773-86cd-fe2bdd216426\") " pod="openshift-multus/multus-additional-cni-plugins-qgr2n" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216103 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-kubelet\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216125 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-etc-openvswitch\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216146 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-log-socket\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216190 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-var-lib-openvswitch\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216212 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-multus-socket-dir-parent\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216232 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-host-run-netns\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216248 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1abfb530-b7ac-4724-8e43-d87ef92f1949-ovnkube-config\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216294 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-host-var-lib-cni-bin\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216258 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-host-var-lib-cni-bin\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.214905 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-run-netns\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216357 4681 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1abfb530-b7ac-4724-8e43-d87ef92f1949-ovnkube-script-lib\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216388 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/83e4c166-3ace-4773-86cd-fe2bdd216426-system-cni-dir\") pod \"multus-additional-cni-plugins-qgr2n\" (UID: \"83e4c166-3ace-4773-86cd-fe2bdd216426\") " pod="openshift-multus/multus-additional-cni-plugins-qgr2n" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216409 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/83e4c166-3ace-4773-86cd-fe2bdd216426-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qgr2n\" (UID: \"83e4c166-3ace-4773-86cd-fe2bdd216426\") " pod="openshift-multus/multus-additional-cni-plugins-qgr2n" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216426 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-hostroot\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216477 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-run-ovn-kubernetes\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216497 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-systemd-units\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216515 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216533 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-system-cni-dir\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216550 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-multus-cni-dir\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216569 4681 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-host-var-lib-kubelet\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216591 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1abfb530-b7ac-4724-8e43-d87ef92f1949-ovn-node-metrics-cert\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216614 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-cnibin\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216621 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-slash\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.215545 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1abfb530-b7ac-4724-8e43-d87ef92f1949-env-overrides\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216644 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-node-log\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216672 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-node-log\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216694 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-etc-kubernetes\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216726 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-cni-netd\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216761 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/83e4c166-3ace-4773-86cd-fe2bdd216426-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qgr2n\" (UID: 
\"83e4c166-3ace-4773-86cd-fe2bdd216426\") " pod="openshift-multus/multus-additional-cni-plugins-qgr2n" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216783 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-host-run-k8s-cni-cncf-io\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216802 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-multus-conf-dir\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216824 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-run-openvswitch\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216842 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-cni-bin\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216863 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-cni-binary-copy\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216884 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-host-var-lib-cni-multus\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216941 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-host-var-lib-cni-multus\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216973 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-var-lib-openvswitch\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.216998 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-log-socket\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 
06:44:49.217019 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/83e4c166-3ace-4773-86cd-fe2bdd216426-cnibin\") pod \"multus-additional-cni-plugins-qgr2n\" (UID: \"83e4c166-3ace-4773-86cd-fe2bdd216426\") " pod="openshift-multus/multus-additional-cni-plugins-qgr2n" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.217047 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-multus-socket-dir-parent\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.215790 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-multus-daemon-config\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.217084 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-host-run-netns\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.217113 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-etc-kubernetes\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.217140 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-cni-netd\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.217151 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1abfb530-b7ac-4724-8e43-d87ef92f1949-ovnkube-script-lib\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.217205 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/83e4c166-3ace-4773-86cd-fe2bdd216426-system-cni-dir\") pod \"multus-additional-cni-plugins-qgr2n\" (UID: \"83e4c166-3ace-4773-86cd-fe2bdd216426\") " pod="openshift-multus/multus-additional-cni-plugins-qgr2n" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.217410 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-hostroot\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.217454 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-host-var-lib-kubelet\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.214964 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-run-systemd\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.217503 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-run-openvswitch\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.217525 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-systemd-units\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.217514 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-cni-bin\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.217580 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-multus-conf-dir\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.217625 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-host-run-k8s-cni-cncf-io\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.217673 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-run-ovn-kubernetes\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.217729 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-kubelet\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.217758 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-l6bqb\" (UID: 
\"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.217758 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/83e4c166-3ace-4773-86cd-fe2bdd216426-cni-binary-copy\") pod \"multus-additional-cni-plugins-qgr2n\" (UID: \"83e4c166-3ace-4773-86cd-fe2bdd216426\") " pod="openshift-multus/multus-additional-cni-plugins-qgr2n" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.217782 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-run-ovn\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.217808 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-host-run-multus-certs\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.217827 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-os-release\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.217840 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-cnibin\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.217854 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/83e4c166-3ace-4773-86cd-fe2bdd216426-os-release\") pod \"multus-additional-cni-plugins-qgr2n\" (UID: \"83e4c166-3ace-4773-86cd-fe2bdd216426\") " pod="openshift-multus/multus-additional-cni-plugins-qgr2n" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.217871 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-system-cni-dir\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.217901 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-etc-openvswitch\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.217911 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-multus-cni-dir\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.218054 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-cni-binary-copy\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.218174 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/83e4c166-3ace-4773-86cd-fe2bdd216426-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qgr2n\" (UID: \"83e4c166-3ace-4773-86cd-fe2bdd216426\") " pod="openshift-multus/multus-additional-cni-plugins-qgr2n" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.219405 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/83e4c166-3ace-4773-86cd-fe2bdd216426-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qgr2n\" (UID: \"83e4c166-3ace-4773-86cd-fe2bdd216426\") " pod="openshift-multus/multus-additional-cni-plugins-qgr2n" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.224624 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1abfb530-b7ac-4724-8e43-d87ef92f1949-ovn-node-metrics-cert\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.238312 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8k44\" (UniqueName: \"kubernetes.io/projected/4094b291-8b0b-43c0-96e9-f08a9ef53c8b-kube-api-access-d8k44\") pod \"multus-2lhx5\" (UID: \"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\") " pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.239680 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83e4c166-3ace-4773-86cd-fe2bdd216426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qgr2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:49Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.240002 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcbfd\" (UniqueName: \"kubernetes.io/projected/1abfb530-b7ac-4724-8e43-d87ef92f1949-kube-api-access-vcbfd\") pod \"ovnkube-node-l6bqb\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.240715 
4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn4c7\" (UniqueName: \"kubernetes.io/projected/83e4c166-3ace-4773-86cd-fe2bdd216426-kube-api-access-rn4c7\") pod \"multus-additional-cni-plugins-qgr2n\" (UID: \"83e4c166-3ace-4773-86cd-fe2bdd216426\") " pod="openshift-multus/multus-additional-cni-plugins-qgr2n" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.245308 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.245341 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.245353 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.245374 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.245388 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:49Z","lastTransitionTime":"2025-11-23T06:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.264788 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:49Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.283776 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"539dc58c-e752-43c8-bdef-af87528b76f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wh4gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:49Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.297115 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2lhx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8k44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2lhx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:49Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.307737 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:49Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.317078 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:49Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.331022 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abfb530-b7ac-4724-8e43-d87ef92f1949\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l6bqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:49Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.339543 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.346286 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.348428 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.348475 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.348485 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.348504 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.348515 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:49Z","lastTransitionTime":"2025-11-23T06:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.350598 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-2lhx5" Nov 23 06:44:49 crc kubenswrapper[4681]: W1123 06:44:49.354144 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1abfb530_b7ac_4724_8e43_d87ef92f1949.slice/crio-4f8e447722bd3f219f03be4cbc14a7478fe37b3257379cd2dadcc737c8283ec6 WatchSource:0}: Error finding container 4f8e447722bd3f219f03be4cbc14a7478fe37b3257379cd2dadcc737c8283ec6: Status 404 returned error can't find the container with id 4f8e447722bd3f219f03be4cbc14a7478fe37b3257379cd2dadcc737c8283ec6 Nov 23 06:44:49 crc kubenswrapper[4681]: W1123 06:44:49.359532 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83e4c166_3ace_4773_86cd_fe2bdd216426.slice/crio-90c233986e39040d09d476ae9a76e4e033b4661fde16adc5c0d72e70d735b42e WatchSource:0}: Error finding container 90c233986e39040d09d476ae9a76e4e033b4661fde16adc5c0d72e70d735b42e: Status 404 returned error can't find the container with id 90c233986e39040d09d476ae9a76e4e033b4661fde16adc5c0d72e70d735b42e Nov 23 06:44:49 crc kubenswrapper[4681]: W1123 06:44:49.362231 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4094b291_8b0b_43c0_96e9_f08a9ef53c8b.slice/crio-cec64645bf762c31a672f47fb0449c18302c1b1b7091e0b05538101a3532d299 WatchSource:0}: Error finding container cec64645bf762c31a672f47fb0449c18302c1b1b7091e0b05538101a3532d299: Status 404 returned error can't find the container with id cec64645bf762c31a672f47fb0449c18302c1b1b7091e0b05538101a3532d299 Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.451606 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.451657 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.451668 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.451687 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.451699 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:49Z","lastTransitionTime":"2025-11-23T06:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.553616 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.553663 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.553675 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.553694 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.553964 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:49Z","lastTransitionTime":"2025-11-23T06:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.580068 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.586131 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpnbz\" (UniqueName: \"kubernetes.io/projected/539dc58c-e752-43c8-bdef-af87528b76f3-kube-api-access-jpnbz\") pod \"machine-config-daemon-wh4gt\" (UID: \"539dc58c-e752-43c8-bdef-af87528b76f3\") " pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.656481 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.656512 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.656523 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.656542 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.656552 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:49Z","lastTransitionTime":"2025-11-23T06:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.753581 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.758116 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrq5v\" (UniqueName: \"kubernetes.io/projected/095e645f-7b07-4702-87f0-f3b9a6197d9f-kube-api-access-nrq5v\") pod \"node-resolver-l7wvz\" (UID: \"095e645f-7b07-4702-87f0-f3b9a6197d9f\") " pod="openshift-dns/node-resolver-l7wvz" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.758618 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.758651 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.758660 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.758676 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.758685 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:49Z","lastTransitionTime":"2025-11-23T06:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.822269 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:44:49 crc kubenswrapper[4681]: E1123 06:44:49.822423 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:44:57.822401789 +0000 UTC m=+34.891911027 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.851296 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.860611 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.860640 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.860649 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.860663 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.860673 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:49Z","lastTransitionTime":"2025-11-23T06:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:49 crc kubenswrapper[4681]: W1123 06:44:49.862512 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod539dc58c_e752_43c8_bdef_af87528b76f3.slice/crio-0a182977175be46bd5f0971882ae0da60c624e841587f07c2231210ea4524ef6 WatchSource:0}: Error finding container 0a182977175be46bd5f0971882ae0da60c624e841587f07c2231210ea4524ef6: Status 404 returned error can't find the container with id 0a182977175be46bd5f0971882ae0da60c624e841587f07c2231210ea4524ef6 Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.923542 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.923573 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.923608 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.923630 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:44:49 crc kubenswrapper[4681]: E1123 06:44:49.923710 4681 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:44:49 crc kubenswrapper[4681]: E1123 06:44:49.923774 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:44:57.923761023 +0000 UTC m=+34.993270260 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:44:49 crc kubenswrapper[4681]: E1123 06:44:49.924174 4681 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:44:49 crc kubenswrapper[4681]: E1123 06:44:49.924212 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:44:57.92420264 +0000 UTC m=+34.993711878 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:44:49 crc kubenswrapper[4681]: E1123 06:44:49.924358 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:44:49 crc kubenswrapper[4681]: E1123 06:44:49.924434 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:44:49 crc kubenswrapper[4681]: E1123 06:44:49.924507 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:44:49 crc kubenswrapper[4681]: E1123 06:44:49.924561 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:44:49 crc kubenswrapper[4681]: E1123 06:44:49.924582 4681 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:44:49 crc kubenswrapper[4681]: E1123 06:44:49.924529 4681 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:44:49 crc kubenswrapper[4681]: E1123 06:44:49.924671 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 06:44:57.924635793 +0000 UTC m=+34.994145029 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:44:49 crc kubenswrapper[4681]: E1123 06:44:49.924697 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 06:44:57.92468804 +0000 UTC m=+34.994197278 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.962951 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.962987 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.963015 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.963034 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:49 crc kubenswrapper[4681]: I1123 06:44:49.963046 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:49Z","lastTransitionTime":"2025-11-23T06:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.036015 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.037601 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-l7wvz" Nov 23 06:44:50 crc kubenswrapper[4681]: W1123 06:44:50.048848 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod095e645f_7b07_4702_87f0_f3b9a6197d9f.slice/crio-398a3295d0e5117f43f3ba3407837a9056180b23584af6596c63bc77d395b810 WatchSource:0}: Error finding container 398a3295d0e5117f43f3ba3407837a9056180b23584af6596c63bc77d395b810: Status 404 returned error can't find the container with id 398a3295d0e5117f43f3ba3407837a9056180b23584af6596c63bc77d395b810 Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.065772 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.065823 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.065833 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.065856 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.065867 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:50Z","lastTransitionTime":"2025-11-23T06:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.168616 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.168677 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.168688 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.168722 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.168733 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:50Z","lastTransitionTime":"2025-11-23T06:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.251299 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.251361 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.251305 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:44:50 crc kubenswrapper[4681]: E1123 06:44:50.251437 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:44:50 crc kubenswrapper[4681]: E1123 06:44:50.251565 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:44:50 crc kubenswrapper[4681]: E1123 06:44:50.251628 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.271384 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.271419 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.271429 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.271447 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.271480 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:50Z","lastTransitionTime":"2025-11-23T06:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.361250 4681 generic.go:334] "Generic (PLEG): container finished" podID="83e4c166-3ace-4773-86cd-fe2bdd216426" containerID="801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708" exitCode=0 Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.361330 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" event={"ID":"83e4c166-3ace-4773-86cd-fe2bdd216426","Type":"ContainerDied","Data":"801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708"} Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.361373 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" event={"ID":"83e4c166-3ace-4773-86cd-fe2bdd216426","Type":"ContainerStarted","Data":"90c233986e39040d09d476ae9a76e4e033b4661fde16adc5c0d72e70d735b42e"} Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.363164 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-l7wvz" event={"ID":"095e645f-7b07-4702-87f0-f3b9a6197d9f","Type":"ContainerStarted","Data":"730b2d1bf4245510d9c2ab933abbf82d3c7e7d172e6f382b691db27a598fc8e3"} Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.363200 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-l7wvz" event={"ID":"095e645f-7b07-4702-87f0-f3b9a6197d9f","Type":"ContainerStarted","Data":"398a3295d0e5117f43f3ba3407837a9056180b23584af6596c63bc77d395b810"} Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.366165 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerStarted","Data":"10301d5307825891afb0c5a8a37015569d3275b9fdbb69135656db11a5cd6ed7"} Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.366210 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerStarted","Data":"632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25"} Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.366225 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerStarted","Data":"0a182977175be46bd5f0971882ae0da60c624e841587f07c2231210ea4524ef6"} Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.367692 4681 generic.go:334] "Generic (PLEG): container finished" podID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerID="8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7" exitCode=0 Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.367788 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" event={"ID":"1abfb530-b7ac-4724-8e43-d87ef92f1949","Type":"ContainerDied","Data":"8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7"} Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.367856 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" event={"ID":"1abfb530-b7ac-4724-8e43-d87ef92f1949","Type":"ContainerStarted","Data":"4f8e447722bd3f219f03be4cbc14a7478fe37b3257379cd2dadcc737c8283ec6"} Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.369667 4681 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2lhx5" event={"ID":"4094b291-8b0b-43c0-96e9-f08a9ef53c8b","Type":"ContainerStarted","Data":"c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066"} Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.369704 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2lhx5" event={"ID":"4094b291-8b0b-43c0-96e9-f08a9ef53c8b","Type":"ContainerStarted","Data":"cec64645bf762c31a672f47fb0449c18302c1b1b7091e0b05538101a3532d299"} Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.373874 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.373919 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.373933 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.373947 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.373959 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:50Z","lastTransitionTime":"2025-11-23T06:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.383353 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.397874 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"539dc58c-e752-43c8-bdef-af87528b76f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wh4gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.409896 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2lhx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8k44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2lhx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.421768 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83e4c166-3ace-4773-86cd-fe2bdd216426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qgr2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.434593 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.445839 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.460007 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abfb530-b7ac-4724-8e43-d87ef92f1949\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l6bqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.469103 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.478383 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.478419 4681 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.478429 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.478522 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.478537 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:50Z","lastTransitionTime":"2025-11-23T06:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.479131 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l7wvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095e645f-7b07-4702-87f0-f3b9a6197d9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrq5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l7wvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:50 crc 
kubenswrapper[4681]: I1123 06:44:50.494429 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.505561 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.515955 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.529883 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.538776 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.546790 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l7wvz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"095e645f-7b07-4702-87f0-f3b9a6197d9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://730b2d1bf4245510d9c2ab933abbf82d3c7e7d172e6f382b691db27a598fc8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrq5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l7wvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.556494 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.565559 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.574420 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.581543 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.581706 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.581718 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.581738 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.581750 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:50Z","lastTransitionTime":"2025-11-23T06:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.586780 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.596370 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.611782 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"539dc58c-e752-43c8-bdef-af87528b76f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10301d5307825891afb0c5a8a37015569d3275b9fdbb69135656db11a5cd6ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wh4gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.621761 4681 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2lhx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8k44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2lhx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.632611 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83e4c166-3ace-4773-86cd-fe2bdd216426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qgr2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:50 crc 
kubenswrapper[4681]: I1123 06:44:50.641584 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.652055 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.666824 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abfb530-b7ac-4724-8e43-d87ef92f1949\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l6bqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:50Z 
is after 2025-08-24T17:21:41Z" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.685401 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.685447 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.685476 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.685503 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.685517 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:50Z","lastTransitionTime":"2025-11-23T06:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.789402 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.789448 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.789495 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.789515 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.789528 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:50Z","lastTransitionTime":"2025-11-23T06:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.892445 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.892504 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.892515 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.892534 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.892545 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:50Z","lastTransitionTime":"2025-11-23T06:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.995390 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.995627 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.995638 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.995652 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:50 crc kubenswrapper[4681]: I1123 06:44:50.995664 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:50Z","lastTransitionTime":"2025-11-23T06:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.097962 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.097999 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.098009 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.098030 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.098041 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:51Z","lastTransitionTime":"2025-11-23T06:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.201298 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.201502 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.201628 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.201697 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.201758 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:51Z","lastTransitionTime":"2025-11-23T06:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.304254 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.304708 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.304723 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.304748 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.304760 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:51Z","lastTransitionTime":"2025-11-23T06:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.376147 4681 generic.go:334] "Generic (PLEG): container finished" podID="83e4c166-3ace-4773-86cd-fe2bdd216426" containerID="89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8" exitCode=0 Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.376218 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" event={"ID":"83e4c166-3ace-4773-86cd-fe2bdd216426","Type":"ContainerDied","Data":"89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8"} Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.386077 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" event={"ID":"1abfb530-b7ac-4724-8e43-d87ef92f1949","Type":"ContainerStarted","Data":"9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d"} Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.386152 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" event={"ID":"1abfb530-b7ac-4724-8e43-d87ef92f1949","Type":"ContainerStarted","Data":"14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4"} Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.386169 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" event={"ID":"1abfb530-b7ac-4724-8e43-d87ef92f1949","Type":"ContainerStarted","Data":"edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884"} Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.386180 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" event={"ID":"1abfb530-b7ac-4724-8e43-d87ef92f1949","Type":"ContainerStarted","Data":"2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101"} Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.386192 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" event={"ID":"1abfb530-b7ac-4724-8e43-d87ef92f1949","Type":"ContainerStarted","Data":"3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13"} Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.386203 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" event={"ID":"1abfb530-b7ac-4724-8e43-d87ef92f1949","Type":"ContainerStarted","Data":"5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9"} Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.398203 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.406558 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.406595 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.406613 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.406632 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.406643 4681 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:51Z","lastTransitionTime":"2025-11-23T06:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.411271 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"539dc58c-e752-43c8-bdef-af87528b76f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10301d5307825891afb0c5a8a37015569d3275b9fdbb69135656db11a5cd6ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wh4gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.425494 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2lhx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\
\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8k44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2lhx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.437056 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83e4c166-3ace-4773-86cd-fe2bdd216426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-
23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qgr2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.448109 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf
5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.458546 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.474985 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abfb530-b7ac-4724-8e43-d87ef92f1949\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l6bqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.484677 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:51Z is after 
2025-08-24T17:21:41Z" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.494212 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l7wvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095e645f-7b07-4702-87f0-f3b9a6197d9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://730b2d1bf4245510d9c2ab933abbf82d3c7e7d172e6f382b691db27a598fc8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrq5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l7wvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.505427 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.509019 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.509057 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.509067 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.509090 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.509102 4681 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:51Z","lastTransitionTime":"2025-11-23T06:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.516499 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.526674 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.536574 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.611236 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.611282 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.611294 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.611316 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.611339 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:51Z","lastTransitionTime":"2025-11-23T06:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.714150 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.714192 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.714205 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.714220 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.714232 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:51Z","lastTransitionTime":"2025-11-23T06:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
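Every status patch in this section is rejected with the same TLS failure: the webhook's serving certificate expired at 2025-08-24T17:21:41Z, months before the node's current time of 2025-11-23. The check that fails is just a validity-window comparison. A small Python sketch using the timestamps from the log (the notBefore value is not shown in the log, so it is an assumed placeholder):

    from datetime import datetime, timezone

    NOT_BEFORE = datetime(2025, 7, 24, 17, 21, 41, tzinfo=timezone.utc)  # assumed; not shown in the log
    NOT_AFTER = datetime(2025, 8, 24, 17, 21, 41, tzinfo=timezone.utc)   # expiry from the log
    now = datetime(2025, 11, 23, 6, 44, 51, tzinfo=timezone.utc)         # "current time" from the log

    if now < NOT_BEFORE:
        print("x509: certificate is not yet valid")
    elif now > NOT_AFTER:
        print(f"x509: certificate has expired: current time {now.isoformat()} "
              f"is after {NOT_AFTER.isoformat()}")
    else:
        print("certificate accepted: inside its validity window")
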
Has your network provider started?"} Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.816859 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.816897 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.816909 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.816924 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.816942 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:51Z","lastTransitionTime":"2025-11-23T06:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.919273 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.919316 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.919327 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.919347 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:51 crc kubenswrapper[4681]: I1123 06:44:51.919359 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:51Z","lastTransitionTime":"2025-11-23T06:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.021893 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.021930 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.021940 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.021954 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.021965 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:52Z","lastTransitionTime":"2025-11-23T06:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
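The NotReady condition repeating above bottoms out in one concrete check: the runtime finds no CNI config under /etc/kubernetes/cni/net.d/. A rough Python approximation of that directory probe (this is not the actual CRI-O code, just an illustration of the check the message describes):

    from pathlib import Path

    CNI_NET_DIR = Path("/etc/kubernetes/cni/net.d")  # path taken from the log message

    def network_ready(net_dir: Path = CNI_NET_DIR) -> bool:
        # CNI config files are *.conf, *.conflist or legacy *.json.
        patterns = ("*.conf", "*.conflist", "*.json")
        return any(True for pattern in patterns for _ in net_dir.glob(pattern))

    if not network_ready():
        print(f"no CNI configuration file in {CNI_NET_DIR}/. "
              "Has your network provider started?")
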
Has your network provider started?"} Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.125716 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.125761 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.125770 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.125791 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.125801 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:52Z","lastTransitionTime":"2025-11-23T06:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.228484 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.228534 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.228547 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.228567 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.228579 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:52Z","lastTransitionTime":"2025-11-23T06:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.250816 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.250847 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:44:52 crc kubenswrapper[4681]: E1123 06:44:52.250939 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.250995 4681 util.go:30] "No sandbox for pod can be found. 
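The kubenswrapper messages use klog's header layout (severity letter, mmdd, time, thread id, file:line, then the message), which makes lines like the E1123 pod_workers entries above easy to slice mechanically. A small parser sketch; the regex is a best-effort reading of the format as it appears here, not an official grammar:

    import re

    KLOG_RE = re.compile(
        r"(?P<sev>[IWEF])(?P<mmdd>\d{4}) "
        r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) +(?P<tid>\d+) "
        r"(?P<src>[\w./]+:\d+)\] (?P<msg>.*)"
    )

    # Shortened stand-in for one of the error lines above.
    line = ('E1123 06:44:52.250939 4681 pod_workers.go:1301] '
            '"Error syncing pod, skipping" err="network is not ready: ..."')
    m = KLOG_RE.match(line)
    if m:
        print(m.group("sev"), m.group("src"), "->", m.group("msg")[:40])
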
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:44:52 crc kubenswrapper[4681]: E1123 06:44:52.251112 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:44:52 crc kubenswrapper[4681]: E1123 06:44:52.251191 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.293860 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-jcxvt"] Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.294573 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-jcxvt" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.296794 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.297594 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.298341 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.298660 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.307439 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.318802 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l7wvz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"095e645f-7b07-4702-87f0-f3b9a6197d9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://730b2d1bf4245510d9c2ab933abbf82d3c7e7d172e6f382b691db27a598fc8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrq5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l7wvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.327488 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jcxvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8b960e-690a-4772-8373-bce89d00cb17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2d22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jcxvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.335040 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.335069 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.335080 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.335099 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.335115 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:52Z","lastTransitionTime":"2025-11-23T06:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
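The condition object that setters.go prints is plain JSON, so the machine-readable reason behind the NotReady flapping can be pulled out directly. A short sketch using a condition copied (message abridged) from one of the entries above:

    import json

    condition = json.loads(
        '{"type":"Ready","status":"False",'
        '"lastHeartbeatTime":"2025-11-23T06:44:52Z",'
        '"lastTransitionTime":"2025-11-23T06:44:52Z",'
        '"reason":"KubeletNotReady",'
        '"message":"container runtime network not ready"}'
    )
    print(condition["type"], condition["status"], "->", condition["reason"])
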
Has your network provider started?"} Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.343722 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.348269 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3d8b960e-690a-4772-8373-bce89d00cb17-serviceca\") pod \"node-ca-jcxvt\" (UID: \"3d8b960e-690a-4772-8373-bce89d00cb17\") " pod="openshift-image-registry/node-ca-jcxvt" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.348324 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3d8b960e-690a-4772-8373-bce89d00cb17-host\") pod \"node-ca-jcxvt\" (UID: \"3d8b960e-690a-4772-8373-bce89d00cb17\") " pod="openshift-image-registry/node-ca-jcxvt" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.348357 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2d22\" (UniqueName: \"kubernetes.io/projected/3d8b960e-690a-4772-8373-bce89d00cb17-kube-api-access-n2d22\") pod \"node-ca-jcxvt\" (UID: \"3d8b960e-690a-4772-8373-bce89d00cb17\") " pod="openshift-image-registry/node-ca-jcxvt" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.356546 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.365633 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.375981 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
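The patches inside the "failed to patch status" errors are JSON, but by the time they reach the journal their quotes have been escaped twice (once in the patch string, once when klog quotes the err value), which is why they appear as runs of backslashes. Assuming that double-escaping, undoing both layers recovers inspectable JSON; the sketch below uses a shortened stand-in for one of the long patches above:

    import codecs
    import json

    # Literal text as it appears in the journal (abridged).
    logged = r'{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"}}'
    once = codecs.decode(logged, "unicode_escape")   # \\ -> \  and \" -> ", so \\\" -> \"
    twice = codecs.decode(once, "unicode_escape")    # remaining \" -> "
    patch = json.loads(twice)
    print(patch["metadata"]["uid"])
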
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.387005 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83e4c166-3ace-4773-86cd-fe2bdd216426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-
23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qgr2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.392234 4681 generic.go:334] "Generic (PLEG): container finished" podID="83e4c166-3ace-4773-86cd-fe2bdd216426" containerID="5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33" exitCode=0 Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.392292 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" event={"ID":"83e4c166-3ace-4773-86cd-fe2bdd216426","Type":"ContainerDied","Data":"5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33"} Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.401301 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
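Every pod on the node (network-operator, node-resolver, node-ca, kube-apiserver-crc, the multus pods, and the rest) is hitting the same webhook failure, and tallying the "Failed to update status for pod" lines per pod makes that cluster-wide pattern obvious at a glance. A quick sketch; journal.txt is a placeholder name for a dump of this log:

    import re
    from collections import Counter

    POD_RE = re.compile(r'Failed to update status for pod" pod="([^"]+)"')

    counts = Counter()
    with open("journal.txt", encoding="utf-8", errors="replace") as fh:  # placeholder file
        for line in fh:
            counts.update(POD_RE.findall(line))

    for pod, n in counts.most_common():
        print(f"{n:4d}  {pod}")
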
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.412712 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"539dc58c-e752-43c8-bdef-af87528b76f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10301d5307825891afb0c5a8a37015569d3275b9fdbb69135656db11a5cd6ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wh4gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.424234 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2lhx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":
\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8k44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2lhx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.434311 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.437935 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.437970 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.437979 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.437993 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.438003 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:52Z","lastTransitionTime":"2025-11-23T06:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.443042 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.449623 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/3d8b960e-690a-4772-8373-bce89d00cb17-serviceca\") pod \"node-ca-jcxvt\" (UID: \"3d8b960e-690a-4772-8373-bce89d00cb17\") " pod="openshift-image-registry/node-ca-jcxvt" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.449662 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3d8b960e-690a-4772-8373-bce89d00cb17-host\") pod \"node-ca-jcxvt\" (UID: \"3d8b960e-690a-4772-8373-bce89d00cb17\") " pod="openshift-image-registry/node-ca-jcxvt" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.449684 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2d22\" (UniqueName: \"kubernetes.io/projected/3d8b960e-690a-4772-8373-bce89d00cb17-kube-api-access-n2d22\") pod \"node-ca-jcxvt\" (UID: \"3d8b960e-690a-4772-8373-bce89d00cb17\") " pod="openshift-image-registry/node-ca-jcxvt" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.449949 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3d8b960e-690a-4772-8373-bce89d00cb17-host\") pod \"node-ca-jcxvt\" (UID: \"3d8b960e-690a-4772-8373-bce89d00cb17\") " pod="openshift-image-registry/node-ca-jcxvt" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.450737 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3d8b960e-690a-4772-8373-bce89d00cb17-serviceca\") pod \"node-ca-jcxvt\" (UID: \"3d8b960e-690a-4772-8373-bce89d00cb17\") " pod="openshift-image-registry/node-ca-jcxvt" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.457346 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abfb530-b7ac-4724-8e43-d87ef92f1949\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l6bqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:52Z 
is after 2025-08-24T17:21:41Z" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.469847 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.470003 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2d22\" (UniqueName: \"kubernetes.io/projected/3d8b960e-690a-4772-8373-bce89d00cb17-kube-api-access-n2d22\") pod \"node-ca-jcxvt\" (UID: \"3d8b960e-690a-4772-8373-bce89d00cb17\") " pod="openshift-image-registry/node-ca-jcxvt" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.478642 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"539dc58c-e752-43c8-bdef-af87528b76f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10301d5307825891afb0c5a8a37015569d3275b9fdbb69135656db11a5cd6ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wh4gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.488990 4681 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2lhx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8k44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2lhx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.499806 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83e4c166-3ace-4773-86cd-fe2bdd216426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qgr2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.508985 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.519363 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.533454 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abfb530-b7ac-4724-8e43-d87ef92f1949\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l6bqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.540930 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.540961 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.540972 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.540991 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.541003 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:52Z","lastTransitionTime":"2025-11-23T06:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
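Annotation: every status patch in the entries above is rejected for the same reason — the webhook endpoint at https://127.0.0.1:9743 presents a serving certificate whose NotAfter (2025-08-24T17:21:41Z) is months earlier than the node's current clock (2025-11-23), so Go's x509 verification fails before the patch is ever delivered. A minimal standalone sketch of the same validity test (the file path and invocation are placeholders, not cluster tooling):

// certcheck.go — a minimal sketch, assuming the webhook's serving cert has
// been extracted to a PEM file; it reproduces the validity window test
// behind "x509: certificate has expired or is not yet valid" seen above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile(os.Args[1]) // placeholder: path to a PEM-encoded certificate
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now()
	fmt.Printf("NotBefore=%s NotAfter=%s now=%s\n", cert.NotBefore, cert.NotAfter, now)
	// Same condition x509 chain verification enforces on each certificate.
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Println("certificate has expired or is not yet valid")
	}
}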
Has your network provider started?"} Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.542968 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.551234 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l7wvz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"095e645f-7b07-4702-87f0-f3b9a6197d9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://730b2d1bf4245510d9c2ab933abbf82d3c7e7d172e6f382b691db27a598fc8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrq5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l7wvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.560771 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] 
issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.569944 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.579001 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.588761 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.597352 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jcxvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8b960e-690a-4772-8373-bce89d00cb17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2d22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jcxvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2025-11-23T06:44:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.607687 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-jcxvt" Nov 23 06:44:52 crc kubenswrapper[4681]: W1123 06:44:52.642886 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d8b960e_690a_4772_8373_bce89d00cb17.slice/crio-b89f876db14484ee2fc05c189047156ea89034c2706daa6b19248cc4cb908181 WatchSource:0}: Error finding container b89f876db14484ee2fc05c189047156ea89034c2706daa6b19248cc4cb908181: Status 404 returned error can't find the container with id b89f876db14484ee2fc05c189047156ea89034c2706daa6b19248cc4cb908181 Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.644728 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.644819 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.644883 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.644958 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.645014 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:52Z","lastTransitionTime":"2025-11-23T06:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.748526 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.748836 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.748847 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.748863 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.749108 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:52Z","lastTransitionTime":"2025-11-23T06:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
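Annotation: the 404 on container b89f876db14484ee2fc05c189047156ea89034c2706daa6b19248cc4cb908181 appears to be a benign startup race — the cgroup watch fired before the node-ca sandbox finished being created. The repeating NodeNotReady condition is the persistent problem: the kubelet keeps NetworkReady=false until a CNI configuration appears under /etc/kubernetes/cni/net.d/, which cannot happen while the ovnkube-node pod above is stuck in PodInitializing. A rough illustration of that readiness condition (an assumption for illustration only — the kubelet's actual check goes through the CRI, not a direct directory scan):

// cnicheck.go — sketch of the condition behind "no CNI configuration file
// in /etc/kubernetes/cni/net.d/": the network plugin is not considered
// ready until at least one CNI config file exists in that directory.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions libcni typically loads
			found++
			fmt.Println("CNI config:", filepath.Join(confDir, e.Name()))
		}
	}
	if found == 0 {
		fmt.Println("NetworkReady=false: no CNI configuration file found; has your network provider started?")
	}
}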
Has your network provider started?"} Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.851418 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.851455 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.851490 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.851508 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.851517 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:52Z","lastTransitionTime":"2025-11-23T06:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.953671 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.953710 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.953719 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.953733 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:52 crc kubenswrapper[4681]: I1123 06:44:52.953743 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:52Z","lastTransitionTime":"2025-11-23T06:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.056823 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.056870 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.056885 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.056908 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.056925 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:53Z","lastTransitionTime":"2025-11-23T06:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.128580 4681 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Nov 23 06:44:53 crc kubenswrapper[4681]: W1123 06:44:53.130321 4681 reflector.go:484] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: very short watch: object-"openshift-image-registry"/"node-ca-dockercfg-4777p": Unexpected watch close - watch lasted less than a second and no items received Nov 23 06:44:53 crc kubenswrapper[4681]: W1123 06:44:53.131278 4681 reflector.go:484] object-"openshift-image-registry"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 23 06:44:53 crc kubenswrapper[4681]: W1123 06:44:53.132378 4681 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"image-registry-certificates": Unexpected watch close - watch lasted less than a second and no items received Nov 23 06:44:53 crc kubenswrapper[4681]: W1123 06:44:53.133035 4681 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.159267 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.159422 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.159444 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.159486 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.159501 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:53Z","lastTransitionTime":"2025-11-23T06:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
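Annotation: transport.go:147 marks the kubelet rotating to a new client certificate and deliberately closing its apiserver connections; the "very short watch" warnings on the openshift-image-registry objects that follow are the expected side effect of those connections being torn down, not additional failures. The webhook's serving certificate, by contrast, is not being rotated. One way to confirm what that endpoint is actually serving is to pull the peer certificate directly; a sketch, assuming it runs on the node itself (InsecureSkipVerify is intentional here, because verification is exactly what fails):

// peercert.go — illustrative probe of the endpoint named in the errors
// above; it prints the validity window of whatever certificate
// https://127.0.0.1:9743 currently presents.
package main

import (
	"crypto/tls"
	"fmt"
	"os"
)

func main() {
	// Skip verification so the handshake succeeds even with an expired cert;
	// the point is to inspect the offending certificate, not to trust it.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer conn.Close()
	for i, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("cert[%d] subject=%s notBefore=%s notAfter=%s\n",
			i, cert.Subject, cert.NotBefore, cert.NotAfter)
	}
}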
Has your network provider started?"} Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.261596 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.261637 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.261647 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.261665 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.261679 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:53Z","lastTransitionTime":"2025-11-23T06:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.266455 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] 
issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.277383 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.287391 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.297281 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.305819 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jcxvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8b960e-690a-4772-8373-bce89d00cb17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2d22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jcxvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2025-11-23T06:44:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.318785 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.328289 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"539dc58c-e752-43c8-bdef-af87528b76f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10301d5307825891afb0c5a8a37015569d3275b9fdbb69135656db11a5cd6ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wh4gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.339117 4681 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2lhx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8k44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2lhx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.351348 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83e4c166-3ace-4773-86cd-fe2bdd216426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qgr2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.364097 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.364291 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.364363 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.364451 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.364555 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:53Z","lastTransitionTime":"2025-11-23T06:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.364732 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.375197 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.391349 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abfb530-b7ac-4724-8e43-d87ef92f1949\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l6bqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.401942 4681 generic.go:334] "Generic (PLEG): container finished" podID="83e4c166-3ace-4773-86cd-fe2bdd216426" containerID="cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806" exitCode=0 Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.402012 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" event={"ID":"83e4c166-3ace-4773-86cd-fe2bdd216426","Type":"ContainerDied","Data":"cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806"} Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.403082 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.413176 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" event={"ID":"1abfb530-b7ac-4724-8e43-d87ef92f1949","Type":"ContainerStarted","Data":"8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7"} Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.415527 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-jcxvt" event={"ID":"3d8b960e-690a-4772-8373-bce89d00cb17","Type":"ContainerStarted","Data":"ae5de3ab9fa4043cfbb22d534f986fd7c9318c8e1a7f249cfe50b07f32f04ac9"} Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.415631 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-jcxvt" event={"ID":"3d8b960e-690a-4772-8373-bce89d00cb17","Type":"ContainerStarted","Data":"b89f876db14484ee2fc05c189047156ea89034c2706daa6b19248cc4cb908181"} Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.417747 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l7wvz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"095e645f-7b07-4702-87f0-f3b9a6197d9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://730b2d1bf4245510d9c2ab933abbf82d3c7e7d172e6f382b691db27a598fc8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrq5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l7wvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.430399 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.439229 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l7wvz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"095e645f-7b07-4702-87f0-f3b9a6197d9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://730b2d1bf4245510d9c2ab933abbf82d3c7e7d172e6f382b691db27a598fc8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrq5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l7wvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.451826 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] 
issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.462923 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.472384 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.472446 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.472474 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.472501 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.472515 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:53Z","lastTransitionTime":"2025-11-23T06:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.472827 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.484615 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.493528 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jcxvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8b960e-690a-4772-8373-bce89d00cb17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae5de3ab9fa4043cfbb22d534f986fd7c9318c8e1a7f249cfe50b07f32f04ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2d22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jcxvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.502923 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.511682 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"539dc58c-e752-43c8-bdef-af87528b76f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10301d5307825891afb0c5a8a37015569d3275b9fdbb69135656db11a5cd6ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wh4gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.522625 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2lhx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":
\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8k44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2lhx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.535428 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83e4c166-3ace-4773-86cd-fe2bdd216426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qgr2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.544949 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259712
6bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.557261 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.571562 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abfb530-b7ac-4724-8e43-d87ef92f1949\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l6bqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.575503 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.575537 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.575549 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.575570 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.575583 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:53Z","lastTransitionTime":"2025-11-23T06:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.677754 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.678082 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.678163 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.678235 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.678290 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:53Z","lastTransitionTime":"2025-11-23T06:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.780230 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.781008 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.781036 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.781058 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.781070 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:53Z","lastTransitionTime":"2025-11-23T06:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.883526 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.883554 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.883564 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.883576 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.883585 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:53Z","lastTransitionTime":"2025-11-23T06:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.985448 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.985731 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.985742 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.985757 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:53 crc kubenswrapper[4681]: I1123 06:44:53.985766 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:53Z","lastTransitionTime":"2025-11-23T06:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.087573 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.087601 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.087608 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.087620 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.087629 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:54Z","lastTransitionTime":"2025-11-23T06:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.159911 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.164235 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.189111 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.189140 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.189150 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.189161 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.189169 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:54Z","lastTransitionTime":"2025-11-23T06:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.250986 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:44:54 crc kubenswrapper[4681]: E1123 06:44:54.251061 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.251597 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:44:54 crc kubenswrapper[4681]: E1123 06:44:54.251649 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.251683 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:44:54 crc kubenswrapper[4681]: E1123 06:44:54.251718 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.290196 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.290215 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.290223 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.290232 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.290240 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:54Z","lastTransitionTime":"2025-11-23T06:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.379230 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.393245 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.393265 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.393272 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.393281 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.393288 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:54Z","lastTransitionTime":"2025-11-23T06:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.419206 4681 generic.go:334] "Generic (PLEG): container finished" podID="83e4c166-3ace-4773-86cd-fe2bdd216426" containerID="add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780" exitCode=0 Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.419237 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" event={"ID":"83e4c166-3ace-4773-86cd-fe2bdd216426","Type":"ContainerDied","Data":"add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780"} Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.437444 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:54Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.445390 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l7wvz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"095e645f-7b07-4702-87f0-f3b9a6197d9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://730b2d1bf4245510d9c2ab933abbf82d3c7e7d172e6f382b691db27a598fc8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrq5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l7wvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:54Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.456811 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:54Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.464447 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:54Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.474839 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:54Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.481279 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jcxvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8b960e-690a-4772-8373-bce89d00cb17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae5de3ab9fa4043cfbb22d534f986fd7c9318c8e1a7f249cfe50b07f32f04ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2d22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jcxvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:54Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.495276 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:54Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.495649 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.495669 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.495677 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.495693 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.495702 4681 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:54Z","lastTransitionTime":"2025-11-23T06:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.504531 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:54Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.514124 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"539dc58c-e752-43c8-bdef-af87528b76f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10301d5307825891afb0c5a8a37015569d3275b9fdbb69135656db11a5cd6ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wh4gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:54Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.524988 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2lhx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":
\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8k44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2lhx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:54Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.536642 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83e4c166-3ace-4773-86cd-fe2bdd216426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qgr2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:54Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.547597 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:54Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.557650 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:54Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.572097 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abfb530-b7ac-4724-8e43-d87ef92f1949\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l6bqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:54Z 
is after 2025-08-24T17:21:41Z" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.597738 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.597772 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.597781 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.597797 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.597807 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:54Z","lastTransitionTime":"2025-11-23T06:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.643586 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.699776 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.699805 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.699815 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.699830 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.699840 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:54Z","lastTransitionTime":"2025-11-23T06:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.801716 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.801756 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.801767 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.801787 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.801797 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:54Z","lastTransitionTime":"2025-11-23T06:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.903607 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.903639 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.903646 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.903662 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.903671 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:54Z","lastTransitionTime":"2025-11-23T06:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.982931 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.983076 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.983203 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.983312 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.983648 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:54Z","lastTransitionTime":"2025-11-23T06:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:54 crc kubenswrapper[4681]: E1123 06:44:54.993717 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:54Z is after 
2025-08-24T17:21:41Z" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.997095 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.997139 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.997150 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.997166 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:54 crc kubenswrapper[4681]: I1123 06:44:54.997177 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:54Z","lastTransitionTime":"2025-11-23T06:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:55 crc kubenswrapper[4681]: E1123 06:44:55.006025 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:55Z is after 
2025-08-24T17:21:41Z" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.009144 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.009230 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.009286 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.009353 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.009412 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:55Z","lastTransitionTime":"2025-11-23T06:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:55 crc kubenswrapper[4681]: E1123 06:44:55.018899 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:55Z is after 
2025-08-24T17:21:41Z" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.022513 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.022554 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.022563 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.022575 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.022585 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:55Z","lastTransitionTime":"2025-11-23T06:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:55 crc kubenswrapper[4681]: E1123 06:44:55.034766 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:55Z is after 
2025-08-24T17:21:41Z" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.038648 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.038732 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.038793 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.038856 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.038909 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:55Z","lastTransitionTime":"2025-11-23T06:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:55 crc kubenswrapper[4681]: E1123 06:44:55.047832 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:55Z is after 
2025-08-24T17:21:41Z" Nov 23 06:44:55 crc kubenswrapper[4681]: E1123 06:44:55.047970 4681 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.050236 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.050398 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.050490 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.050550 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.050600 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:55Z","lastTransitionTime":"2025-11-23T06:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.152977 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.153016 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.153030 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.153050 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.153064 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:55Z","lastTransitionTime":"2025-11-23T06:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.254608 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.254638 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.254652 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.254664 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.254678 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:55Z","lastTransitionTime":"2025-11-23T06:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.358795 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.358841 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.358851 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.358865 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.358876 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:55Z","lastTransitionTime":"2025-11-23T06:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.395680 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.406309 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.416750 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l7wvz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"095e645f-7b07-4702-87f0-f3b9a6197d9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://730b2d1bf4245510d9c2ab933abbf82d3c7e7d172e6f382b691db27a598fc8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrq5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l7wvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.425477 4681 generic.go:334] "Generic (PLEG): container finished" podID="83e4c166-3ace-4773-86cd-fe2bdd216426" containerID="79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2" exitCode=0 Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.425524 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" event={"ID":"83e4c166-3ace-4773-86cd-fe2bdd216426","Type":"ContainerDied","Data":"79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2"} Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.426988 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.437326 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.448182 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.458351 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.462159 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.462195 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.462205 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.462219 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.462228 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:55Z","lastTransitionTime":"2025-11-23T06:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.465854 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jcxvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8b960e-690a-4772-8373-bce89d00cb17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae5de3ab9fa4043cfbb22d534f986fd7c9318c8e1a7f249cfe50b07f32f04ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2d22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jcxvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.476295 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.487390 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"539dc58c-e752-43c8-bdef-af87528b76f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10301d5307825891afb0c5a8a37015569d3275b9fdbb69135656db11a5cd6ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wh4gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.496875 4681 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2lhx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8k44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2lhx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.506776 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83e4c166-3ace-4773-86cd-fe2bdd216426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountP
ath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qgr2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.515487 4681 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d130
33dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.524176 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.535828 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abfb530-b7ac-4724-8e43-d87ef92f1949\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l6bqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:55Z 
is after 2025-08-24T17:21:41Z" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.544707 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.554831 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.563689 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.563712 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.563720 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.563732 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.563740 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:55Z","lastTransitionTime":"2025-11-23T06:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.566024 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.573530 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jcxvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8b960e-690a-4772-8373-bce89d00cb17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae5de3ab9fa4043cfbb22d534f986fd7c9318c8e1a7f249cfe50b07f32f04ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2d22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jcxvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.584196 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.594348 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.602777 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"539dc58c-e752-43c8-bdef-af87528b76f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10301d5307825891afb0c5a8a37015569d3275b9fdbb69135656db11a5cd6ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wh4gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.612666 4681 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2lhx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8k44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2lhx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.622866 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83e4c166-3ace-4773-86cd-fe2bdd216426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\
\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qgr2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.632877 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.643705 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.656612 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abfb530-b7ac-4724-8e43-d87ef92f1949\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l6bqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.665576 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:55Z is after 
2025-08-24T17:21:41Z" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.666067 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.666124 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.666134 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.666152 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.666162 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:55Z","lastTransitionTime":"2025-11-23T06:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.683977 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l7wvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095e645f-7b07-4702-87f0-f3b9a6197d9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://730b2d1bf4245510d9c2ab933abbf82d3c7e7d172e6f382b691db27a598fc8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrq5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l7wvz\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.768620 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.768658 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.768666 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.768681 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.768691 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:55Z","lastTransitionTime":"2025-11-23T06:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.871171 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.871208 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.871216 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.871228 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.871238 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:55Z","lastTransitionTime":"2025-11-23T06:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.973685 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.973715 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.973724 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.973736 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:55 crc kubenswrapper[4681]: I1123 06:44:55.973745 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:55Z","lastTransitionTime":"2025-11-23T06:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.075812 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.075842 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.075851 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.075865 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.075873 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:56Z","lastTransitionTime":"2025-11-23T06:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.177451 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.177492 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.177500 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.177509 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.177517 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:56Z","lastTransitionTime":"2025-11-23T06:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.251508 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:44:56 crc kubenswrapper[4681]: E1123 06:44:56.251601 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.251509 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.251520 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:44:56 crc kubenswrapper[4681]: E1123 06:44:56.251693 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:44:56 crc kubenswrapper[4681]: E1123 06:44:56.251768 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.279640 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.279667 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.279676 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.279702 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.279710 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:56Z","lastTransitionTime":"2025-11-23T06:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.381331 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.381358 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.381368 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.381378 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.381386 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:56Z","lastTransitionTime":"2025-11-23T06:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.429399 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" event={"ID":"83e4c166-3ace-4773-86cd-fe2bdd216426","Type":"ContainerStarted","Data":"039e197d1ef78785cbcf351f1ec80ef09f3c9e61504351fa7a2daa5d1e298bba"} Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.432144 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" event={"ID":"1abfb530-b7ac-4724-8e43-d87ef92f1949","Type":"ContainerStarted","Data":"fbc99df85764a40b1acdbc62fe859e546753622fe62ec2c180d7829871f590cb"} Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.432306 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.432321 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.438439 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.446170 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l7wvz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"095e645f-7b07-4702-87f0-f3b9a6197d9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://730b2d1bf4245510d9c2ab933abbf82d3c7e7d172e6f382b691db27a598fc8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrq5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l7wvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.449020 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.449111 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.455523 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.463778 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.470912 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.478297 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.483330 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.483353 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.483361 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.483377 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.483386 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:56Z","lastTransitionTime":"2025-11-23T06:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.485242 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jcxvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8b960e-690a-4772-8373-bce89d00cb17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae5de3ab9fa4043cfbb22d534f986fd7c9318c8e1a7f249cfe50b07f32f04ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2d22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jcxvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.493490 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.501921 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"539dc58c-e752-43c8-bdef-af87528b76f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10301d5307825891afb0c5a8a37015569d3275b9fdbb69135656db11a5cd6ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wh4gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.510475 4681 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2lhx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8k44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2lhx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.519737 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83e4c166-3ace-4773-86cd-fe2bdd216426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039e197d1ef78785cbcf351f1ec80ef09f3c9e61504351fa7a2daa5d1e298bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbba
0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qgr2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.527767 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.536381 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.548878 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abfb530-b7ac-4724-8e43-d87ef92f1949\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l6bqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.556668 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2lhx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k
8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8k44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2lhx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.566317 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83e4c166-3ace-4773-86cd-fe2bdd216426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039e197d1ef78785cbcf351f1ec80ef09f3c9e61504351fa7a2daa5d1e298bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qgr2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.574197 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.582494 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"539dc58c-e752-43c8-bdef-af87528b76f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10301d5307825891afb0c5a8a37015569d3275b9fdbb69135656db11a5cd6ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wh4gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.584923 4681 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.584951 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.584960 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.584975 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.584984 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:56Z","lastTransitionTime":"2025-11-23T06:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.595814 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abfb530-b7ac-4724-8e43-d87ef92f1949\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc99df85764a40b1acdbc62fe859e546753622f
e62ec2c180d7829871f590cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l6bqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.607253 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.615296 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.623061 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.629983 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l7wvz" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095e645f-7b07-4702-87f0-f3b9a6197d9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://730b2d1bf4245510d9c2ab933abbf82d3c7e7d172e6f382b691db27a598fc8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrq5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l7wvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.638005 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.644156 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jcxvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8b960e-690a-4772-8373-bce89d00cb17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae5de3ab9fa4043cfbb22d534f986fd7c9318c8e1a7f249cfe50b07f32f04ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2d22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jcxvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.652558 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc47827
4c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.660664 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.668904 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.687733 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.687757 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.687766 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.687781 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.687792 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:56Z","lastTransitionTime":"2025-11-23T06:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.789778 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.789807 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.789816 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.789828 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.789835 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:56Z","lastTransitionTime":"2025-11-23T06:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.891809 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.891837 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.891847 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.891858 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.891884 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:56Z","lastTransitionTime":"2025-11-23T06:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.993683 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.993721 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.993730 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.993744 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:56 crc kubenswrapper[4681]: I1123 06:44:56.993752 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:56Z","lastTransitionTime":"2025-11-23T06:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.095264 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.095296 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.095306 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.095320 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.095329 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:57Z","lastTransitionTime":"2025-11-23T06:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.197434 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.197483 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.197493 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.197508 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.197517 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:57Z","lastTransitionTime":"2025-11-23T06:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.299608 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.299640 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.299684 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.299701 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.299709 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:57Z","lastTransitionTime":"2025-11-23T06:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.402211 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.402495 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.402505 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.402523 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.402552 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:57Z","lastTransitionTime":"2025-11-23T06:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.434377 4681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.504441 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.504514 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.504524 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.504540 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.504552 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:57Z","lastTransitionTime":"2025-11-23T06:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.606009 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.606033 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.606042 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.606053 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.606061 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:57Z","lastTransitionTime":"2025-11-23T06:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.707970 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.707992 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.708001 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.708011 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.708019 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:57Z","lastTransitionTime":"2025-11-23T06:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.810027 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.810263 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.810323 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.810380 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.810431 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:57Z","lastTransitionTime":"2025-11-23T06:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.903630 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:44:57 crc kubenswrapper[4681]: E1123 06:44:57.903827 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:45:13.903810744 +0000 UTC m=+50.973319982 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.912774 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.912807 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.912818 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.912835 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:57 crc kubenswrapper[4681]: I1123 06:44:57.912845 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:57Z","lastTransitionTime":"2025-11-23T06:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.004380 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.004417 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.004433 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.004449 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:44:58 crc kubenswrapper[4681]: E1123 06:44:58.004543 4681 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:44:58 crc kubenswrapper[4681]: E1123 06:44:58.004584 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:45:14.004571826 +0000 UTC m=+51.074081062 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:44:58 crc kubenswrapper[4681]: E1123 06:44:58.004584 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:44:58 crc kubenswrapper[4681]: E1123 06:44:58.004605 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:44:58 crc kubenswrapper[4681]: E1123 06:44:58.004633 4681 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:44:58 crc kubenswrapper[4681]: E1123 06:44:58.004665 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 06:45:14.004655482 +0000 UTC m=+51.074164720 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:44:58 crc kubenswrapper[4681]: E1123 06:44:58.004715 4681 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:44:58 crc kubenswrapper[4681]: E1123 06:44:58.004741 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:45:14.00473397 +0000 UTC m=+51.074243207 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:44:58 crc kubenswrapper[4681]: E1123 06:44:58.004780 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:44:58 crc kubenswrapper[4681]: E1123 06:44:58.004788 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:44:58 crc kubenswrapper[4681]: E1123 06:44:58.004795 4681 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:44:58 crc kubenswrapper[4681]: E1123 06:44:58.004811 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 06:45:14.004806576 +0000 UTC m=+51.074315813 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.014066 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.014096 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.014104 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.014114 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.014124 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:58Z","lastTransitionTime":"2025-11-23T06:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.115589 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.115623 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.115630 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.115655 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.115663 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:58Z","lastTransitionTime":"2025-11-23T06:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.218154 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.218186 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.218196 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.218208 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.218217 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:58Z","lastTransitionTime":"2025-11-23T06:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.250912 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.250934 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.250950 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:44:58 crc kubenswrapper[4681]: E1123 06:44:58.251015 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:44:58 crc kubenswrapper[4681]: E1123 06:44:58.251106 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:44:58 crc kubenswrapper[4681]: E1123 06:44:58.251168 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.320128 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.320152 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.320160 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.320170 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.320178 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:58Z","lastTransitionTime":"2025-11-23T06:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.422142 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.422171 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.422179 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.422191 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.422201 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:58Z","lastTransitionTime":"2025-11-23T06:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.437705 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l6bqb_1abfb530-b7ac-4724-8e43-d87ef92f1949/ovnkube-controller/0.log" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.439608 4681 generic.go:334] "Generic (PLEG): container finished" podID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerID="fbc99df85764a40b1acdbc62fe859e546753622fe62ec2c180d7829871f590cb" exitCode=1 Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.439645 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" event={"ID":"1abfb530-b7ac-4724-8e43-d87ef92f1949","Type":"ContainerDied","Data":"fbc99df85764a40b1acdbc62fe859e546753622fe62ec2c180d7829871f590cb"} Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.440116 4681 scope.go:117] "RemoveContainer" containerID="fbc99df85764a40b1acdbc62fe859e546753622fe62ec2c180d7829871f590cb" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.455567 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting 
controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.469649 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.479959 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.488232 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.494816 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jcxvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8b960e-690a-4772-8373-bce89d00cb17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae5de3ab9fa4043cfbb22d534f986fd7c9318c8e1a7f249cfe50b07f32f04ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2d22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jcxvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.503373 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.510968 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" err="failed to patch status 
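The quoted patch body that follows this record (like the ones in the earlier "Failed to update status for pod" records) is a strategic merge patch logged as a Go-quoted string, so it arrives wrapped in layers of backslash escaping. A small helper of the following kind can unquote and pretty-print one; the sample literal is a trimmed excerpt from the network-operator record above, not a full capture:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"strconv"
)

// decodePatch unquotes a patch string copied out of the journal until it
// parses as JSON, then pretty-prints it. The pass limit of 4 is arbitrary;
// journal output may nest more than one layer of escaping.
func decodePatch(escaped string) (string, error) {
	s := escaped
	for i := 0; i < 4; i++ {
		if json.Valid([]byte(s)) {
			var out bytes.Buffer
			if err := json.Indent(&out, []byte(s), "", "  "); err != nil {
				return "", err
			}
			return out.String(), nil
		}
		u, err := strconv.Unquote(`"` + s + `"`)
		if err != nil {
			return "", err
		}
		s = u
	}
	return "", fmt.Errorf("still not valid JSON after unquoting")
}

func main() {
	sample := `{\"metadata\":{\"uid\":\"37a5e44f-9a88-4405-be8a-b645485e7312\"}}`
	pretty, err := decodePatch(sample)
	if err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Println(pretty)
}
```

Once decoded, the $setElementOrder/conditions key seen in these patches is the strategic-merge-patch directive that preserves the ordering of the pod's conditions list.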
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"539dc58c-e752-43c8-bdef-af87528b76f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10301d5307825891afb0c5a8a37015569d3275b9fdbb69135656db11a5cd6ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wh4gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.519386 4681 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2lhx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8k44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2lhx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.523414 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.523443 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.523452 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.523479 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.523490 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:58Z","lastTransitionTime":"2025-11-23T06:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.530358 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83e4c166-3ace-4773-86cd-fe2bdd216426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039e197d1ef78785cbcf351f1ec80ef09f3c9e61504351fa7a2daa5d1e298bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.16
8.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11
-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bd
bc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qgr2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.539200 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.548199 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.561521 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abfb530-b7ac-4724-8e43-d87ef92f1949\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc99df85764a40b1acdbc62fe859e546753622fe62ec2c180d7829871f590cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbc99df85764a40b1acdbc62fe859e546753622fe62ec2c180d7829871f590cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:44:57Z\\\",\\\"message\\\":\\\" 5912 handler.go:208] Removed *v1.Node event handler 2\\\\nI1123 06:44:57.419973 5912 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1123 06:44:57.420002 5912 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1123 06:44:57.420022 5912 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1123 06:44:57.420061 5912 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1123 06:44:57.420082 5912 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1123 06:44:57.420125 5912 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1123 06:44:57.420156 5912 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1123 06:44:57.420178 5912 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1123 06:44:57.420196 5912 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1123 06:44:57.420244 5912 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1123 06:44:57.420272 5912 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1123 06:44:57.420312 5912 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1123 06:44:57.420348 5912 factory.go:656] Stopping watch factory\\\\nI1123 06:44:57.420372 5912 ovnkube.go:599] Stopped ovnkube\\\\nI1123 06:44:57.420385 5912 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1123 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l6bqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.569411 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.577617 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l7wvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095e645f-7b07-4702-87f0-f3b9a6197d9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://730b2d1bf4245510d9c2ab933abbf82d3c7e7d172e6f382b691db27a598fc8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrq5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l7wvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.625432 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.625488 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.625498 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.625514 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:58 crc 
kubenswrapper[4681]: I1123 06:44:58.625524 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:58Z","lastTransitionTime":"2025-11-23T06:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.727755 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.727798 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.727808 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.727822 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.727831 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:58Z","lastTransitionTime":"2025-11-23T06:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.829825 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.829860 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.829868 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.829886 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.829895 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:58Z","lastTransitionTime":"2025-11-23T06:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.931873 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.931898 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.931907 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.931923 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:58 crc kubenswrapper[4681]: I1123 06:44:58.931934 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:58Z","lastTransitionTime":"2025-11-23T06:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.034301 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.034339 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.034348 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.034364 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.034377 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:59Z","lastTransitionTime":"2025-11-23T06:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.137190 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.137228 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.137237 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.137252 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.137262 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:59Z","lastTransitionTime":"2025-11-23T06:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.239669 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.239730 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.239743 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.239774 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.239788 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:59Z","lastTransitionTime":"2025-11-23T06:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.342152 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.342294 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.342387 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.342523 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.342613 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:59Z","lastTransitionTime":"2025-11-23T06:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.444065 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l6bqb_1abfb530-b7ac-4724-8e43-d87ef92f1949/ovnkube-controller/1.log" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.444203 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.444253 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.444266 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.444282 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.444293 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:59Z","lastTransitionTime":"2025-11-23T06:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.444741 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l6bqb_1abfb530-b7ac-4724-8e43-d87ef92f1949/ovnkube-controller/0.log" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.447408 4681 generic.go:334] "Generic (PLEG): container finished" podID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerID="0b60796d57f34f71a33d9365fac96136bfec611dc7675bb7dc779006eb60e74e" exitCode=1 Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.447469 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" event={"ID":"1abfb530-b7ac-4724-8e43-d87ef92f1949","Type":"ContainerDied","Data":"0b60796d57f34f71a33d9365fac96136bfec611dc7675bb7dc779006eb60e74e"} Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.447556 4681 scope.go:117] "RemoveContainer" containerID="fbc99df85764a40b1acdbc62fe859e546753622fe62ec2c180d7829871f590cb" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.448037 4681 scope.go:117] "RemoveContainer" containerID="0b60796d57f34f71a33d9365fac96136bfec611dc7675bb7dc779006eb60e74e" Nov 23 06:44:59 crc kubenswrapper[4681]: E1123 06:44:59.448211 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-l6bqb_openshift-ovn-kubernetes(1abfb530-b7ac-4724-8e43-d87ef92f1949)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.458253 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:59Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.466100 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l7wvz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"095e645f-7b07-4702-87f0-f3b9a6197d9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://730b2d1bf4245510d9c2ab933abbf82d3c7e7d172e6f382b691db27a598fc8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrq5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l7wvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:59Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.476423 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:59Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.486338 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:59Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.494394 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:59Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.502625 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:59Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.509503 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jcxvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8b960e-690a-4772-8373-bce89d00cb17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae5de3ab9fa4043cfbb22d534f986fd7c9318c8e1a7f249cfe50b07f32f04ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2d22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jcxvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:59Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.521649 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:59Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.530837 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"539dc58c-e752-43c8-bdef-af87528b76f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10301d5307825891afb0c5a8a37015569d3275b9fdbb69135656db11a5cd6ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wh4gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:59Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.541246 4681 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2lhx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8k44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2lhx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:59Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.546240 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.546280 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.546292 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.546310 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.546320 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:59Z","lastTransitionTime":"2025-11-23T06:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.552938 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83e4c166-3ace-4773-86cd-fe2bdd216426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039e197d1ef78785cbcf351f1ec80ef09f3c9e61504351fa7a2daa5d1e298bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.16
8.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11
-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bd
bc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qgr2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:59Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.562495 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:59Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.572153 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:59Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.587171 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abfb530-b7ac-4724-8e43-d87ef92f1949\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b60796d57f34f71a33d9365fac96136bfec611dc7675bb7dc779006eb60e74e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbc99df85764a40b1acdbc62fe859e546753622fe62ec2c180d7829871f590cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:44:57Z\\\",\\\"message\\\":\\\" 5912 handler.go:208] Removed *v1.Node event handler 2\\\\nI1123 06:44:57.419973 5912 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1123 06:44:57.420002 5912 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1123 06:44:57.420022 5912 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1123 06:44:57.420061 5912 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1123 06:44:57.420082 5912 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1123 06:44:57.420125 5912 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1123 06:44:57.420156 5912 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1123 06:44:57.420178 5912 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1123 06:44:57.420196 5912 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1123 06:44:57.420244 5912 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1123 06:44:57.420272 5912 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1123 06:44:57.420312 5912 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1123 06:44:57.420348 5912 factory.go:656] Stopping watch factory\\\\nI1123 06:44:57.420372 5912 ovnkube.go:599] Stopped ovnkube\\\\nI1123 06:44:57.420385 5912 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1123 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b60796d57f34f71a33d9365fac96136bfec611dc7675bb7dc779006eb60e74e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:44:59Z\\\",\\\"message\\\":\\\"ot add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has 
stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:58Z is after 2025-08-24T17:21:41Z]\\\\nI1123 06:44:59.037639 6041 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-diagnostics/network-check-target]} name:Service_openshift-network-diagnostics/network-check-target_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.219:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7594bb65-e742-44b3-a975-d639b1128be5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1123 06:44:59.037639 6041 model_client\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l6bqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:59Z is after 2025-08-24T17:21:41Z" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.648637 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.648675 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.648685 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.648703 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.648712 4681 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:59Z","lastTransitionTime":"2025-11-23T06:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.750440 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.750503 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.750516 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.750531 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.750541 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:59Z","lastTransitionTime":"2025-11-23T06:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.852719 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.852750 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.852769 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.852787 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.852796 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:59Z","lastTransitionTime":"2025-11-23T06:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.955138 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.955176 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.955186 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.955203 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:44:59 crc kubenswrapper[4681]: I1123 06:44:59.955217 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:44:59Z","lastTransitionTime":"2025-11-23T06:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.057363 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.057401 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.057411 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.057428 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.057440 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:00Z","lastTransitionTime":"2025-11-23T06:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.145639 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6"] Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.146092 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.148025 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.148198 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.157420 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserv
er-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.158655 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.158686 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.158696 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.158712 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.158721 4681 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:00Z","lastTransitionTime":"2025-11-23T06:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.166804 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.174745 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.182368 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.188514 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jcxvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8b960e-690a-4772-8373-bce89d00cb17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae5de3ab9fa4043cfbb22d534f986fd7c9318c8e1a7f249cfe50b07f32f04ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2d22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jcxvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.195595 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"842356bd-1174-4109-a183-b368c16f3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jvlq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.203411 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.210992 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"539dc58c-e752-43c8-bdef-af87528b76f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10301d5307825891afb0c5a8a37015569d3275b9fdbb69135656db11a5cd6ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wh4gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.219091 4681 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2lhx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8k44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2lhx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.223763 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24nlt\" (UniqueName: \"kubernetes.io/projected/842356bd-1174-4109-a183-b368c16f3d08-kube-api-access-24nlt\") pod \"ovnkube-control-plane-749d76644c-jvlq6\" (UID: \"842356bd-1174-4109-a183-b368c16f3d08\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.223804 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/842356bd-1174-4109-a183-b368c16f3d08-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-jvlq6\" (UID: \"842356bd-1174-4109-a183-b368c16f3d08\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.223849 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/842356bd-1174-4109-a183-b368c16f3d08-env-overrides\") pod \"ovnkube-control-plane-749d76644c-jvlq6\" (UID: \"842356bd-1174-4109-a183-b368c16f3d08\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.223963 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/842356bd-1174-4109-a183-b368c16f3d08-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-jvlq6\" (UID: \"842356bd-1174-4109-a183-b368c16f3d08\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.228684 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83e4c166-3ace-4773-86cd-fe2bdd216426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039e197d1ef78785cbcf351f1ec80ef09f3c9e61504351fa7a2daa5d1e298bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qgr2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.236427 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.244203 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.251683 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.251716 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.251682 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:45:00 crc kubenswrapper[4681]: E1123 06:45:00.251797 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:45:00 crc kubenswrapper[4681]: E1123 06:45:00.251880 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:45:00 crc kubenswrapper[4681]: E1123 06:45:00.252124 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.257868 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abfb530-b7ac-4724-8e43-d87ef92f1949\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b60796d57f34f71a33d9365fac96136bfec611dc7675bb7dc779006eb60e74e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbc99df85764a40b1acdbc62fe859e546753622fe62ec2c180d7829871f590cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:44:57Z\\\",\\\"message\\\":\\\" 5912 handler.go:208] Removed *v1.Node event handler 2\\\\nI1123 06:44:57.419973 5912 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1123 06:44:57.420002 5912 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1123 06:44:57.420022 5912 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1123 06:44:57.420061 5912 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1123 06:44:57.420082 5912 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1123 06:44:57.420125 5912 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1123 06:44:57.420156 5912 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1123 06:44:57.420178 5912 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1123 06:44:57.420196 5912 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1123 06:44:57.420244 5912 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1123 06:44:57.420272 5912 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1123 06:44:57.420312 5912 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1123 06:44:57.420348 5912 factory.go:656] Stopping watch factory\\\\nI1123 06:44:57.420372 5912 ovnkube.go:599] Stopped ovnkube\\\\nI1123 06:44:57.420385 5912 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1123 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b60796d57f34f71a33d9365fac96136bfec611dc7675bb7dc779006eb60e74e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:44:59Z\\\",\\\"message\\\":\\\"ot add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:58Z is after 2025-08-24T17:21:41Z]\\\\nI1123 06:44:59.037639 6041 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-diagnostics/network-check-target]} name:Service_openshift-network-diagnostics/network-check-target_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.219:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7594bb65-e742-44b3-a975-d639b1128be5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1123 06:44:59.037639 6041 
model_client\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l6bqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.260994 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.261030 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.261041 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.261059 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.261087 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:00Z","lastTransitionTime":"2025-11-23T06:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.267347 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.274646 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l7wvz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"095e645f-7b07-4702-87f0-f3b9a6197d9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://730b2d1bf4245510d9c2ab933abbf82d3c7e7d172e6f382b691db27a598fc8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrq5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l7wvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.324449 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/842356bd-1174-4109-a183-b368c16f3d08-env-overrides\") pod \"ovnkube-control-plane-749d76644c-jvlq6\" (UID: \"842356bd-1174-4109-a183-b368c16f3d08\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.324542 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/842356bd-1174-4109-a183-b368c16f3d08-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-jvlq6\" (UID: \"842356bd-1174-4109-a183-b368c16f3d08\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.324570 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24nlt\" (UniqueName: 
\"kubernetes.io/projected/842356bd-1174-4109-a183-b368c16f3d08-kube-api-access-24nlt\") pod \"ovnkube-control-plane-749d76644c-jvlq6\" (UID: \"842356bd-1174-4109-a183-b368c16f3d08\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.324595 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/842356bd-1174-4109-a183-b368c16f3d08-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-jvlq6\" (UID: \"842356bd-1174-4109-a183-b368c16f3d08\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.325109 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/842356bd-1174-4109-a183-b368c16f3d08-env-overrides\") pod \"ovnkube-control-plane-749d76644c-jvlq6\" (UID: \"842356bd-1174-4109-a183-b368c16f3d08\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.325292 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/842356bd-1174-4109-a183-b368c16f3d08-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-jvlq6\" (UID: \"842356bd-1174-4109-a183-b368c16f3d08\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.333179 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/842356bd-1174-4109-a183-b368c16f3d08-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-jvlq6\" (UID: \"842356bd-1174-4109-a183-b368c16f3d08\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.337093 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24nlt\" (UniqueName: \"kubernetes.io/projected/842356bd-1174-4109-a183-b368c16f3d08-kube-api-access-24nlt\") pod \"ovnkube-control-plane-749d76644c-jvlq6\" (UID: \"842356bd-1174-4109-a183-b368c16f3d08\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.364110 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.364137 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.364147 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.364163 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.364174 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:00Z","lastTransitionTime":"2025-11-23T06:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.452615 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l6bqb_1abfb530-b7ac-4724-8e43-d87ef92f1949/ovnkube-controller/1.log" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.456776 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.459804 4681 scope.go:117] "RemoveContainer" containerID="0b60796d57f34f71a33d9365fac96136bfec611dc7675bb7dc779006eb60e74e" Nov 23 06:45:00 crc kubenswrapper[4681]: E1123 06:45:00.459947 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-l6bqb_openshift-ovn-kubernetes(1abfb530-b7ac-4724-8e43-d87ef92f1949)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.466408 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.466438 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.466448 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.466475 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.466488 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:00Z","lastTransitionTime":"2025-11-23T06:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:00 crc kubenswrapper[4681]: W1123 06:45:00.470594 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod842356bd_1174_4109_a183_b368c16f3d08.slice/crio-b1724b1b71999e9545a8db87b24d6e86d30d24877db18b783e68d7a2c5e4f6f2 WatchSource:0}: Error finding container b1724b1b71999e9545a8db87b24d6e86d30d24877db18b783e68d7a2c5e4f6f2: Status 404 returned error can't find the container with id b1724b1b71999e9545a8db87b24d6e86d30d24877db18b783e68d7a2c5e4f6f2 Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.470759 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.482491 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.490947 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jcxvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8b960e-690a-4772-8373-bce89d00cb17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae5de3ab9fa4043cfbb22d534f986fd7c9318c8e1a7f249cfe50b07f32f04ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2d22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jcxvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.500021 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"842356bd-1174-4109-a183-b368c16f3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jvlq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.509157 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.518273 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.525723 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"539dc58c-e752-43c8-bdef-af87528b76f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10301d5307825891afb0c5a8a37015569d3275b9fdbb69135656db11a5cd6ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wh4gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.535532 4681 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2lhx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8k44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2lhx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.545317 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83e4c166-3ace-4773-86cd-fe2bdd216426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039e197d1ef78785cbcf351f1ec80ef09f3c9e61504351fa7a2daa5d1e298bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbba
0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qgr2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.554206 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.562932 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.568640 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.568664 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.568675 4681 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.568697 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.568712 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:00Z","lastTransitionTime":"2025-11-23T06:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.578841 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abfb530-b7ac-4724-8e43-d87ef92f1949\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b60796d57f34f71a33d9365fac96136bfec611d
c7675bb7dc779006eb60e74e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b60796d57f34f71a33d9365fac96136bfec611dc7675bb7dc779006eb60e74e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:44:59Z\\\",\\\"message\\\":\\\"ot add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:58Z is after 2025-08-24T17:21:41Z]\\\\nI1123 06:44:59.037639 6041 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-diagnostics/network-check-target]} name:Service_openshift-network-diagnostics/network-check-target_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.219:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7594bb65-e742-44b3-a975-d639b1128be5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1123 06:44:59.037639 6041 model_client\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l6bqb_openshift-ovn-kubernetes(1abfb530-b7ac-4724-8e43-d87ef92f1949)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l6bqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.604948 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.616130 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l7wvz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"095e645f-7b07-4702-87f0-f3b9a6197d9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://730b2d1bf4245510d9c2ab933abbf82d3c7e7d172e6f382b691db27a598fc8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrq5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l7wvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.647107 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.671192 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.671238 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.671249 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.671267 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.671277 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:00Z","lastTransitionTime":"2025-11-23T06:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.773960 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.773990 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.773999 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.774015 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.774024 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:00Z","lastTransitionTime":"2025-11-23T06:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.876021 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.876073 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.876084 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.876101 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.876111 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:00Z","lastTransitionTime":"2025-11-23T06:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.978280 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.978314 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.978325 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.978340 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:00 crc kubenswrapper[4681]: I1123 06:45:00.978349 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:00Z","lastTransitionTime":"2025-11-23T06:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.080856 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.080886 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.080894 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.080909 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.080921 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:01Z","lastTransitionTime":"2025-11-23T06:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.182870 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.183101 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.183194 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.183268 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.183326 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:01Z","lastTransitionTime":"2025-11-23T06:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.258792 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.285520 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.285559 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.285569 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.285582 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.285592 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:01Z","lastTransitionTime":"2025-11-23T06:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.387343 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.387808 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.387876 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.387943 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.388003 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:01Z","lastTransitionTime":"2025-11-23T06:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.463045 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" event={"ID":"842356bd-1174-4109-a183-b368c16f3d08","Type":"ContainerStarted","Data":"b762cf0aee0bbca586dc835d6be4a69921f2f0d6a11262bbea1df14352fd3822"} Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.463092 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" event={"ID":"842356bd-1174-4109-a183-b368c16f3d08","Type":"ContainerStarted","Data":"a30a93104ef4dbbe5288684d627e4f4ca7e4477edf99c2012169a7c086900352"} Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.463104 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" event={"ID":"842356bd-1174-4109-a183-b368c16f3d08","Type":"ContainerStarted","Data":"b1724b1b71999e9545a8db87b24d6e86d30d24877db18b783e68d7a2c5e4f6f2"} Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.463630 4681 scope.go:117] "RemoveContainer" containerID="0b60796d57f34f71a33d9365fac96136bfec611dc7675bb7dc779006eb60e74e" Nov 23 06:45:01 crc kubenswrapper[4681]: E1123 06:45:01.463775 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-l6bqb_openshift-ovn-kubernetes(1abfb530-b7ac-4724-8e43-d87ef92f1949)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.472500 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.480835 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"539dc58c-e752-43c8-bdef-af87528b76f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10301d5307825891afb0c5a8a37015569d3275b9fdbb69135656db11a5cd6ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wh4gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.489526 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.489630 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.489691 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.489751 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.489808 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:01Z","lastTransitionTime":"2025-11-23T06:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.493986 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2lhx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8k44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2lhx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.504418 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83e4c166-3ace-4773-86cd-fe2bdd216426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039e197d1ef78785cbcf351f1ec80ef09f3c9e61504351fa7a2daa5d1e298bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qgr2n\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.512198 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.520188 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.533181 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-kv72z"] Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.533178 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abfb530-b7ac-4724-8e43-d87ef92f1949\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b60796d57f34f71a33d9365fac96136bfec611dc7675bb7dc779006eb60e74e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b60796d57f34f71a33d9365fac96136bfec611dc7675bb7dc779006eb60e74e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:44:59Z\\\",\\\"message\\\":\\\"ot add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:58Z is after 2025-08-24T17:21:41Z]\\\\nI1123 06:44:59.037639 6041 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-diagnostics/network-check-target]} name:Service_openshift-network-diagnostics/network-check-target_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.219:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7594bb65-e742-44b3-a975-d639b1128be5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1123 06:44:59.037639 6041 model_client\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l6bqb_openshift-ovn-kubernetes(1abfb530-b7ac-4724-8e43-d87ef92f1949)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l6bqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.533657 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:45:01 crc kubenswrapper[4681]: E1123 06:45:01.533726 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.543262 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.550280 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l7wvz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"095e645f-7b07-4702-87f0-f3b9a6197d9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://730b2d1bf4245510d9c2ab933abbf82d3c7e7d172e6f382b691db27a598fc8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrq5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l7wvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.559284 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.567701 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.575972 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.584663 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.591659 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jcxvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8b960e-690a-4772-8373-bce89d00cb17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae5de3ab9fa4043cfbb22d534f986fd7c9318c8e1a7f249cfe50b07f32f04ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2d22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jcxvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.592279 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.592310 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.592321 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.592336 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.592345 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:01Z","lastTransitionTime":"2025-11-23T06:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.599995 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"842356bd-1174-4109-a183-b368c16f3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a30a93104ef4dbbe5288684d627e4f4ca7e4477edf99c2012169a7c086900352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b762cf0aee0bbca586dc835d6be4a69921f2f0d6a11262bbea1df14352fd3822\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jvlq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.609567 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83e4c166-3ace-4773-86cd-fe2bdd216426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039e197d1ef78785cbcf351f1ec80ef09f3c9e61504351fa7a2daa5d1e298bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"rea
son\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a71
4c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qgr2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.617725 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.625168 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"539dc58c-e752-43c8-bdef-af87528b76f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10301d5307825891afb0c5a8a37015569d3275b9fdbb69135656db11a5cd6ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wh4gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.633611 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2lhx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":
\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8k44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2lhx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.635807 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6eef1a94-78a8-4389-b1fe-2db3786ba043-metrics-certs\") pod \"network-metrics-daemon-kv72z\" (UID: \"6eef1a94-78a8-4389-b1fe-2db3786ba043\") " pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.635852 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnhcp\" (UniqueName: \"kubernetes.io/projected/6eef1a94-78a8-4389-b1fe-2db3786ba043-kube-api-access-pnhcp\") pod \"network-metrics-daemon-kv72z\" (UID: \"6eef1a94-78a8-4389-b1fe-2db3786ba043\") " pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.642069 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.650297 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.663272 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abfb530-b7ac-4724-8e43-d87ef92f1949\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b60796d57f34f71a33d9365fac96136bfec611d
c7675bb7dc779006eb60e74e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b60796d57f34f71a33d9365fac96136bfec611dc7675bb7dc779006eb60e74e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:44:59Z\\\",\\\"message\\\":\\\"ot add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:58Z is after 2025-08-24T17:21:41Z]\\\\nI1123 06:44:59.037639 6041 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-diagnostics/network-check-target]} name:Service_openshift-network-diagnostics/network-check-target_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.219:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7594bb65-e742-44b3-a975-d639b1128be5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1123 06:44:59.037639 6041 model_client\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l6bqb_openshift-ovn-kubernetes(1abfb530-b7ac-4724-8e43-d87ef92f1949)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l6bqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.671438 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.678403 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l7wvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095e645f-7b07-4702-87f0-f3b9a6197d9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://730b2d1bf4245510d9c2ab933abbf82d3c7e7d172e6f382b691db27a598fc8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrq5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l7wvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.684997 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jcxvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8b960e-690a-4772-8373-bce89d00cb17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae5de3ab9fa4043cfbb22d534f986fd7c9318c8e1a7f249cfe50b07f32f04ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2d22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jcxvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.692783 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"842356bd-1174-4109-a183-b368c16f3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a30a93104ef4dbbe5288684d627e4f4ca7e4477edf99c2012169a7c086900352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b762cf0aee0bbca586dc835d6be4a69921f2f0d6a11262bbea1df14352fd3822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jvlq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:01Z is after 2025-08-24T17:21:41Z" Nov 23 
06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.694320 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.694355 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.694366 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.694378 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.694389 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:01Z","lastTransitionTime":"2025-11-23T06:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.700669 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kv72z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eef1a94-78a8-4389-b1fe-2db3786ba043\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnhcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnhcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kv72z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.710210 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.718785 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.726521 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.734211 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.736493 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6eef1a94-78a8-4389-b1fe-2db3786ba043-metrics-certs\") pod \"network-metrics-daemon-kv72z\" (UID: \"6eef1a94-78a8-4389-b1fe-2db3786ba043\") " pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.736526 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnhcp\" (UniqueName: \"kubernetes.io/projected/6eef1a94-78a8-4389-b1fe-2db3786ba043-kube-api-access-pnhcp\") pod \"network-metrics-daemon-kv72z\" (UID: \"6eef1a94-78a8-4389-b1fe-2db3786ba043\") " pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:45:01 crc kubenswrapper[4681]: E1123 06:45:01.736588 4681 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 06:45:01 crc kubenswrapper[4681]: E1123 06:45:01.736645 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6eef1a94-78a8-4389-b1fe-2db3786ba043-metrics-certs podName:6eef1a94-78a8-4389-b1fe-2db3786ba043 nodeName:}" failed. No retries permitted until 2025-11-23 06:45:02.236630199 +0000 UTC m=+39.306139436 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6eef1a94-78a8-4389-b1fe-2db3786ba043-metrics-certs") pod "network-metrics-daemon-kv72z" (UID: "6eef1a94-78a8-4389-b1fe-2db3786ba043") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.749688 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnhcp\" (UniqueName: \"kubernetes.io/projected/6eef1a94-78a8-4389-b1fe-2db3786ba043-kube-api-access-pnhcp\") pod \"network-metrics-daemon-kv72z\" (UID: \"6eef1a94-78a8-4389-b1fe-2db3786ba043\") " pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.795815 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.795840 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.795851 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.795863 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.795873 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:01Z","lastTransitionTime":"2025-11-23T06:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.898117 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.898144 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.898154 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.898168 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:01 crc kubenswrapper[4681]: I1123 06:45:01.898176 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:01Z","lastTransitionTime":"2025-11-23T06:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.000398 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.000669 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.000694 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.000714 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.000728 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:02Z","lastTransitionTime":"2025-11-23T06:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.102397 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.102421 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.102430 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.102439 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.102449 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:02Z","lastTransitionTime":"2025-11-23T06:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.204653 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.204696 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.204708 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.204725 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.204735 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:02Z","lastTransitionTime":"2025-11-23T06:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.241323 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6eef1a94-78a8-4389-b1fe-2db3786ba043-metrics-certs\") pod \"network-metrics-daemon-kv72z\" (UID: \"6eef1a94-78a8-4389-b1fe-2db3786ba043\") " pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:45:02 crc kubenswrapper[4681]: E1123 06:45:02.241450 4681 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 06:45:02 crc kubenswrapper[4681]: E1123 06:45:02.241532 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6eef1a94-78a8-4389-b1fe-2db3786ba043-metrics-certs podName:6eef1a94-78a8-4389-b1fe-2db3786ba043 nodeName:}" failed. No retries permitted until 2025-11-23 06:45:03.241516986 +0000 UTC m=+40.311026223 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6eef1a94-78a8-4389-b1fe-2db3786ba043-metrics-certs") pod "network-metrics-daemon-kv72z" (UID: "6eef1a94-78a8-4389-b1fe-2db3786ba043") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.251332 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.251340 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.251385 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:45:02 crc kubenswrapper[4681]: E1123 06:45:02.251487 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:45:02 crc kubenswrapper[4681]: E1123 06:45:02.251611 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:45:02 crc kubenswrapper[4681]: E1123 06:45:02.251671 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.310588 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.310623 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.310634 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.310651 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.310664 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:02Z","lastTransitionTime":"2025-11-23T06:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.412792 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.412815 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.412823 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.412831 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.412839 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:02Z","lastTransitionTime":"2025-11-23T06:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.514898 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.514933 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.514943 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.514955 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.514982 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:02Z","lastTransitionTime":"2025-11-23T06:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.616551 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.616586 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.616596 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.616609 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.616618 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:02Z","lastTransitionTime":"2025-11-23T06:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.718619 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.718651 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.718662 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.718673 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.718680 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:02Z","lastTransitionTime":"2025-11-23T06:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.820483 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.820518 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.820526 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.820541 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.820555 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:02Z","lastTransitionTime":"2025-11-23T06:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.922269 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.922298 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.922307 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.922317 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:02 crc kubenswrapper[4681]: I1123 06:45:02.922326 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:02Z","lastTransitionTime":"2025-11-23T06:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.024204 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.024230 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.024240 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.024256 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.024267 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:03Z","lastTransitionTime":"2025-11-23T06:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.126410 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.126443 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.126452 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.126475 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.126483 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:03Z","lastTransitionTime":"2025-11-23T06:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.228250 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.228275 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.228285 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.228300 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.228309 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:03Z","lastTransitionTime":"2025-11-23T06:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.250674 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6eef1a94-78a8-4389-b1fe-2db3786ba043-metrics-certs\") pod \"network-metrics-daemon-kv72z\" (UID: \"6eef1a94-78a8-4389-b1fe-2db3786ba043\") " pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:45:03 crc kubenswrapper[4681]: E1123 06:45:03.250905 4681 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 06:45:03 crc kubenswrapper[4681]: E1123 06:45:03.251053 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6eef1a94-78a8-4389-b1fe-2db3786ba043-metrics-certs podName:6eef1a94-78a8-4389-b1fe-2db3786ba043 nodeName:}" failed. No retries permitted until 2025-11-23 06:45:05.251022568 +0000 UTC m=+42.320531805 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6eef1a94-78a8-4389-b1fe-2db3786ba043-metrics-certs") pod "network-metrics-daemon-kv72z" (UID: "6eef1a94-78a8-4389-b1fe-2db3786ba043") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.250918 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:45:03 crc kubenswrapper[4681]: E1123 06:45:03.251291 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.260136 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kv72z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eef1a94-78a8-4389-b1fe-2db3786ba043\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnhcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnhcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kv72z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:03Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.270791 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:03Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.280325 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:03Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.289784 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:03Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.298093 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:03Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.305114 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jcxvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8b960e-690a-4772-8373-bce89d00cb17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae5de3ab9fa4043cfbb22d534f986fd7c9318c8e1a7f249cfe50b07f32f04ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2d22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jcxvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:03Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.318933 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"842356bd-1174-4109-a183-b368c16f3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a30a93104ef4dbbe5288684d627e4f4ca7e4477edf99c2012169a7c086900352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b762cf0aee0bbca586dc835d6be4a69921f2f0d6a11262bbea1df14352fd3822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:00Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jvlq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:03Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.326643 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:03Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.329836 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.329864 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.329874 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.329887 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.329896 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:03Z","lastTransitionTime":"2025-11-23T06:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.334197 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"539dc58c-e752-43c8-bdef-af87528b76f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10301d5307825891afb0c5a8a37015569d3275b9fdbb69135656db11a5cd6ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wh4gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:03Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.342592 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2lhx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8k44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2lhx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:03Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.352411 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83e4c166-3ace-4773-86cd-fe2bdd216426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039e197d1ef78785cbcf351f1ec80ef09f3c9e61504351fa7a2daa5d1e298bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qgr2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:03Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.360746 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506
ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:03Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.368852 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:03Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.381973 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abfb530-b7ac-4724-8e43-d87ef92f1949\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b60796d57f34f71a33d9365fac96136bfec611dc7675bb7dc779006eb60e74e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b60796d57f34f71a33d9365fac96136bfec611dc7675bb7dc779006eb60e74e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:44:59Z\\\",\\\"message\\\":\\\"ot add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:58Z is after 2025-08-24T17:21:41Z]\\\\nI1123 06:44:59.037639 6041 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-diagnostics/network-check-target]} name:Service_openshift-network-diagnostics/network-check-target_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.219:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7594bb65-e742-44b3-a975-d639b1128be5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1123 06:44:59.037639 6041 model_client\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l6bqb_openshift-ovn-kubernetes(1abfb530-b7ac-4724-8e43-d87ef92f1949)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l6bqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:03Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.393450 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:03Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.406741 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l7wvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095e645f-7b07-4702-87f0-f3b9a6197d9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://730b2d1bf4245510d9c2ab933abbf82d3c7e7d172e6f382b691db27a598fc8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrq5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l7wvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:03Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.432330 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.432446 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.432533 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.432626 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.432698 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:03Z","lastTransitionTime":"2025-11-23T06:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.534313 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.534512 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.534521 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.534534 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.534543 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:03Z","lastTransitionTime":"2025-11-23T06:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.636240 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.636280 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.636290 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.636307 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.636320 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:03Z","lastTransitionTime":"2025-11-23T06:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.737869 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.737920 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.737930 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.737941 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.737952 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:03Z","lastTransitionTime":"2025-11-23T06:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.839873 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.839901 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.839910 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.839922 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.839946 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:03Z","lastTransitionTime":"2025-11-23T06:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.942401 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.942591 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.942659 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.942721 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:03 crc kubenswrapper[4681]: I1123 06:45:03.942783 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:03Z","lastTransitionTime":"2025-11-23T06:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.044349 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.044379 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.044428 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.044441 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.044449 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:04Z","lastTransitionTime":"2025-11-23T06:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.146288 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.146322 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.146332 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.146347 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.146356 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:04Z","lastTransitionTime":"2025-11-23T06:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.248197 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.248223 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.248232 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.248262 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.248272 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:04Z","lastTransitionTime":"2025-11-23T06:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.251451 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.251481 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.251544 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:45:04 crc kubenswrapper[4681]: E1123 06:45:04.251590 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:45:04 crc kubenswrapper[4681]: E1123 06:45:04.251553 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:45:04 crc kubenswrapper[4681]: E1123 06:45:04.251670 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.350439 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.350491 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.350501 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.350516 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.350529 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:04Z","lastTransitionTime":"2025-11-23T06:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.452846 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.452872 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.452885 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.452895 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.452903 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:04Z","lastTransitionTime":"2025-11-23T06:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.554652 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.554690 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.554700 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.554712 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.554721 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:04Z","lastTransitionTime":"2025-11-23T06:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.656832 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.656949 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.657014 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.657093 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.657172 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:04Z","lastTransitionTime":"2025-11-23T06:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.758682 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.758712 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.758724 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.758738 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.758748 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:04Z","lastTransitionTime":"2025-11-23T06:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.860952 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.861138 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.861208 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.861270 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.861326 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:04Z","lastTransitionTime":"2025-11-23T06:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.963317 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.963341 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.963350 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.963360 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:04 crc kubenswrapper[4681]: I1123 06:45:04.963367 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:04Z","lastTransitionTime":"2025-11-23T06:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.065239 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.065268 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.065277 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.065287 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.065298 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:05Z","lastTransitionTime":"2025-11-23T06:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.166711 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.166739 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.166748 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.166757 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.166767 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:05Z","lastTransitionTime":"2025-11-23T06:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.251623 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:45:05 crc kubenswrapper[4681]: E1123 06:45:05.251762 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.266524 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6eef1a94-78a8-4389-b1fe-2db3786ba043-metrics-certs\") pod \"network-metrics-daemon-kv72z\" (UID: \"6eef1a94-78a8-4389-b1fe-2db3786ba043\") " pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:45:05 crc kubenswrapper[4681]: E1123 06:45:05.266699 4681 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 06:45:05 crc kubenswrapper[4681]: E1123 06:45:05.266765 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6eef1a94-78a8-4389-b1fe-2db3786ba043-metrics-certs podName:6eef1a94-78a8-4389-b1fe-2db3786ba043 nodeName:}" failed. No retries permitted until 2025-11-23 06:45:09.266749766 +0000 UTC m=+46.336259002 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6eef1a94-78a8-4389-b1fe-2db3786ba043-metrics-certs") pod "network-metrics-daemon-kv72z" (UID: "6eef1a94-78a8-4389-b1fe-2db3786ba043") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.267734 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.267782 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.267795 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.267818 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.267829 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:05Z","lastTransitionTime":"2025-11-23T06:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.312074 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.312100 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.312109 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.312124 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.312134 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:05Z","lastTransitionTime":"2025-11-23T06:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:05 crc kubenswrapper[4681]: E1123 06:45:05.326231 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:05Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.328842 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.328892 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.328903 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.328923 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.328934 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:05Z","lastTransitionTime":"2025-11-23T06:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:05 crc kubenswrapper[4681]: E1123 06:45:05.337803 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:05Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.340000 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.340036 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.340045 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.340059 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.340068 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:05Z","lastTransitionTime":"2025-11-23T06:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:05 crc kubenswrapper[4681]: E1123 06:45:05.347689 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{ ... status patch payload elided; identical to the previous attempt (same $setElementOrder/conditions, allocatable, capacity, conditions, image list, and nodeInfo) ... }\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:05Z is after 2025-08-24T17:21:41Z"
Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.349668 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.349696 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
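
Every retry in this stretch of the log dies at the same point: the node.network-node-identity.openshift.io webhook listener on 127.0.0.1:9743 presents a serving certificate whose NotAfter (2025-08-24T17:21:41Z) is months behind the node clock (2025-11-23), so the TLS handshake fails before the status patch is ever evaluated. A minimal Go sketch of the validity-window comparison being enforced; the certificate path below is hypothetical, since the log never names the file backing the listener:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Hypothetical path: the log identifies the webhook endpoint but not
        // the certificate file that backs it.
        pemBytes, err := os.ReadFile("/path/to/network-node-identity-serving.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        now := time.Now().UTC()
        switch {
        case now.After(cert.NotAfter):
            // The case seen in the log: current time 2025-11-23T06:45:05Z is
            // after the certificate's NotAfter of 2025-08-24T17:21:41Z.
            fmt.Printf("expired: current time %s is after %s\n",
                now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
        case now.Before(cert.NotBefore):
            fmt.Printf("not yet valid: current time %s is before %s\n",
                now.Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
        default:
            fmt.Printf("valid until %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
        }
    }
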
Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.349705 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.349715 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.349722 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:05Z","lastTransitionTime":"2025-11-23T06:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:05 crc kubenswrapper[4681]: E1123 06:45:05.357837 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{ ... status patch payload elided; identical to the previous attempt ... }\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:05Z is after 2025-08-24T17:21:41Z"
Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.360380 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.360410 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
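
Each setters.go:603 record above shows the kubelet rebuilding the node's Ready condition from the runtime status. A minimal sketch, assuming the standard k8s.io/api and k8s.io/apimachinery modules are available, of constructing the same condition object that appears as condition={...} in these lines:

    package main

    import (
        "encoding/json"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        now := metav1.NewTime(time.Now())
        // Mirrors the condition={...} payload logged by setters.go above:
        // Ready=False with reason KubeletNotReady while the CNI is missing.
        cond := corev1.NodeCondition{
            Type:               corev1.NodeReady,
            Status:             corev1.ConditionFalse,
            LastHeartbeatTime:  now,
            LastTransitionTime: now,
            Reason:             "KubeletNotReady",
            Message: "container runtime network not ready: NetworkReady=false " +
                "reason:NetworkPluginNotReady message:Network plugin returns error: " +
                "no CNI configuration file in /etc/kubernetes/cni/net.d/. " +
                "Has your network provider started?",
        }
        out, err := json.Marshal(cond)
        if err != nil {
            panic(err)
        }
        // Prints the same type/status/lastHeartbeatTime/... fields as the log.
        fmt.Println(string(out))
    }
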
Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.360422 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.360435 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.360444 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:05Z","lastTransitionTime":"2025-11-23T06:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:05 crc kubenswrapper[4681]: E1123 06:45:05.368497 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{ ... status patch payload elided; identical to the previous attempt ... }\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:05Z is after 2025-08-24T17:21:41Z"
Nov 23 06:45:05 crc kubenswrapper[4681]: E1123 06:45:05.368597 4681 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.369491 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
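
The 06:45:05.368597 record is the kubelet giving up after its bounded retry loop around the status patch. A minimal sketch of that pattern; nodeStatusUpdateRetry = 5 mirrors the upstream kubelet constant, and tryPatchNodeStatus is a hypothetical stand-in for the real API call, which in this log always fails at the webhook:

    package main

    import (
        "errors"
        "fmt"
    )

    // Mirrors the upstream kubelet constant bounding status update attempts.
    const nodeStatusUpdateRetry = 5

    // Hypothetical stand-in for the real PATCH against the API server; in
    // the log above every attempt is rejected by the expired webhook cert.
    func tryPatchNodeStatus() error {
        return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": certificate has expired`)
    }

    func updateNodeStatus() error {
        for i := 0; i < nodeStatusUpdateRetry; i++ {
            err := tryPatchNodeStatus()
            if err == nil {
                return nil
            }
            // Matches "Error updating node status, will retry" above.
            fmt.Printf("Error updating node status, will retry: %v\n", err)
        }
        // The terminal error string seen at 06:45:05.368597.
        return fmt.Errorf("update node status exceeds retry count")
    }

    func main() {
        if err := updateNodeStatus(); err != nil {
            fmt.Println("Unable to update node status:", err)
        }
    }
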
event="NodeHasSufficientMemory" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.369510 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.369517 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.369528 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.369536 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:05Z","lastTransitionTime":"2025-11-23T06:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.471042 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.471074 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.471092 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.471104 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.471113 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:05Z","lastTransitionTime":"2025-11-23T06:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.572880 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.572908 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.572916 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.572928 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.572936 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:05Z","lastTransitionTime":"2025-11-23T06:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.674898 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.674934 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.674948 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.674979 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.674989 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:05Z","lastTransitionTime":"2025-11-23T06:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.776580 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.776609 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.776617 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.776628 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.776639 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:05Z","lastTransitionTime":"2025-11-23T06:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.878496 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.878529 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.878540 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.878552 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.878560 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:05Z","lastTransitionTime":"2025-11-23T06:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.980503 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.980539 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.980549 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.980563 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:05 crc kubenswrapper[4681]: I1123 06:45:05.980573 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:05Z","lastTransitionTime":"2025-11-23T06:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:06 crc kubenswrapper[4681]: I1123 06:45:06.082132 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:06 crc kubenswrapper[4681]: I1123 06:45:06.082166 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:06 crc kubenswrapper[4681]: I1123 06:45:06.082174 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:06 crc kubenswrapper[4681]: I1123 06:45:06.082186 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:06 crc kubenswrapper[4681]: I1123 06:45:06.082196 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:06Z","lastTransitionTime":"2025-11-23T06:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:06 crc kubenswrapper[4681]: I1123 06:45:06.184205 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:06 crc kubenswrapper[4681]: I1123 06:45:06.184237 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:06 crc kubenswrapper[4681]: I1123 06:45:06.184245 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:06 crc kubenswrapper[4681]: I1123 06:45:06.184254 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:06 crc kubenswrapper[4681]: I1123 06:45:06.184263 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:06Z","lastTransitionTime":"2025-11-23T06:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 23 06:45:06 crc kubenswrapper[4681]: I1123 06:45:06.251453 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 06:45:06 crc kubenswrapper[4681]: I1123 06:45:06.251493 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 06:45:06 crc kubenswrapper[4681]: I1123 06:45:06.251540 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:45:06 crc kubenswrapper[4681]: E1123 06:45:06.251576 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 23 06:45:06 crc kubenswrapper[4681]: E1123 06:45:06.251652 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 23 06:45:06 crc kubenswrapper[4681]: E1123 06:45:06.251732 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[node-status block repeats at 06:45:06.286]
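The recurring error text "no CNI configuration file in /etc/kubernetes/cni/net.d/" means the runtime found nothing in that directory to load as a network config. The check essentially amounts to scanning the directory for config files; below is a simplified Go sketch of that scan. It approximates what libcni's config loading does, it is not the actual implementation, and the accepted extensions are an assumption.

// cnicheck.go - report whether a CNI config directory contains any network
// configuration, approximating the check behind the error above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path taken from the log message
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", dir, err)
		return
	}
	var found []string
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // assumed set of accepted extensions
			found = append(found, e.Name())
		}
	}
	if len(found) == 0 {
		fmt.Println("no CNI configuration file found; network plugin not ready")
		return
	}
	fmt.Println("CNI configs:", found)
}

An empty directory here is expected while the cluster's network operator is still rolling out; the kubelet keeps retrying pod sandbox creation until a config appears.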
[node-status block repeats at 06:45:06.387, 06:45:06.489, 06:45:06.591, 06:45:06.693, 06:45:06.795, 06:45:06.897, 06:45:07.000, 06:45:07.103, and 06:45:07.205]
Nov 23 06:45:07 crc kubenswrapper[4681]: I1123 06:45:07.251182 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z"
Nov 23 06:45:07 crc kubenswrapper[4681]: E1123 06:45:07.251332 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043"
[node-status block repeats at 06:45:07.307 and 06:45:07.408]
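Every kubenswrapper record in this journal carries a klog-style header (severity letter, MMDD date, wall-clock time, PID, source file:line) ahead of the structured message. When triaging a flood like this one, it helps to split that header out programmatically; here is a small sketch, with the regex written against the exact format seen in these lines.

// klogparse.go - split a klog header such as "E1123 06:45:07.251332 4681
// pod_workers.go:1301] ..." into severity, date, time, pid, and source.
package main

import (
	"fmt"
	"regexp"
)

// severity (I/W/E/F), MMDD, HH:MM:SS.micros, pid, file.go:line, message
var klogHeader = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+)\s+([\w.]+:\d+)\] (.*)$`)

func main() {
	line := `E1123 06:45:07.251332 4681 pod_workers.go:1301] "Error syncing pod, skipping" pod="openshift-multus/network-metrics-daemon-kv72z"`
	m := klogHeader.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s msg=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}

Grouping by the source field (kubelet_node_status.go:724, setters.go:603, pod_workers.go:1301, util.go:30) makes the repetition structure of this log obvious at a glance.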
[node-status block repeats at 06:45:07.510, 06:45:07.613, 06:45:07.714, 06:45:07.816, 06:45:07.918, 06:45:08.020, 06:45:08.123, and 06:45:08.225]
Nov 23 06:45:08 crc kubenswrapper[4681]: I1123 06:45:08.251594 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 06:45:08 crc kubenswrapper[4681]: I1123 06:45:08.251643 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 06:45:08 crc kubenswrapper[4681]: E1123 06:45:08.251683 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:45:08 crc kubenswrapper[4681]: E1123 06:45:08.251721 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:45:08 crc kubenswrapper[4681]: I1123 06:45:08.251643 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:45:08 crc kubenswrapper[4681]: E1123 06:45:08.251788 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:45:08 crc kubenswrapper[4681]: I1123 06:45:08.326477 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:08 crc kubenswrapper[4681]: I1123 06:45:08.326499 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:08 crc kubenswrapper[4681]: I1123 06:45:08.326508 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:08 crc kubenswrapper[4681]: I1123 06:45:08.326518 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:08 crc kubenswrapper[4681]: I1123 06:45:08.326526 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:08Z","lastTransitionTime":"2025-11-23T06:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[node-status block repeats at 06:45:08.429, 06:45:08.530, 06:45:08.633, 06:45:08.735, 06:45:08.837, 06:45:08.939, 06:45:09.042, 06:45:09.144, and 06:45:09.247]
Nov 23 06:45:09 crc kubenswrapper[4681]: I1123 06:45:09.251795 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z"
Nov 23 06:45:09 crc kubenswrapper[4681]: E1123 06:45:09.251912 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043"
Nov 23 06:45:09 crc kubenswrapper[4681]: I1123 06:45:09.304366 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6eef1a94-78a8-4389-b1fe-2db3786ba043-metrics-certs\") pod \"network-metrics-daemon-kv72z\" (UID: \"6eef1a94-78a8-4389-b1fe-2db3786ba043\") " pod="openshift-multus/network-metrics-daemon-kv72z"
Nov 23 06:45:09 crc kubenswrapper[4681]: E1123 06:45:09.304867 4681 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 23 06:45:09 crc kubenswrapper[4681]: E1123 06:45:09.305261 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6eef1a94-78a8-4389-b1fe-2db3786ba043-metrics-certs podName:6eef1a94-78a8-4389-b1fe-2db3786ba043 nodeName:}" failed. No retries permitted until 2025-11-23 06:45:17.305236926 +0000 UTC m=+54.374746163 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6eef1a94-78a8-4389-b1fe-2db3786ba043-metrics-certs") pod "network-metrics-daemon-kv72z" (UID: "6eef1a94-78a8-4389-b1fe-2db3786ba043") : object "openshift-multus"/"metrics-daemon-secret" not registered
[node-status block repeats at 06:45:09.350]
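The nestedpendingoperations record above schedules the next mount attempt 8 s out ("durationBeforeRetry 8s", no retries permitted until 06:45:17). That spacing is consistent with a capped exponential backoff that doubles from a 500 ms base on each failed attempt; the constants in the sketch below are assumptions chosen to reproduce the observed 8 s, not verified kubelet values.

// backoff.go - capped exponential backoff of the kind behind
// "No retries permitted until ... (durationBeforeRetry 8s)".
package main

import (
	"fmt"
	"time"
)

// durationBeforeRetry returns the wait after the given number of consecutive
// failures. Base 500ms, factor 2, and the cap are assumptions, not
// confirmed kubelet constants.
func durationBeforeRetry(failures int) time.Duration {
	const (
		base    = 500 * time.Millisecond
		factor  = 2
		maxWait = 2*time.Minute + 2*time.Second
	)
	d := base
	for i := 1; i < failures; i++ {
		d *= factor
		if d >= maxWait {
			return maxWait
		}
	}
	return d
}

func main() {
	for f := 1; f <= 6; f++ {
		fmt.Printf("failure %d -> wait %s\n", f, durationBeforeRetry(f))
	}
	// Under these assumptions, the fifth consecutive failure yields the
	// 8s wait seen in the log line above.
}

The underlying failure ("object \"openshift-multus\"/\"metrics-daemon-secret\" not registered") is the kubelet's volume manager not yet having the secret in its object cache, so each retry fails the same way until the secret is observed.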
[node-status block repeats at 06:45:09.452, 06:45:09.554, 06:45:09.657, 06:45:09.759, 06:45:09.862, and 06:45:09.964]
[node-status block repeats at 06:45:10.066 and 06:45:10.168]
Nov 23 06:45:10 crc kubenswrapper[4681]: I1123 06:45:10.251242 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 06:45:10 crc kubenswrapper[4681]: E1123 06:45:10.251332 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 23 06:45:10 crc kubenswrapper[4681]: I1123 06:45:10.251431 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 06:45:10 crc kubenswrapper[4681]: I1123 06:45:10.251451 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:45:10 crc kubenswrapper[4681]: E1123 06:45:10.251780 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 06:45:10 crc kubenswrapper[4681]: E1123 06:45:10.251923 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[node-status block repeats at 06:45:10.269 and 06:45:10.371]
Nov 23 06:45:11 crc kubenswrapper[4681]: I1123 06:45:11.251393 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z"
Nov 23 06:45:11 crc kubenswrapper[4681]: E1123 06:45:11.251558 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043"
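The condition={...} payload repeated in each "Node became not ready" entry is an ordinary node condition serialized into the log line. The following self-contained sketch parses the exact string logged above; the pared-down struct is a stand-in for the real corev1.NodeCondition API type, but the JSON keys match what the kubelet writes.

    // condition.go - decode the Ready condition from the entries above.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    // NodeCondition mirrors only the fields visible in the log entry.
    type NodeCondition struct {
        Type               string `json:"type"`
        Status             string `json:"status"`
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Reason             string `json:"reason"`
        Message            string `json:"message"`
    }

    func main() {
        raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:11Z","lastTransitionTime":"2025-11-23T06:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
        var c NodeCondition
        if err := json.Unmarshal([]byte(raw), &c); err != nil {
            panic(err)
        }
        fmt.Printf("Ready=%s since %s (%s)\n", c.Status, c.LastTransitionTime, c.Reason)
    }

Both timestamps in these entries move together because the kubelet rebuilds the condition on each sync attempt; only the message, which names the missing CNI config, stays constant.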
Nov 23 06:45:12 crc kubenswrapper[4681]: I1123 06:45:12.251617 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 06:45:12 crc kubenswrapper[4681]: I1123 06:45:12.251663 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:45:12 crc kubenswrapper[4681]: I1123 06:45:12.251720 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 06:45:12 crc kubenswrapper[4681]: E1123 06:45:12.251719 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 06:45:12 crc kubenswrapper[4681]: E1123 06:45:12.251809 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 23 06:45:12 crc kubenswrapper[4681]: E1123 06:45:12.251861 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
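Notably, only these few pods cycle through "Error syncing pod, skipping"; the static kube-apiserver and host-network daemonsets such as node-resolver keep running throughout (their podIP equals the host IP 192.168.126.11 in the status entries further down). A hedged sketch of the gate follows, under the assumption, consistent with upstream kubelet behavior, that host-network pods bypass the pod-network readiness check; the names below are illustrative, not the kubelet's actual identifiers.

    // syncgate.go - why non-host-network pods are skipped while the
    // CNI config is missing. Illustrative only.
    package main

    import (
        "errors"
        "fmt"
    )

    type pod struct {
        name        string
        hostNetwork bool
    }

    var errNetworkNotReady = errors.New(
        "network is not ready: container runtime network not ready: NetworkReady=false")

    // canSync mimics the gate: while the runtime reports the network
    // plugin as not ready, only host-network pods may proceed.
    func canSync(p pod, networkReady bool) error {
        if !networkReady && !p.hostNetwork {
            return errNetworkNotReady
        }
        return nil
    }

    func main() {
        for _, p := range []pod{
            {name: "openshift-multus/network-metrics-daemon-kv72z", hostNetwork: false},
            {name: "openshift-kube-apiserver/kube-apiserver-crc", hostNetwork: true},
        } {
            if err := canSync(p, false); err != nil {
                fmt.Printf("Error syncing pod, skipping: %v pod=%q\n", err, p.name)
            } else {
                fmt.Printf("%s syncs normally on the host network\n", p.name)
            }
        }
    }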
Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.251155 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z"
Nov 23 06:45:13 crc kubenswrapper[4681]: E1123 06:45:13.251248 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043"
Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.251979 4681 scope.go:117] "RemoveContainer" containerID="0b60796d57f34f71a33d9365fac96136bfec611dc7675bb7dc779006eb60e74e"
Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.262284 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:13Z is after 2025-08-24T17:21:41Z"
Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.269953 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l7wvz" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"095e645f-7b07-4702-87f0-f3b9a6197d9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://730b2d1bf4245510d9c2ab933abbf82d3c7e7d172e6f382b691db27a598fc8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrq5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l7wvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.277378 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kv72z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eef1a94-78a8-4389-b1fe-2db3786ba043\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnhcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnhcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kv72z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.287664 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.298244 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.306291 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.316612 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.323823 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jcxvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8b960e-690a-4772-8373-bce89d00cb17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae5de3ab9fa4043cfbb22d534f986fd7c9318c8e1a7f249cfe50b07f32f04ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2d22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jcxvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.331498 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"842356bd-1174-4109-a183-b368c16f3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a30a93104ef4dbbe5288684d627e4f4ca7e4477edf99c2012169a7c086900352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b762cf0aee0bbca586dc835d6be4a69921f2f0d6a11262bbea1df14352fd3822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:00Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jvlq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.336277 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.336321 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.336332 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.336348 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.336357 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:13Z","lastTransitionTime":"2025-11-23T06:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.341980 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.349780 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"539dc58c-e752-43c8-bdef-af87528b76f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10301d5307825891afb0c5a8a37015569d3275b9fdbb69135656db11a5cd6ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wh4gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.359668 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2lhx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":
\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8k44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2lhx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.370328 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83e4c166-3ace-4773-86cd-fe2bdd216426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039e197d1ef78785cbcf351f1ec80ef09f3c9e61504351fa7a2daa5d1e298bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qgr2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.379184 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.387792 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.403560 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abfb530-b7ac-4724-8e43-d87ef92f1949\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b60796d57f34f71a33d9365fac96136bfec611d
c7675bb7dc779006eb60e74e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b60796d57f34f71a33d9365fac96136bfec611dc7675bb7dc779006eb60e74e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:44:59Z\\\",\\\"message\\\":\\\"ot add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:58Z is after 2025-08-24T17:21:41Z]\\\\nI1123 06:44:59.037639 6041 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-diagnostics/network-check-target]} name:Service_openshift-network-diagnostics/network-check-target_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.219:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7594bb65-e742-44b3-a975-d639b1128be5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1123 06:44:59.037639 6041 model_client\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l6bqb_openshift-ovn-kubernetes(1abfb530-b7ac-4724-8e43-d87ef92f1949)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l6bqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.438163 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.438258 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.438318 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.438403 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.438503 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:13Z","lastTransitionTime":"2025-11-23T06:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.499217 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l6bqb_1abfb530-b7ac-4724-8e43-d87ef92f1949/ovnkube-controller/1.log" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.501157 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" event={"ID":"1abfb530-b7ac-4724-8e43-d87ef92f1949","Type":"ContainerStarted","Data":"10bb81ddcec9ee17f50d5acae6e282ca44420543fc8ea84ae1ced5c491e1dd4e"} Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.501571 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.516125 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6d
d670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.530898 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.540419 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.540452 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.540474 4681 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.540491 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.540505 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:13Z","lastTransitionTime":"2025-11-23T06:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.552807 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abfb530-b7ac-4724-8e43-d87ef92f1949\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bb81ddcec9ee17f50d5acae6e282ca44420543
fc8ea84ae1ced5c491e1dd4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b60796d57f34f71a33d9365fac96136bfec611dc7675bb7dc779006eb60e74e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:44:59Z\\\",\\\"message\\\":\\\"ot add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:58Z is after 2025-08-24T17:21:41Z]\\\\nI1123 06:44:59.037639 6041 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-diagnostics/network-check-target]} name:Service_openshift-network-diagnostics/network-check-target_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.219:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7594bb65-e742-44b3-a975-d639b1128be5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1123 06:44:59.037639 6041 
model_client\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\"
:[{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l6bqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.561396 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.569164 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l7wvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095e645f-7b07-4702-87f0-f3b9a6197d9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://730b2d1bf4245510d9c2ab933abbf82d3c7e7d172e6f382b691db27a598fc8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrq5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l7wvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.578903 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.587665 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.596280 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.604580 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.611518 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jcxvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8b960e-690a-4772-8373-bce89d00cb17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae5de3ab9fa4043cfbb22d534f986fd7c9318c8e1a7f249cfe50b07f32f04ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2d22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jcxvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.618522 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"842356bd-1174-4109-a183-b368c16f3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a30a93104ef4dbbe5288684d627e4f4ca7e4477edf99c2012169a7c086900352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b762cf0aee0bbca586dc835d6be4a69921f2f0d6a11262bbea1df14352fd3822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:00Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jvlq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.625855 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kv72z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eef1a94-78a8-4389-b1fe-2db3786ba043\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnhcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnhcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kv72z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-23T06:45:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.633583 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.640829 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"539dc58c-e752-43c8-bdef-af87528b76f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10301d5307825891afb0c5a8a37015569d3275b9fdbb69135656db11a5cd6ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wh4gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.642245 4681 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.642270 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.642280 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.642295 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.642305 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:13Z","lastTransitionTime":"2025-11-23T06:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.649948 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2lhx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8k44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2lhx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.659740 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83e4c166-3ace-4773-86cd-fe2bdd216426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039e197d1ef78785cbcf351f1ec80ef09f3c9e61504351fa7a2daa5d1e298bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",
\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8
ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qgr2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.745003 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.745103 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.745180 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.745251 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.745311 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:13Z","lastTransitionTime":"2025-11-23T06:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.847772 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.847808 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.847817 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.847832 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.847842 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:13Z","lastTransitionTime":"2025-11-23T06:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.945345 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:45:13 crc kubenswrapper[4681]: E1123 06:45:13.945560 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:45:45.94553379 +0000 UTC m=+83.015043027 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.949519 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.949551 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.949560 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.949577 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:13 crc kubenswrapper[4681]: I1123 06:45:13.949590 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:13Z","lastTransitionTime":"2025-11-23T06:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.046495 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:45:14 crc kubenswrapper[4681]: E1123 06:45:14.046647 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:45:14 crc kubenswrapper[4681]: E1123 06:45:14.046784 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:45:14 crc kubenswrapper[4681]: E1123 06:45:14.046799 4681 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.046815 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:45:14 crc kubenswrapper[4681]: E1123 06:45:14.046840 4681 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 06:45:46.04682709 +0000 UTC m=+83.116336328 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.046855 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.046879 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:45:14 crc kubenswrapper[4681]: E1123 06:45:14.046887 4681 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:45:14 crc kubenswrapper[4681]: E1123 06:45:14.046921 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:45:46.046911429 +0000 UTC m=+83.116420676 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:45:14 crc kubenswrapper[4681]: E1123 06:45:14.046947 4681 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:45:14 crc kubenswrapper[4681]: E1123 06:45:14.046988 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:45:14 crc kubenswrapper[4681]: E1123 06:45:14.046994 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:45:46.046986169 +0000 UTC m=+83.116495416 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:45:14 crc kubenswrapper[4681]: E1123 06:45:14.046999 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:45:14 crc kubenswrapper[4681]: E1123 06:45:14.047008 4681 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:45:14 crc kubenswrapper[4681]: E1123 06:45:14.047030 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 06:45:46.04702423 +0000 UTC m=+83.116533467 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.051737 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.051762 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.051772 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.051785 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.051794 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:14Z","lastTransitionTime":"2025-11-23T06:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.153860 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.153894 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.153903 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.153921 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.153930 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:14Z","lastTransitionTime":"2025-11-23T06:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.251298 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.251362 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:45:14 crc kubenswrapper[4681]: E1123 06:45:14.251415 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.251298 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:45:14 crc kubenswrapper[4681]: E1123 06:45:14.251488 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:45:14 crc kubenswrapper[4681]: E1123 06:45:14.251629 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.255903 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.255931 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.255940 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.255963 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.255973 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:14Z","lastTransitionTime":"2025-11-23T06:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.357731 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.357800 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.357809 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.357820 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.357829 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:14Z","lastTransitionTime":"2025-11-23T06:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.460263 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.460296 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.460304 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.460316 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.460325 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:14Z","lastTransitionTime":"2025-11-23T06:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.504844 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l6bqb_1abfb530-b7ac-4724-8e43-d87ef92f1949/ovnkube-controller/2.log" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.505477 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l6bqb_1abfb530-b7ac-4724-8e43-d87ef92f1949/ovnkube-controller/1.log" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.508410 4681 generic.go:334] "Generic (PLEG): container finished" podID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerID="10bb81ddcec9ee17f50d5acae6e282ca44420543fc8ea84ae1ced5c491e1dd4e" exitCode=1 Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.508453 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" event={"ID":"1abfb530-b7ac-4724-8e43-d87ef92f1949","Type":"ContainerDied","Data":"10bb81ddcec9ee17f50d5acae6e282ca44420543fc8ea84ae1ced5c491e1dd4e"} Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.508518 4681 scope.go:117] "RemoveContainer" containerID="0b60796d57f34f71a33d9365fac96136bfec611dc7675bb7dc779006eb60e74e" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.509252 4681 scope.go:117] "RemoveContainer" containerID="10bb81ddcec9ee17f50d5acae6e282ca44420543fc8ea84ae1ced5c491e1dd4e" Nov 23 06:45:14 crc kubenswrapper[4681]: E1123 06:45:14.509446 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-l6bqb_openshift-ovn-kubernetes(1abfb530-b7ac-4724-8e43-d87ef92f1949)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.518562 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.526807 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"539dc58c-e752-43c8-bdef-af87528b76f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10301d5307825891afb0c5a8a37015569d3275b9fdbb69135656db11a5cd6ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wh4gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.535528 4681 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2lhx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8k44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2lhx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.545831 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83e4c166-3ace-4773-86cd-fe2bdd216426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039e197d1ef78785cbcf351f1ec80ef09f3c9e61504351fa7a2daa5d1e298bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbba
0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qgr2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.553717 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.562095 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.562125 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.562135 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.562152 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.562161 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:14Z","lastTransitionTime":"2025-11-23T06:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.562430 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.574613 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abfb530-b7ac-4724-8e43-d87ef92f1949\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bb81ddcec9ee17f50d5acae6e282ca44420543fc8ea84ae1ced5c491e1dd4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b60796d57f34f71a33d9365fac96136bfec611dc7675bb7dc779006eb60e74e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:44:59Z\\\",\\\"message\\\":\\\"ot add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:44:58Z is after 2025-08-24T17:21:41Z]\\\\nI1123 06:44:59.037639 6041 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-diagnostics/network-check-target]} name:Service_openshift-network-diagnostics/network-check-target_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.219:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7594bb65-e742-44b3-a975-d639b1128be5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1123 06:44:59.037639 6041 model_client\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bb81ddcec9ee17f50d5acae6e282ca44420543fc8ea84ae1ced5c491e1dd4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:45:13Z\\\",\\\"message\\\":\\\"7594bb65-e742-44b3-a975-d639b1128be5}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1123 06:45:13.860180 6266 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-wh4gt\\\\nI1123 06:45:13.860186 6266 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-wh4gt\\\\nI1123 06:45:13.860184 6266 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-machine-webhook\\\\\\\"}\\\\nI1123 06:45:13.860192 6266 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-wh4gt in node crc\\\\nI1123 06:45:13.860195 6266 services_controller.go:360] Finished syncing service machine-api-operator-machine-webhook on namespace openshift-machine-api for network=default : 1.532753ms\\\\nI1123 06:45:13.860202 6266 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1123 06:45:13.860206 6266 services_controller.go:356] Processing sync for service openshift-dns/dns-default for network=default\\\\nI1123 06:45:13.860212 6266 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1123 06:45:13.860218 6266 obj_ret\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e144f6fcc3caf2665d063df23657
f7b48ba28fe75e07674cc2ba13582d06da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l6bqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.582271 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.588512 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l7wvz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"095e645f-7b07-4702-87f0-f3b9a6197d9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://730b2d1bf4245510d9c2ab933abbf82d3c7e7d172e6f382b691db27a598fc8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrq5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l7wvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.595495 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"842356bd-1174-4109-a183-b368c16f3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a30a93104ef4dbbe5288684d627e4f4ca7e4477edf99c2012169a7c086900352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b762cf0aee0bbca586dc835d6be4a69921f2f0d6a11262bbea1df14352fd3822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jvlq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:14Z is after 2025-08-24T17:21:41Z" Nov 23 
06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.602215 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kv72z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eef1a94-78a8-4389-b1fe-2db3786ba043\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnhcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnhcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kv72z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.610317 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.618313 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.625718 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.633593 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.640410 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jcxvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8b960e-690a-4772-8373-bce89d00cb17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae5de3ab9fa4043cfbb22d534f986fd7c9318c8e1a7f249cfe50b07f32f04ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2d22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jcxvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.664221 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.664254 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.664263 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.664276 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.664285 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:14Z","lastTransitionTime":"2025-11-23T06:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.765826 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.765853 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.765863 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.765876 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.765884 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:14Z","lastTransitionTime":"2025-11-23T06:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.868145 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.868175 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.868183 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.868196 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.868204 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:14Z","lastTransitionTime":"2025-11-23T06:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.970194 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.970230 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.970239 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.970253 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:14 crc kubenswrapper[4681]: I1123 06:45:14.970264 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:14Z","lastTransitionTime":"2025-11-23T06:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.072100 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.072220 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.072288 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.072361 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.072431 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:15Z","lastTransitionTime":"2025-11-23T06:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.174374 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.174401 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.174409 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.174418 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.174425 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:15Z","lastTransitionTime":"2025-11-23T06:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.251841 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:45:15 crc kubenswrapper[4681]: E1123 06:45:15.252002 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.275891 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.275929 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.275950 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.275961 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.275970 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:15Z","lastTransitionTime":"2025-11-23T06:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.378012 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.378035 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.378044 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.378053 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.378060 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:15Z","lastTransitionTime":"2025-11-23T06:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.479874 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.479898 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.479948 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.479960 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.479968 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:15Z","lastTransitionTime":"2025-11-23T06:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.512393 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l6bqb_1abfb530-b7ac-4724-8e43-d87ef92f1949/ovnkube-controller/2.log" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.515225 4681 scope.go:117] "RemoveContainer" containerID="10bb81ddcec9ee17f50d5acae6e282ca44420543fc8ea84ae1ced5c491e1dd4e" Nov 23 06:45:15 crc kubenswrapper[4681]: E1123 06:45:15.515369 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-l6bqb_openshift-ovn-kubernetes(1abfb530-b7ac-4724-8e43-d87ef92f1949)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.525542 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":
\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.534806 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.547058 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abfb530-b7ac-4724-8e43-d87ef92f1949\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bb81ddcec9ee17f50d5acae6e282ca44420543fc8ea84ae1ced5c491e1dd4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bb81ddcec9ee17f50d5acae6e282ca44420543fc8ea84ae1ced5c491e1dd4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:45:13Z\\\",\\\"message\\\":\\\"7594bb65-e742-44b3-a975-d639b1128be5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1123 06:45:13.860180 6266 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-wh4gt\\\\nI1123 06:45:13.860186 6266 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-wh4gt\\\\nI1123 06:45:13.860184 6266 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-machine-webhook\\\\\\\"}\\\\nI1123 06:45:13.860192 6266 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-wh4gt in node crc\\\\nI1123 06:45:13.860195 6266 services_controller.go:360] Finished syncing service machine-api-operator-machine-webhook on namespace openshift-machine-api for network=default : 1.532753ms\\\\nI1123 06:45:13.860202 6266 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1123 06:45:13.860206 6266 services_controller.go:356] Processing sync for service openshift-dns/dns-default for network=default\\\\nI1123 06:45:13.860212 6266 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1123 06:45:13.860218 6266 obj_ret\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l6bqb_openshift-ovn-kubernetes(1abfb530-b7ac-4724-8e43-d87ef92f1949)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l6bqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.555679 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.564887 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l7wvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095e645f-7b07-4702-87f0-f3b9a6197d9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://730b2d1bf4245510d9c2ab933abbf82d3c7e7d172e6f382b691db27a598fc8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrq5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l7wvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.573325 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.582116 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.582157 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.582167 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.582179 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.582187 4681 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:15Z","lastTransitionTime":"2025-11-23T06:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.582377 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.589833 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.598119 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.604858 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jcxvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8b960e-690a-4772-8373-bce89d00cb17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae5de3ab9fa4043cfbb22d534f986fd7c9318c8e1a7f249cfe50b07f32f04ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2d22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jcxvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.606851 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.606952 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.607012 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.607080 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.607138 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:15Z","lastTransitionTime":"2025-11-23T06:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.611795 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"842356bd-1174-4109-a183-b368c16f3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a30a93104ef4dbbe5288684d627e4f4ca7e4477edf99c2012169a7c086900352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b762cf0aee0bbca586dc835d6be4a69921f2f0d6a11262bbea1df14352fd3822\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jvlq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:15 crc kubenswrapper[4681]: E1123 06:45:15.614975 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.617240 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.617331 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.617388 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.617448 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.617519 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:15Z","lastTransitionTime":"2025-11-23T06:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.618744 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kv72z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eef1a94-78a8-4389-b1fe-2db3786ba043\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnhcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnhcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kv72z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:15 crc kubenswrapper[4681]: E1123 06:45:15.625677 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-23T06:45:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.627197 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.628145 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.628173 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.628182 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.628197 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.628206 4681 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:15Z","lastTransitionTime":"2025-11-23T06:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.634975 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"539dc58c-e752-43c8-bdef-af87528b76f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10301d5307825891afb0c5a8a37015569d3275b9fdbb69135656db11a5cd6ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wh4gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:15 crc kubenswrapper[4681]: E1123 06:45:15.636365 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.638625 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.638651 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.638660 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.638672 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.638679 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:15Z","lastTransitionTime":"2025-11-23T06:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.643294 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2lhx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubel
et\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8k44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2lhx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:15 crc kubenswrapper[4681]: E1123 06:45:15.647037 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.649644 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.649671 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.649681 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.649692 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.649702 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:15Z","lastTransitionTime":"2025-11-23T06:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.655175 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83e4c166-3ace-4773-86cd-fe2bdd216426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039e197d1ef78785cbcf351f1ec80ef09f3c9e61504351fa7a2daa5d1e298bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1a
c85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qgr2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:15 crc kubenswrapper[4681]: E1123 06:45:15.657129 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:15 crc kubenswrapper[4681]: E1123 06:45:15.657231 4681 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.683534 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.683555 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.683564 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.683574 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.683582 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:15Z","lastTransitionTime":"2025-11-23T06:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.784525 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.784550 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.784557 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.784567 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.784576 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:15Z","lastTransitionTime":"2025-11-23T06:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.886454 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.886530 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.886553 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.886567 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.886575 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:15Z","lastTransitionTime":"2025-11-23T06:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.987913 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.988009 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.988086 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.988155 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:15 crc kubenswrapper[4681]: I1123 06:45:15.988207 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:15Z","lastTransitionTime":"2025-11-23T06:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.067712 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.075814 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.077884 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2lhx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8k44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2lhx5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.088585 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83e4c166-3ace-4773-86cd-fe2bdd216426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039e197d1ef78785cbcf351f1ec80ef09f3c9e61504351fa7a2daa5d1e298bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qgr2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-23T06:45:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.089386 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.089404 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.089413 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.089424 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.089433 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:16Z","lastTransitionTime":"2025-11-23T06:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.096947 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.105984 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"539dc58c-e752-43c8-bdef-af87528b76f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10301d5307825891afb0c5a8a37015569d3275b9fdbb69135656db11a5cd6ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wh4gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.132014 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abfb530-b7ac-4724-8e43-d87ef92f1949\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bb81ddcec9ee17f50d5acae6e282ca44420543
fc8ea84ae1ced5c491e1dd4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bb81ddcec9ee17f50d5acae6e282ca44420543fc8ea84ae1ced5c491e1dd4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:45:13Z\\\",\\\"message\\\":\\\"7594bb65-e742-44b3-a975-d639b1128be5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1123 06:45:13.860180 6266 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-wh4gt\\\\nI1123 06:45:13.860186 6266 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-wh4gt\\\\nI1123 06:45:13.860184 6266 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-machine-webhook\\\\\\\"}\\\\nI1123 06:45:13.860192 6266 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-wh4gt in node crc\\\\nI1123 06:45:13.860195 6266 services_controller.go:360] Finished syncing service machine-api-operator-machine-webhook on namespace openshift-machine-api for network=default : 1.532753ms\\\\nI1123 06:45:13.860202 6266 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1123 06:45:13.860206 6266 services_controller.go:356] Processing sync for service openshift-dns/dns-default for network=default\\\\nI1123 06:45:13.860212 6266 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1123 06:45:13.860218 6266 obj_ret\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l6bqb_openshift-ovn-kubernetes(1abfb530-b7ac-4724-8e43-d87ef92f1949)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l6bqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.147143 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.162276 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.174808 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.182672 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l7wvz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"095e645f-7b07-4702-87f0-f3b9a6197d9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://730b2d1bf4245510d9c2ab933abbf82d3c7e7d172e6f382b691db27a598fc8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrq5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l7wvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.191389 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.191543 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.191613 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.191696 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.191751 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:16Z","lastTransitionTime":"2025-11-23T06:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.197439 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.205024 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jcxvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8b960e-690a-4772-8373-bce89d00cb17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae5de3ab9fa4043cfbb22d534f986fd7c9318c8e1a7f249cfe50b07f32f04ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2d22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jcxvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.212405 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"842356bd-1174-4109-a183-b368c16f3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a30a93104ef4dbbe5288684d627e4f4ca7e4477edf99c2012169a7c086900352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b762cf0aee0bbca586dc835d6be4a69921f2f0d6a11262bbea1df14352fd3822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jvlq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:16Z is after 2025-08-24T17:21:41Z" Nov 23 
06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.219554 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kv72z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eef1a94-78a8-4389-b1fe-2db3786ba043\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnhcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnhcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kv72z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.228420 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.237838 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.245572 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:16Z is after 2025-08-24T17:21:41Z"
Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.250808 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.250857 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.250817 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 06:45:16 crc kubenswrapper[4681]: E1123 06:45:16.250918 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 23 06:45:16 crc kubenswrapper[4681]: E1123 06:45:16.251032 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 23 06:45:16 crc kubenswrapper[4681]: E1123 06:45:16.251126 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.293768 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.293796 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.293806 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.293817 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.293824 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:16Z","lastTransitionTime":"2025-11-23T06:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.395599 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.395621 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.395628 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.395637 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:16 crc kubenswrapper[4681]: I1123 06:45:16.395647 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:16Z","lastTransitionTime":"2025-11-23T06:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
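Every status patch above is rejected for the same reason: the node-identity webhook at https://127.0.0.1:9743 is serving a certificate that expired on 2025-08-24T17:21:41Z. A minimal sketch for confirming that from the node itself, assuming Python 3 plus the third-party cryptography package (the address and the expected notAfter come from the log entries above):

    # Sketch: read the certificate served at the webhook address from the log.
    import socket
    import ssl

    from cryptography import x509

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False           # must be disabled before CERT_NONE
    ctx.verify_mode = ssl.CERT_NONE      # accept the cert even though it is expired

    with socket.create_connection(("127.0.0.1", 9743), timeout=5) as sock:
        with ctx.wrap_socket(sock) as tls:
            der = tls.getpeercert(binary_form=True)

    cert = x509.load_der_x509_certificate(der)
    print("notBefore:", cert.not_valid_before)
    print("notAfter: ", cert.not_valid_after)  # the errors above say 2025-08-24T17:21:41Z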
Nov 23 06:45:17 crc kubenswrapper[4681]: I1123 06:45:17.251658 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z"
Nov 23 06:45:17 crc kubenswrapper[4681]: E1123 06:45:17.251773 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043"
Nov 23 06:45:17 crc kubenswrapper[4681]: I1123 06:45:17.374029 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6eef1a94-78a8-4389-b1fe-2db3786ba043-metrics-certs\") pod \"network-metrics-daemon-kv72z\" (UID: \"6eef1a94-78a8-4389-b1fe-2db3786ba043\") " pod="openshift-multus/network-metrics-daemon-kv72z"
Nov 23 06:45:17 crc kubenswrapper[4681]: E1123 06:45:17.374158 4681 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 23 06:45:17 crc kubenswrapper[4681]: E1123 06:45:17.374213 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6eef1a94-78a8-4389-b1fe-2db3786ba043-metrics-certs podName:6eef1a94-78a8-4389-b1fe-2db3786ba043 nodeName:}" failed. No retries permitted until 2025-11-23 06:45:33.37420082 +0000 UTC m=+70.443710057 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6eef1a94-78a8-4389-b1fe-2db3786ba043-metrics-certs") pod "network-metrics-daemon-kv72z" (UID: "6eef1a94-78a8-4389-b1fe-2db3786ba043") : object "openshift-multus"/"metrics-daemon-secret" not registered
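The 16s in durationBeforeRetry reflects the kubelet's exponential backoff for failed volume mounts. The sketch below illustrates the doubling-with-cap pattern only; INITIAL and MAX are assumptions for the sketch, not values read out of kubelet source:

    # Illustration: doubling backoff with a cap, producing delays like 16s.
    from datetime import datetime, timedelta

    INITIAL = timedelta(seconds=2)
    MAX = timedelta(minutes=2)

    def next_delay(prev):
        # First failure waits INITIAL; each later failure doubles, capped at MAX.
        return INITIAL if prev is None else min(prev * 2, MAX)

    delay = None
    now = datetime(2025, 11, 23, 6, 45, 17)
    for attempt in range(1, 5):
        delay = next_delay(delay)
        print(f"attempt {attempt}: retry in {delay} (at {now + delay})")
    # attempt 4 prints 'retry in 0:00:16', matching durationBeforeRetry 16s.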
Nov 23 06:45:18 crc kubenswrapper[4681]: I1123 06:45:18.251229 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 06:45:18 crc kubenswrapper[4681]: I1123 06:45:18.251248 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:45:18 crc kubenswrapper[4681]: I1123 06:45:18.251248 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 06:45:18 crc kubenswrapper[4681]: E1123 06:45:18.251327 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 23 06:45:18 crc kubenswrapper[4681]: E1123 06:45:18.251419 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 23 06:45:18 crc kubenswrapper[4681]: E1123 06:45:18.251470 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
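These pod sync errors keep repeating because /etc/kubernetes/cni/net.d/ is still empty; NetworkReady stays False until the network provider (OVN-Kubernetes on CRC) writes a CNI config there. A small polling sketch, assuming local access to the node; the path is taken from the log above, while the 10s interval and 30 attempts are arbitrary choices:

    # Sketch: watch the directory the kubelet is complaining about.
    import os
    import time

    CNI_DIR = "/etc/kubernetes/cni/net.d/"

    for _ in range(30):
        entries = sorted(os.listdir(CNI_DIR)) if os.path.isdir(CNI_DIR) else []
        if entries:
            print("CNI config present:", entries)
            break
        print("net.d still empty; NetworkReady will stay False")
        time.sleep(10)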
Nov 23 06:45:19 crc kubenswrapper[4681]: I1123 06:45:19.252698 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z"
Nov 23 06:45:19 crc kubenswrapper[4681]: E1123 06:45:19.252803 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043"
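The sandbox retries recur roughly every two seconds per pod (network-metrics-daemon-kv72z at 06:45:17.251 and 06:45:19.252; the diagnostics pods at 06:45:16.251, 06:45:18.251, and so on). A sketch for tallying that cadence from a saved copy of this journal; the input file name kubelet.log is hypothetical:

    # Sketch: count "Error syncing pod" entries per pod in a saved journal.
    import re
    from collections import Counter

    pat = re.compile(r'Error syncing pod.*?pod="([^"]+)"')
    counts = Counter()
    with open("kubelet.log", encoding="utf-8") as fh:
        for line in fh:
            counts.update(pat.findall(line))

    for pod, n in counts.most_common():
        print(f"{n:4d}  {pod}")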
Has your network provider started?"} Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.164611 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.164648 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.164657 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.164668 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.164676 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:20Z","lastTransitionTime":"2025-11-23T06:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.251178 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:45:20 crc kubenswrapper[4681]: E1123 06:45:20.251266 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.251287 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:45:20 crc kubenswrapper[4681]: E1123 06:45:20.251376 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.251480 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:45:20 crc kubenswrapper[4681]: E1123 06:45:20.251618 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.265793 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.265821 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.265829 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.265840 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.265847 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:20Z","lastTransitionTime":"2025-11-23T06:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.367200 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.367237 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.367246 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.367256 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.367266 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:20Z","lastTransitionTime":"2025-11-23T06:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.469241 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.469264 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.469272 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.469280 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.469287 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:20Z","lastTransitionTime":"2025-11-23T06:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.571147 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.571204 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.571217 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.571234 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.571245 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:20Z","lastTransitionTime":"2025-11-23T06:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.673092 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.673146 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.673156 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.673168 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.673177 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:20Z","lastTransitionTime":"2025-11-23T06:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.774277 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.774397 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.774489 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.774568 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.774630 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:20Z","lastTransitionTime":"2025-11-23T06:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.875670 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.875690 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.875698 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.875707 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.875714 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:20Z","lastTransitionTime":"2025-11-23T06:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.977605 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.977644 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.977654 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.977665 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:20 crc kubenswrapper[4681]: I1123 06:45:20.977674 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:20Z","lastTransitionTime":"2025-11-23T06:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.079310 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.079332 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.079340 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.079348 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.079356 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:21Z","lastTransitionTime":"2025-11-23T06:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.181060 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.181160 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.181229 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.181295 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.181346 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:21Z","lastTransitionTime":"2025-11-23T06:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.251046 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:45:21 crc kubenswrapper[4681]: E1123 06:45:21.251646 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.283077 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.283111 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.283123 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.283135 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.283143 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:21Z","lastTransitionTime":"2025-11-23T06:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.384781 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.384823 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.384831 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.384842 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.384851 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:21Z","lastTransitionTime":"2025-11-23T06:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.486403 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.486432 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.486441 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.486450 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.486474 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:21Z","lastTransitionTime":"2025-11-23T06:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.588422 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.588606 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.588674 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.588747 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.588803 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:21Z","lastTransitionTime":"2025-11-23T06:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.690229 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.690377 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.690444 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.690529 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.690582 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:21Z","lastTransitionTime":"2025-11-23T06:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.792771 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.792797 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.792807 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.792820 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.792827 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:21Z","lastTransitionTime":"2025-11-23T06:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.894450 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.894625 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.894687 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.894757 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.894811 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:21Z","lastTransitionTime":"2025-11-23T06:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.997043 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.997069 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.997077 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.997089 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:21 crc kubenswrapper[4681]: I1123 06:45:21.997097 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:21Z","lastTransitionTime":"2025-11-23T06:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.098713 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.098745 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.098756 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.098767 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.098776 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:22Z","lastTransitionTime":"2025-11-23T06:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.200659 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.200709 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.200720 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.200731 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.200740 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:22Z","lastTransitionTime":"2025-11-23T06:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.251605 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.251635 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:45:22 crc kubenswrapper[4681]: E1123 06:45:22.251709 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.251604 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:45:22 crc kubenswrapper[4681]: E1123 06:45:22.251803 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:45:22 crc kubenswrapper[4681]: E1123 06:45:22.251894 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.302896 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.303019 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.303078 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.303142 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.303201 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:22Z","lastTransitionTime":"2025-11-23T06:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.404876 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.405219 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.405279 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.405332 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.405380 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:22Z","lastTransitionTime":"2025-11-23T06:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.507403 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.507535 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.507595 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.507667 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.507720 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:22Z","lastTransitionTime":"2025-11-23T06:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.609596 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.609630 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.609644 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.609657 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.609665 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:22Z","lastTransitionTime":"2025-11-23T06:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.710864 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.710967 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.711042 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.711118 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.711179 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:22Z","lastTransitionTime":"2025-11-23T06:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.813275 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.813490 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.813562 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.813635 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.813690 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:22Z","lastTransitionTime":"2025-11-23T06:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.916240 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.916516 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.916604 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.916665 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:22 crc kubenswrapper[4681]: I1123 06:45:22.916739 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:22Z","lastTransitionTime":"2025-11-23T06:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.019649 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.019681 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.019692 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.019714 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.019725 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:23Z","lastTransitionTime":"2025-11-23T06:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.122247 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.122280 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.122289 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.122302 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.122311 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:23Z","lastTransitionTime":"2025-11-23T06:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.223867 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.224130 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.224190 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.224255 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.224314 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:23Z","lastTransitionTime":"2025-11-23T06:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.251510 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:45:23 crc kubenswrapper[4681]: E1123 06:45:23.251618 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.261261 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a373ee-ee00-4ed1-b208-095d302ac31b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4004d43474bcbff07bbc45d42feefffb8f41e26f0d34bcec50b9c17ea8795a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d891ac3bcc1513a349fc37f6cceedb64e89b41f92dc098ac6c0ffc074e6cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c007b94529ec5fe2c0606433986e94de3bf63772bd1291e55b4d06080471393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83eb8cfb97a65f9516f9973a491cd60aacd32bf59681f45f60402f8bbf6b1c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb8cfb97a65f9516f9973a491cd60aacd32bf59681f45f60402f8bbf6b1c95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:23Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.268868 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:23Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.275141 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l7wvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095e645f-7b07-4702-87f0-f3b9a6197d9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://730b2d1bf4245510d9c2ab933abbf82d3c7e7d172e6f382b691db27a598fc8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrq5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l7wvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:23Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.282395 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"842356bd-1174-4109-a183-b368c16f3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a30a93104ef4dbbe5288684d627e4f4ca7e4477edf99c2012169a7c086900352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b762cf0aee0bbca586dc835d6be4a69921f2f0d6a11262bbea1df14352fd3822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jvlq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:23Z is after 2025-08-24T17:21:41Z" Nov 23 
06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.289480 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kv72z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eef1a94-78a8-4389-b1fe-2db3786ba043\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnhcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnhcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kv72z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:23Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.298035 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:23Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.306235 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:23Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.318277 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:23Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.326319 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.326346 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.326356 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.326369 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.326378 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:23Z","lastTransitionTime":"2025-11-23T06:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.327491 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:23Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.334161 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jcxvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8b960e-690a-4772-8373-bce89d00cb17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae5de3ab9fa4043cfbb22d534f986fd7c9318c8e1a7f249cfe50b07f32f04ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2d22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jcxvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:23Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.342353 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:23Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.349193 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"539dc58c-e752-43c8-bdef-af87528b76f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10301d5307825891afb0c5a8a37015569d3275b9fdbb69135656db11a5cd6ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wh4gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:23Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.357800 4681 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2lhx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8k44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2lhx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:23Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.367122 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83e4c166-3ace-4773-86cd-fe2bdd216426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039e197d1ef78785cbcf351f1ec80ef09f3c9e61504351fa7a2daa5d1e298bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbba
0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qgr2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:23Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.374949 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:23Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.383314 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:23Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.394943 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abfb530-b7ac-4724-8e43-d87ef92f1949\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bb81ddcec9ee17f50d5acae6e282ca44420543fc8ea84ae1ced5c491e1dd4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bb81ddcec9ee17f50d5acae6e282ca44420543fc8ea84ae1ced5c491e1dd4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:45:13Z\\\",\\\"message\\\":\\\"7594bb65-e742-44b3-a975-d639b1128be5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1123 06:45:13.860180 6266 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-wh4gt\\\\nI1123 06:45:13.860186 6266 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-wh4gt\\\\nI1123 06:45:13.860184 6266 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-machine-webhook\\\\\\\"}\\\\nI1123 06:45:13.860192 6266 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-wh4gt in node crc\\\\nI1123 06:45:13.860195 6266 services_controller.go:360] Finished syncing service machine-api-operator-machine-webhook on namespace openshift-machine-api for network=default : 1.532753ms\\\\nI1123 06:45:13.860202 6266 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1123 06:45:13.860206 6266 services_controller.go:356] Processing sync for service openshift-dns/dns-default for network=default\\\\nI1123 06:45:13.860212 6266 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1123 06:45:13.860218 6266 obj_ret\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l6bqb_openshift-ovn-kubernetes(1abfb530-b7ac-4724-8e43-d87ef92f1949)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l6bqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:23Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.427551 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.427779 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.427787 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.427798 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.427808 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:23Z","lastTransitionTime":"2025-11-23T06:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.532322 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.532379 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.532395 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.532422 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.532438 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:23Z","lastTransitionTime":"2025-11-23T06:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.636280 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.636556 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.636624 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.636699 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.636761 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:23Z","lastTransitionTime":"2025-11-23T06:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.738820 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.738854 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.738862 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.738895 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.738909 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:23Z","lastTransitionTime":"2025-11-23T06:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.840608 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.840832 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.840911 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.841006 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.841071 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:23Z","lastTransitionTime":"2025-11-23T06:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.944438 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.944743 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.944805 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.944905 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:23 crc kubenswrapper[4681]: I1123 06:45:23.944964 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:23Z","lastTransitionTime":"2025-11-23T06:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.047237 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.047267 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.047275 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.047290 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.047299 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:24Z","lastTransitionTime":"2025-11-23T06:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.149513 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.149547 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.149555 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.149572 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.149609 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:24Z","lastTransitionTime":"2025-11-23T06:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.250992 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.251006 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:45:24 crc kubenswrapper[4681]: E1123 06:45:24.251386 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:45:24 crc kubenswrapper[4681]: E1123 06:45:24.251386 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.251029 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:45:24 crc kubenswrapper[4681]: E1123 06:45:24.251473 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.251853 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.251952 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.252011 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.252085 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.252144 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:24Z","lastTransitionTime":"2025-11-23T06:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.355032 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.355121 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.355190 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.355260 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.355319 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:24Z","lastTransitionTime":"2025-11-23T06:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.456760 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.456787 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.456796 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.456805 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.456813 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:24Z","lastTransitionTime":"2025-11-23T06:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.558927 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.559152 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.559220 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.559282 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.559349 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:24Z","lastTransitionTime":"2025-11-23T06:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.661652 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.661693 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.661702 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.661717 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.661726 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:24Z","lastTransitionTime":"2025-11-23T06:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.763949 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.763997 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.764013 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.764033 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.764044 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:24Z","lastTransitionTime":"2025-11-23T06:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.866376 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.866716 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.866785 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.866855 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.866923 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:24Z","lastTransitionTime":"2025-11-23T06:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.969024 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.969059 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.969067 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.969083 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:24 crc kubenswrapper[4681]: I1123 06:45:24.969092 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:24Z","lastTransitionTime":"2025-11-23T06:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.070816 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.070943 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.071012 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.071088 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.071149 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:25Z","lastTransitionTime":"2025-11-23T06:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.173572 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.173600 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.173611 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.173622 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.173629 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:25Z","lastTransitionTime":"2025-11-23T06:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.251775 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:45:25 crc kubenswrapper[4681]: E1123 06:45:25.251962 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.275227 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.275256 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.275267 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.275299 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.275311 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:25Z","lastTransitionTime":"2025-11-23T06:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.376825 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.376865 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.376874 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.376886 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.376900 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:25Z","lastTransitionTime":"2025-11-23T06:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.478631 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.478663 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.478674 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.478689 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.478698 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:25Z","lastTransitionTime":"2025-11-23T06:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.580953 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.581189 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.581246 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.581320 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.581376 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:25Z","lastTransitionTime":"2025-11-23T06:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.683309 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.683509 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.683594 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.683671 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.683755 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:25Z","lastTransitionTime":"2025-11-23T06:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.785312 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.785347 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.785357 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.785414 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.785426 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:25Z","lastTransitionTime":"2025-11-23T06:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.888037 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.888163 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.888221 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.888283 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.888334 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:25Z","lastTransitionTime":"2025-11-23T06:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.989904 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.989974 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.989984 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.990017 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:25 crc kubenswrapper[4681]: I1123 06:45:25.990026 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:25Z","lastTransitionTime":"2025-11-23T06:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.010636 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.010657 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.010665 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.010676 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.010684 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:26Z","lastTransitionTime":"2025-11-23T06:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:26 crc kubenswrapper[4681]: E1123 06:45:26.019240 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:26Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.021362 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.021386 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.021394 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.021404 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.021411 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:26Z","lastTransitionTime":"2025-11-23T06:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:26 crc kubenswrapper[4681]: E1123 06:45:26.028677 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:26Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.030524 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.030546 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
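The Ready=False condition above will keep repeating until the network plugin writes a CNI configuration into /etc/kubernetes/cni/net.d/, which is exactly what the kubelet message says is missing. As a rough illustration (a sketch, not the kubelet's or CRI-O's actual code), the presence check behind that message can be approximated in Go, with the directory taken from the log line:

package main

import (
	"fmt"
	"path/filepath"
)

// Sketch only: report whether any CNI network config exists in the
// directory named by the kubelet error above. The real container runtime
// does more than this; it also parses and validates each .conf/.conflist
// file it finds before declaring the network ready.
func main() {
	matches, err := filepath.Glob("/etc/kubernetes/cni/net.d/*.conf*")
	if err != nil {
		fmt.Println("glob error:", err)
		return
	}
	if len(matches) == 0 {
		fmt.Println("NetworkReady=false: no CNI configuration file found")
		return
	}
	for _, m := range matches {
		fmt.Println("found CNI config:", m)
	}
}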
event="NodeHasNoDiskPressure" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.030553 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.030562 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.030569 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:26Z","lastTransitionTime":"2025-11-23T06:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:26 crc kubenswrapper[4681]: E1123 06:45:26.037780 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:26Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.039905 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.040000 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
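Every status patch in this stretch of the log fails identically: the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 serves a certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-11-23. A minimal sketch for confirming the expiry from the node, assuming the endpoint is reachable (this tool is not part of the log's stack):

package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

// Sketch: dial the webhook endpoint from the error above and print the
// serving certificate's validity window. InsecureSkipVerify is used only
// because the point is to inspect an already-expired certificate, not to
// trust it.
func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("NotBefore:", cert.NotBefore.Format(time.RFC3339))
	fmt.Println("NotAfter: ", cert.NotAfter.Format(time.RFC3339))
	if time.Now().After(cert.NotAfter) {
		// Matches "x509: certificate has expired or is not yet valid".
		fmt.Println("certificate has expired")
	}
}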
event="NodeHasNoDiskPressure" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.040074 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.040135 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.040190 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:26Z","lastTransitionTime":"2025-11-23T06:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:26 crc kubenswrapper[4681]: E1123 06:45:26.048018 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:26Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.050113 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.050151 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
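The repeated "Error updating node status, will retry" entries end in "update node status exceeds retry count" just below: the kubelet attempts the status update a fixed number of times per sync (nodeStatusUpdateRetry in the kubelet source, historically 5, which matches the failed attempts in this log) before giving up until the next interval. A simplified sketch of that bounded-retry shape, not the actual kubelet code:

package main

import (
	"errors"
	"fmt"
)

// Simplified sketch of the bounded retry visible in the log: each attempt
// logs "Error updating node status, will retry"; after the limit, the
// caller reports "update node status exceeds retry count".
const nodeStatusUpdateRetry = 5 // assumed limit, matching the attempts above

func tryUpdateNodeStatus(attempt int) error {
	// Stand-in for the real PATCH call; here every attempt fails the way
	// the webhook rejection does in this log.
	return errors.New("failed calling webhook: certificate has expired")
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryUpdateNodeStatus(i); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return fmt.Errorf("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println(err)
	}
}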
event="NodeHasNoDiskPressure" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.050160 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.050169 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.050176 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:26Z","lastTransitionTime":"2025-11-23T06:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:26 crc kubenswrapper[4681]: E1123 06:45:26.058123 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:26Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:26 crc kubenswrapper[4681]: E1123 06:45:26.058235 4681 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.091648 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.091673 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.091682 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.091693 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.091701 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:26Z","lastTransitionTime":"2025-11-23T06:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.193129 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.193156 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.193165 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.193176 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.193183 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:26Z","lastTransitionTime":"2025-11-23T06:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.251586 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.251586 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.251597 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:45:26 crc kubenswrapper[4681]: E1123 06:45:26.251686 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:45:26 crc kubenswrapper[4681]: E1123 06:45:26.251765 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:45:26 crc kubenswrapper[4681]: E1123 06:45:26.251827 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.294961 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.294995 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.295003 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.295016 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.295026 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:26Z","lastTransitionTime":"2025-11-23T06:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.397167 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.397190 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.397198 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.397207 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.397215 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:26Z","lastTransitionTime":"2025-11-23T06:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.499061 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.499092 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.499100 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.499113 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.499123 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:26Z","lastTransitionTime":"2025-11-23T06:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.600421 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.600490 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.600500 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.600511 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.600520 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:26Z","lastTransitionTime":"2025-11-23T06:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.702485 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.702524 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.702533 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.702548 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.702560 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:26Z","lastTransitionTime":"2025-11-23T06:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.804835 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.804976 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.805036 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.805102 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.805154 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:26Z","lastTransitionTime":"2025-11-23T06:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.906814 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.906973 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.907061 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.907136 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:26 crc kubenswrapper[4681]: I1123 06:45:26.907198 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:26Z","lastTransitionTime":"2025-11-23T06:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.008398 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.008520 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.008589 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.008658 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.008719 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:27Z","lastTransitionTime":"2025-11-23T06:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.110780 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.110799 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.110808 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.110816 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.110825 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:27Z","lastTransitionTime":"2025-11-23T06:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.212271 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.212380 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.212447 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.212533 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.212609 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:27Z","lastTransitionTime":"2025-11-23T06:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.251854 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:45:27 crc kubenswrapper[4681]: E1123 06:45:27.251962 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.313849 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.313887 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.313896 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.313912 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.313923 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:27Z","lastTransitionTime":"2025-11-23T06:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.415352 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.415378 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.415386 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.415396 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.415404 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:27Z","lastTransitionTime":"2025-11-23T06:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.517350 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.517391 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.517401 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.517415 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.517426 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:27Z","lastTransitionTime":"2025-11-23T06:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.619783 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.619808 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.619816 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.619826 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.619833 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:27Z","lastTransitionTime":"2025-11-23T06:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.721337 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.721363 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.721372 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.721385 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.721392 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:27Z","lastTransitionTime":"2025-11-23T06:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.822983 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.823000 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.823008 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.823017 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.823024 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:27Z","lastTransitionTime":"2025-11-23T06:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.924960 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.924992 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.925001 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.925013 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:27 crc kubenswrapper[4681]: I1123 06:45:27.925021 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:27Z","lastTransitionTime":"2025-11-23T06:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.026602 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.026645 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.026656 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.026666 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.026675 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:28Z","lastTransitionTime":"2025-11-23T06:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.128187 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.128211 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.128220 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.128229 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.128236 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:28Z","lastTransitionTime":"2025-11-23T06:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.229492 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.229526 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.229535 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.229547 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.229573 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:28Z","lastTransitionTime":"2025-11-23T06:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.251020 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.251025 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.251041 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:45:28 crc kubenswrapper[4681]: E1123 06:45:28.251295 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:45:28 crc kubenswrapper[4681]: E1123 06:45:28.251392 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.251484 4681 scope.go:117] "RemoveContainer" containerID="10bb81ddcec9ee17f50d5acae6e282ca44420543fc8ea84ae1ced5c491e1dd4e" Nov 23 06:45:28 crc kubenswrapper[4681]: E1123 06:45:28.251500 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:45:28 crc kubenswrapper[4681]: E1123 06:45:28.251627 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-l6bqb_openshift-ovn-kubernetes(1abfb530-b7ac-4724-8e43-d87ef92f1949)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.331615 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.331644 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.331656 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.331667 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.331677 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:28Z","lastTransitionTime":"2025-11-23T06:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.433495 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.433517 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.433528 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.433537 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.433547 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:28Z","lastTransitionTime":"2025-11-23T06:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.534993 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.535024 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.535034 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.535044 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.535052 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:28Z","lastTransitionTime":"2025-11-23T06:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.636288 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.636336 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.636346 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.636356 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.636364 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:28Z","lastTransitionTime":"2025-11-23T06:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.738112 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.738146 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.738156 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.738186 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.738210 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:28Z","lastTransitionTime":"2025-11-23T06:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.839353 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.839452 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.839613 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.839707 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.839793 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:28Z","lastTransitionTime":"2025-11-23T06:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.941663 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.941688 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.941697 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.941709 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:28 crc kubenswrapper[4681]: I1123 06:45:28.941718 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:28Z","lastTransitionTime":"2025-11-23T06:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.042866 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.042899 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.042908 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.042922 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.042934 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:29Z","lastTransitionTime":"2025-11-23T06:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.144666 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.144692 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.144716 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.144730 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.144738 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:29Z","lastTransitionTime":"2025-11-23T06:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.246622 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.246722 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.246785 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.246856 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.246923 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:29Z","lastTransitionTime":"2025-11-23T06:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.250816 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:45:29 crc kubenswrapper[4681]: E1123 06:45:29.250924 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.348250 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.348377 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.348441 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.348515 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.348565 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:29Z","lastTransitionTime":"2025-11-23T06:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.450556 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.450661 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.450719 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.450789 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.450854 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:29Z","lastTransitionTime":"2025-11-23T06:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.552507 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.552573 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.552584 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.552599 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.552612 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:29Z","lastTransitionTime":"2025-11-23T06:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.654131 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.654162 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.654173 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.654200 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.654214 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:29Z","lastTransitionTime":"2025-11-23T06:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.760487 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.760612 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.760699 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.760761 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.760831 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:29Z","lastTransitionTime":"2025-11-23T06:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.862544 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.862574 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.862582 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.862595 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.862605 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:29Z","lastTransitionTime":"2025-11-23T06:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.964093 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.964132 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.964141 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.964156 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:29 crc kubenswrapper[4681]: I1123 06:45:29.964169 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:29Z","lastTransitionTime":"2025-11-23T06:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.065999 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.066018 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.066027 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.066038 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.066047 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:30Z","lastTransitionTime":"2025-11-23T06:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.167402 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.167434 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.167445 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.167490 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.167502 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:30Z","lastTransitionTime":"2025-11-23T06:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.251107 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.251130 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.251130 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:45:30 crc kubenswrapper[4681]: E1123 06:45:30.251225 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:45:30 crc kubenswrapper[4681]: E1123 06:45:30.251311 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:45:30 crc kubenswrapper[4681]: E1123 06:45:30.251388 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.269309 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.269346 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.269359 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.269374 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.269391 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:30Z","lastTransitionTime":"2025-11-23T06:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.371112 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.371141 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.371150 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.371160 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.371167 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:30Z","lastTransitionTime":"2025-11-23T06:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.473222 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.473262 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.473272 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.473289 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.473300 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:30Z","lastTransitionTime":"2025-11-23T06:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.575517 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.575576 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.575585 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.575602 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.575610 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:30Z","lastTransitionTime":"2025-11-23T06:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.677363 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.677415 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.677428 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.677447 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.677478 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:30Z","lastTransitionTime":"2025-11-23T06:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.778996 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.779026 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.779035 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.779047 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.779058 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:30Z","lastTransitionTime":"2025-11-23T06:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.880973 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.880996 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.881005 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.881016 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.881023 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:30Z","lastTransitionTime":"2025-11-23T06:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.982264 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.982296 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.982306 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.982320 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:30 crc kubenswrapper[4681]: I1123 06:45:30.982329 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:30Z","lastTransitionTime":"2025-11-23T06:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.083883 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.083906 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.083914 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.083923 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.083932 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:31Z","lastTransitionTime":"2025-11-23T06:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.185341 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.185381 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.185389 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.185399 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.185406 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:31Z","lastTransitionTime":"2025-11-23T06:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.251652 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:45:31 crc kubenswrapper[4681]: E1123 06:45:31.251744 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.286651 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.286677 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.286686 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.286697 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.286705 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:31Z","lastTransitionTime":"2025-11-23T06:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.387674 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.387695 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.387703 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.387712 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.387720 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:31Z","lastTransitionTime":"2025-11-23T06:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.489577 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.489600 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.489608 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.489651 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.489661 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:31Z","lastTransitionTime":"2025-11-23T06:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.591605 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.591633 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.591642 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.591654 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.591669 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:31Z","lastTransitionTime":"2025-11-23T06:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.692743 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.692796 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.692817 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.692829 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.692843 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:31Z","lastTransitionTime":"2025-11-23T06:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.794299 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.794339 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.794350 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.794363 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.794374 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:31Z","lastTransitionTime":"2025-11-23T06:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.895983 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.896010 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.896019 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.896030 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.896038 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:31Z","lastTransitionTime":"2025-11-23T06:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.997870 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.997915 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.997924 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.997942 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:31 crc kubenswrapper[4681]: I1123 06:45:31.997952 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:31Z","lastTransitionTime":"2025-11-23T06:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.099246 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.099286 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.099296 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.099307 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.099314 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:32Z","lastTransitionTime":"2025-11-23T06:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.201434 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.201487 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.201497 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.201521 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.201532 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:32Z","lastTransitionTime":"2025-11-23T06:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.250745 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.250812 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:45:32 crc kubenswrapper[4681]: E1123 06:45:32.250850 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.250745 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:45:32 crc kubenswrapper[4681]: E1123 06:45:32.250920 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:45:32 crc kubenswrapper[4681]: E1123 06:45:32.250985 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.303147 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.303169 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.303179 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.303190 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.303199 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:32Z","lastTransitionTime":"2025-11-23T06:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.404263 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.404304 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.404319 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.404335 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.404346 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:32Z","lastTransitionTime":"2025-11-23T06:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.505984 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.506084 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.506153 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.506220 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.506283 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:32Z","lastTransitionTime":"2025-11-23T06:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.607943 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.607975 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.607984 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.608015 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.608027 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:32Z","lastTransitionTime":"2025-11-23T06:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.709784 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.709825 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.709835 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.709849 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.709859 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:32Z","lastTransitionTime":"2025-11-23T06:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.811024 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.811055 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.811065 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.811081 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.811093 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:32Z","lastTransitionTime":"2025-11-23T06:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.913039 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.913137 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.913212 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.913271 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:32 crc kubenswrapper[4681]: I1123 06:45:32.913331 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:32Z","lastTransitionTime":"2025-11-23T06:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.014900 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.014997 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.015063 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.015123 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.015175 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:33Z","lastTransitionTime":"2025-11-23T06:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.116997 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.117027 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.117046 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.117060 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.117069 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:33Z","lastTransitionTime":"2025-11-23T06:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.218714 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.218750 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.218760 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.218782 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.218793 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:33Z","lastTransitionTime":"2025-11-23T06:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.251207 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:45:33 crc kubenswrapper[4681]: E1123 06:45:33.251299 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.260786 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kv72z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eef1a94-78a8-4389-b1fe-2db3786ba043\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnhcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnhcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kv72z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:33Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.270550 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:33Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.279533 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:33Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.288241 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:33Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.296304 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:33Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.302775 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jcxvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8b960e-690a-4772-8373-bce89d00cb17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae5de3ab9fa4043cfbb22d534f986fd7c9318c8e1a7f249cfe50b07f32f04ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2d22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jcxvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:33Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.309654 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"842356bd-1174-4109-a183-b368c16f3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a30a93104ef4dbbe5288684d627e4f4ca7e4477edf99c2012169a7c086900352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b762cf0aee0bbca586dc835d6be4a69921f2f0d6a11262bbea1df14352fd3822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:00Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jvlq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:33Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.318836 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:33Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.319911 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.319935 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.319945 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.319959 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.319968 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:33Z","lastTransitionTime":"2025-11-23T06:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.326549 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"539dc58c-e752-43c8-bdef-af87528b76f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10301d5307825891afb0c5a8a37015569d3275b9fdbb69135656db11a5cd6ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wh4gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:33Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.334114 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2lhx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8k44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2lhx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:33Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.342866 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83e4c166-3ace-4773-86cd-fe2bdd216426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039e197d1ef78785cbcf351f1ec80ef09f3c9e61504351fa7a2daa5d1e298bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qgr2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:33Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.351133 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506
ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:33Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.358926 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:33Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.370443 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abfb530-b7ac-4724-8e43-d87ef92f1949\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bb81ddcec9ee17f50d5acae6e282ca44420543fc8ea84ae1ced5c491e1dd4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bb81ddcec9ee17f50d5acae6e282ca44420543fc8ea84ae1ced5c491e1dd4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:45:13Z\\\",\\\"message\\\":\\\"7594bb65-e742-44b3-a975-d639b1128be5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1123 06:45:13.860180 6266 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-wh4gt\\\\nI1123 06:45:13.860186 6266 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-wh4gt\\\\nI1123 06:45:13.860184 6266 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-machine-webhook\\\\\\\"}\\\\nI1123 06:45:13.860192 6266 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-wh4gt in node crc\\\\nI1123 06:45:13.860195 6266 services_controller.go:360] Finished syncing service machine-api-operator-machine-webhook on namespace openshift-machine-api for network=default : 1.532753ms\\\\nI1123 06:45:13.860202 6266 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1123 06:45:13.860206 6266 services_controller.go:356] Processing sync for service openshift-dns/dns-default for network=default\\\\nI1123 06:45:13.860212 6266 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1123 06:45:13.860218 6266 obj_ret\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l6bqb_openshift-ovn-kubernetes(1abfb530-b7ac-4724-8e43-d87ef92f1949)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l6bqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:33Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.378020 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a373ee-ee00-4ed1-b208-095d302ac31b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4004d43474bcbff07bbc45d42feefffb8f41e26f0d34bcec50b9c17ea8795a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d891ac3bcc1513a349fc37f6cceedb64e89b41f92dc098ac6c0ffc074e6cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c007b94529ec5fe2c0606433986e94de3bf63772bd1291e55b4d06080471393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83eb8cfb97a65f9516f9973a491cd60aacd32bf59681f45f60402f8bbf6b1c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb8cfb97a65f9516f9973a491cd60aacd32bf59681f45f60402f8bbf6b1c95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:33Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.384968 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:33Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.391298 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l7wvz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"095e645f-7b07-4702-87f0-f3b9a6197d9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://730b2d1bf4245510d9c2ab933abbf82d3c7e7d172e6f382b691db27a598fc8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrq5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l7wvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:33Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.400667 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6eef1a94-78a8-4389-b1fe-2db3786ba043-metrics-certs\") pod \"network-metrics-daemon-kv72z\" (UID: \"6eef1a94-78a8-4389-b1fe-2db3786ba043\") " pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:45:33 crc kubenswrapper[4681]: E1123 06:45:33.400793 4681 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 06:45:33 crc kubenswrapper[4681]: E1123 06:45:33.400881 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6eef1a94-78a8-4389-b1fe-2db3786ba043-metrics-certs podName:6eef1a94-78a8-4389-b1fe-2db3786ba043 nodeName:}" failed. No retries permitted until 2025-11-23 06:46:05.400862668 +0000 UTC m=+102.470371915 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6eef1a94-78a8-4389-b1fe-2db3786ba043-metrics-certs") pod "network-metrics-daemon-kv72z" (UID: "6eef1a94-78a8-4389-b1fe-2db3786ba043") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.421939 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.421970 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.421978 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.422008 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.422017 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:33Z","lastTransitionTime":"2025-11-23T06:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.523149 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.523179 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.523187 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.523198 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.523207 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:33Z","lastTransitionTime":"2025-11-23T06:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.625202 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.625227 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.625236 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.625247 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.625255 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:33Z","lastTransitionTime":"2025-11-23T06:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.726949 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.727066 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.727122 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.727174 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.727220 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:33Z","lastTransitionTime":"2025-11-23T06:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.828744 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.828887 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.828952 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.829023 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.829083 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:33Z","lastTransitionTime":"2025-11-23T06:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.931291 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.931321 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.931330 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.931341 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:33 crc kubenswrapper[4681]: I1123 06:45:33.931350 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:33Z","lastTransitionTime":"2025-11-23T06:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.033493 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.033584 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.033639 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.033710 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.033761 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:34Z","lastTransitionTime":"2025-11-23T06:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.135320 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.135473 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.135545 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.135607 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.135674 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:34Z","lastTransitionTime":"2025-11-23T06:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.237685 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.237717 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.237728 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.237742 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.237755 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:34Z","lastTransitionTime":"2025-11-23T06:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.251055 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.251091 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:45:34 crc kubenswrapper[4681]: E1123 06:45:34.251153 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.251286 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:45:34 crc kubenswrapper[4681]: E1123 06:45:34.251414 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:45:34 crc kubenswrapper[4681]: E1123 06:45:34.251607 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.339581 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.339612 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.339621 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.339631 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.339640 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:34Z","lastTransitionTime":"2025-11-23T06:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.441548 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.441674 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.441737 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.441795 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.441862 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:34Z","lastTransitionTime":"2025-11-23T06:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.542987 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.543007 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.543016 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.543025 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.543032 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:34Z","lastTransitionTime":"2025-11-23T06:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.644615 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.644643 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.644651 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.644661 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.644669 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:34Z","lastTransitionTime":"2025-11-23T06:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.746326 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.746353 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.746363 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.746373 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.746384 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:34Z","lastTransitionTime":"2025-11-23T06:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.848240 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.848255 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.848263 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.848272 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.848281 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:34Z","lastTransitionTime":"2025-11-23T06:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.949294 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.949321 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.949332 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.949344 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:34 crc kubenswrapper[4681]: I1123 06:45:34.949355 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:34Z","lastTransitionTime":"2025-11-23T06:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.051089 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.051117 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.051144 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.051154 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.051161 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:35Z","lastTransitionTime":"2025-11-23T06:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.152552 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.152576 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.152586 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.152597 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.152605 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:35Z","lastTransitionTime":"2025-11-23T06:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.251619 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:45:35 crc kubenswrapper[4681]: E1123 06:45:35.251720 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.254055 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.254074 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.254084 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.254094 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.254101 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:35Z","lastTransitionTime":"2025-11-23T06:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.355531 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.355556 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.355565 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.355575 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.355583 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:35Z","lastTransitionTime":"2025-11-23T06:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.457336 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.457521 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.457596 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.457652 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.457709 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:35Z","lastTransitionTime":"2025-11-23T06:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.559183 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.559220 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.559228 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.559244 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.559254 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:35Z","lastTransitionTime":"2025-11-23T06:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.561190 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2lhx5_4094b291-8b0b-43c0-96e9-f08a9ef53c8b/kube-multus/0.log" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.561226 4681 generic.go:334] "Generic (PLEG): container finished" podID="4094b291-8b0b-43c0-96e9-f08a9ef53c8b" containerID="c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066" exitCode=1 Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.561249 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2lhx5" event={"ID":"4094b291-8b0b-43c0-96e9-f08a9ef53c8b","Type":"ContainerDied","Data":"c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066"} Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.561544 4681 scope.go:117] "RemoveContainer" containerID="c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.570515 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jcxvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8b960e-690a-4772-8373-bce89d00cb17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae5de3ab9fa4043cfbb22d534f986fd7c9318c8e1a7f249cfe50b07f32f04ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2d22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jcxvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:35Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.579860 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"842356bd-1174-4109-a183-b368c16f3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a30a93104ef4dbbe5288684d627e4f4ca7e4477edf99c2012169a7c086900352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b762cf0aee0bbca586dc835d6be4a69921f2f0d6a11262bbea1df14352fd3822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:
00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jvlq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:35Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.587917 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kv72z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eef1a94-78a8-4389-b1fe-2db3786ba043\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnhcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnhcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kv72z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:35Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.596655 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernete
s/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:35Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.604681 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:35Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.613452 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:35Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.621821 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:35Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.632097 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83e4c166-3ace-4773-86cd-fe2bdd216426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039e197d1ef78785cbcf351f1ec80ef09f3c9e61504351fa7a2daa5d1e298bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qgr2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:35Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.639980 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:35Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.646910 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"539dc58c-e752-43c8-bdef-af87528b76f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10301d5307825891afb0c5a8a37015569d3275b9fdbb69135656db11a5cd6ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wh4gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:35Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.655899 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2lhx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:45:35Z\\\",\\\"message\\\":\\\"2025-11-23T06:44:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_afbcfa5d-64e7-4204-9635-6f73dc5640b0\\\\n2025-11-23T06:44:49+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_afbcfa5d-64e7-4204-9635-6f73dc5640b0 to /host/opt/cni/bin/\\\\n2025-11-23T06:44:50Z [verbose] multus-daemon started\\\\n2025-11-23T06:44:50Z [verbose] Readiness Indicator file check\\\\n2025-11-23T06:45:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8k44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2lhx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:35Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.660945 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.660974 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.660982 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.660995 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.661004 4681 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:35Z","lastTransitionTime":"2025-11-23T06:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.666180 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08
287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:35Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.674445 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:35Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.687232 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abfb530-b7ac-4724-8e43-d87ef92f1949\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bb81ddcec9ee17f50d5acae6e282ca44420543fc8ea84ae1ced5c491e1dd4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bb81ddcec9ee17f50d5acae6e282ca44420543fc8ea84ae1ced5c491e1dd4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:45:13Z\\\",\\\"message\\\":\\\"7594bb65-e742-44b3-a975-d639b1128be5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1123 06:45:13.860180 6266 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-wh4gt\\\\nI1123 06:45:13.860186 6266 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-wh4gt\\\\nI1123 06:45:13.860184 6266 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-machine-webhook\\\\\\\"}\\\\nI1123 06:45:13.860192 6266 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-wh4gt in node crc\\\\nI1123 06:45:13.860195 6266 services_controller.go:360] Finished syncing service machine-api-operator-machine-webhook on namespace openshift-machine-api for network=default : 1.532753ms\\\\nI1123 06:45:13.860202 6266 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1123 06:45:13.860206 6266 services_controller.go:356] Processing sync for service openshift-dns/dns-default for network=default\\\\nI1123 06:45:13.860212 6266 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1123 06:45:13.860218 6266 obj_ret\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l6bqb_openshift-ovn-kubernetes(1abfb530-b7ac-4724-8e43-d87ef92f1949)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l6bqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:35Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.695702 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a373ee-ee00-4ed1-b208-095d302ac31b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4004d43474bcbff07bbc45d42feefffb8f41e26f0d34bcec50b9c17ea8795a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d891ac3bcc1513a349fc37f6cceedb64e89b41f92dc098ac6c0ffc074e6cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c007b94529ec5fe2c0606433986e94de3bf63772bd1291e55b4d06080471393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83eb8cfb97a65f9516f9973a491cd60aacd32bf59681f45f60402f8bbf6b1c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb8cfb97a65f9516f9973a491cd60aacd32bf59681f45f60402f8bbf6b1c95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:35Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.705156 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:35Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.712185 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l7wvz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"095e645f-7b07-4702-87f0-f3b9a6197d9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://730b2d1bf4245510d9c2ab933abbf82d3c7e7d172e6f382b691db27a598fc8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrq5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l7wvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:35Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.762879 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.762917 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.762927 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.762945 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.762955 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:35Z","lastTransitionTime":"2025-11-23T06:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.864575 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.864605 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.864613 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.864625 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:35 crc kubenswrapper[4681]: I1123 06:45:35.864635 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:35Z","lastTransitionTime":"2025-11-23T06:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[The same four node events and the same "Node became not ready" condition repeat at 06:45:35.966, 06:45:36.067, and 06:45:36.168; only the timestamps differ.]
Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.251221 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.251222 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.251421 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:45:36 crc kubenswrapper[4681]: E1123 06:45:36.251867 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 23 06:45:36 crc kubenswrapper[4681]: E1123 06:45:36.251662 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 06:45:36 crc kubenswrapper[4681]: E1123 06:45:36.251614 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[The same four node events and condition repeat at 06:45:36.270; the events repeat once more at 06:45:36.271, followed by:]
Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.271698 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:36Z","lastTransitionTime":"2025-11-23T06:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 23 06:45:36 crc kubenswrapper[4681]: E1123 06:45:36.280723 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:36Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.282957 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.282994 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure"
Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.283003 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.283019 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.283029 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:36Z","lastTransitionTime":"2025-11-23T06:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:36 crc kubenswrapper[4681]: E1123 06:45:36.291584 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [payload identical to the 06:45:36.280723 attempt above] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:36Z is after 2025-08-24T17:21:41Z"
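The patch is rejected by the node.network-node-identity.openshift.io admission webhook because the certificate served on 127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, long before the node clock's 2025-11-23 reading. A minimal Go sketch of how such an endpoint's certificate can be pulled and checked (a standalone diagnostic, not kubelet code; the address is taken from the log):

    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        // Dial the webhook endpoint from the log; skip verification so the
        // certificate can be read even though it is already expired.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()

        certs := conn.ConnectionState().PeerCertificates
        if len(certs) == 0 {
            fmt.Println("no peer certificate presented")
            return
        }
        cert := certs[0]
        fmt.Printf("notBefore=%s notAfter=%s expiredNow=%t\n",
            cert.NotBefore.Format(time.RFC3339),
            cert.NotAfter.Format(time.RFC3339),
            time.Now().After(cert.NotAfter))
    }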
Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.293473 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
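For scale, the two RFC 3339 timestamps quoted in the x509 error put the node clock roughly three months past the certificate's notAfter; a trivial Go check using only those two values:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps quoted verbatim from the x509 error above; parse
        // errors are ignored because the inputs are fixed literals.
        now, _ := time.Parse(time.RFC3339, "2025-11-23T06:45:36Z")
        notAfter, _ := time.Parse(time.RFC3339, "2025-08-24T17:21:41Z")
        fmt.Println(now.Sub(notAfter)) // prints 2173h23m55s, about 90.5 days
    }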
Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.293496 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.293507 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.293519 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.293527 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:36Z","lastTransitionTime":"2025-11-23T06:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:36 crc kubenswrapper[4681]: E1123 06:45:36.300493 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [payload identical to the 06:45:36.280723 attempt above] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:36Z is after 2025-08-24T17:21:41Z"
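Every NotReady heartbeat in this stretch has the same root cause: the network plugin finds no CNI configuration file under /etc/kubernetes/cni/net.d/. A rough sketch of that directory probe (illustrative only; the accepted extensions follow common CNI convention, and the real kubelet/CRI-O logic is more involved):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/kubernetes/cni/net.d" // directory named in the log
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("cannot read", dir, "-", err)
            return
        }
        var configs []string
        for _, e := range entries {
            // CNI config loaders conventionally accept these extensions.
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                configs = append(configs, e.Name())
            }
        }
        if len(configs) == 0 {
            fmt.Println("no CNI configuration file in", dir)
            return
        }
        fmt.Println("found:", configs)
    }

Printing the same wording as the log line keeps the sketch directly comparable with the kubelet's message.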
Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.302218 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
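The condition printed by setters.go in each "Node became not ready" entry is plain JSON, so it can be decoded directly; a small sketch using one entry verbatim from the log (the struct is mine, not a kubelet type):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // condition mirrors the fields shown in the log's condition JSON.
    type condition struct {
        Type               string `json:"type"`
        Status             string `json:"status"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Reason             string `json:"reason"`
        Message            string `json:"message"`
    }

    func main() {
        raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:36Z","lastTransitionTime":"2025-11-23T06:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
        var c condition
        if err := json.Unmarshal([]byte(raw), &c); err != nil {
            fmt.Println("decode failed:", err)
            return
        }
        fmt.Printf("%s=%s since %s (%s)\n", c.Type, c.Status, c.LastTransitionTime, c.Reason)
    }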
Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.302243 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.302252 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.302262 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.302270 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:36Z","lastTransitionTime":"2025-11-23T06:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:36 crc kubenswrapper[4681]: E1123 06:45:36.309370 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [payload identical to the 06:45:36.280723 attempt above] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:36Z is after 2025-08-24T17:21:41Z"
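The rhythm above, a failed "Error updating node status, will retry" followed milliseconds later by a fresh set of node events, is the kubelet retrying its status patch a small, fixed number of times per sync loop; five failed attempts are visible in this stretch. A shape-only sketch of such a bounded retry (the constant and helper are illustrative, not the kubelet's actual code):

    package main

    import (
        "errors"
        "fmt"
    )

    // Assumed bound, matching the five attempts visible in the log.
    const nodeStatusUpdateRetry = 5

    // patchNodeStatus stands in for the PATCH that the webhook rejects.
    func patchNodeStatus() error {
        return errors.New("tls: failed to verify certificate: x509: certificate has expired")
    }

    func main() {
        for attempt := 1; attempt <= nodeStatusUpdateRetry; attempt++ {
            err := patchNodeStatus()
            if err == nil {
                return
            }
            fmt.Printf("attempt %d: error updating node status, will retry: %v\n", attempt, err)
        }
        fmt.Println("unable to update node status after", nodeStatusUpdateRetry, "attempts")
    }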
Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.311037 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.311057 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.311065 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.311074 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.311081 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:36Z","lastTransitionTime":"2025-11-23T06:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:36 crc kubenswrapper[4681]: E1123 06:45:36.318302 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:36Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:36 crc kubenswrapper[4681]: E1123 06:45:36.318437 4681 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.374859 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.374891 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.374901 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.374912 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.374923 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:36Z","lastTransitionTime":"2025-11-23T06:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.476203 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.476228 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.476236 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.476246 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.476282 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:36Z","lastTransitionTime":"2025-11-23T06:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.564546 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2lhx5_4094b291-8b0b-43c0-96e9-f08a9ef53c8b/kube-multus/0.log" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.564579 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2lhx5" event={"ID":"4094b291-8b0b-43c0-96e9-f08a9ef53c8b","Type":"ContainerStarted","Data":"85fe493c1777c5f063e67eac13f4c3417da679d1376c258907c8008b544bdbb4"} Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.573380 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a373ee-ee00-4ed1-b208-095d302ac31b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4004d43474bcbff07bbc45d42feefffb8f41e26f0d34bcec50b9c17ea8795a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d891ac3bcc1513a349fc37f6cceedb64e89b41f92dc098ac6c0ffc074e6cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c007b94529ec5fe2c0606433986e94de3bf63772bd1291e55b4d06080471393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83eb8cfb97a65f9516f9973a491cd60aacd32bf59681f45f60402f8bbf6b1c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb8cfb97a65f9516f9973a491cd60aacd32bf59681f45f60402f8bbf6b1c95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:36Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.578040 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.578060 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.578067 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.578076 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.578085 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:36Z","lastTransitionTime":"2025-11-23T06:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.581014 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:36Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.586840 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l7wvz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"095e645f-7b07-4702-87f0-f3b9a6197d9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://730b2d1bf4245510d9c2ab933abbf82d3c7e7d172e6f382b691db27a598fc8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrq5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l7wvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:36Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.592563 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jcxvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8b960e-690a-4772-8373-bce89d00cb17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae5de3ab9fa4043cfbb22d534f986fd7c9318c8e1a7f249cfe50b07f32f04ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2d22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jcxvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:36Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.599116 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"842356bd-1174-4109-a183-b368c16f3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a30a93104ef4dbbe5288684d627e4f4ca7e4477edf99c2012169a7c086900352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b762cf0aee0bbca586dc835d6be4a69921f2f0d6a11262bbea1df14352fd3822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jvlq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:36Z is after 2025-08-24T17:21:41Z" Nov 23 
06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.605836 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kv72z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eef1a94-78a8-4389-b1fe-2db3786ba043\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnhcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnhcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kv72z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:36Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.614961 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:36Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.623979 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:36Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.632250 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:36Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.639318 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:36Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.649054 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83e4c166-3ace-4773-86cd-fe2bdd216426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039e197d1ef78785cbcf351f1ec80ef09f3c9e61504351fa7a2daa5d1e298bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qgr2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:36Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.656864 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:36Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.663479 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"539dc58c-e752-43c8-bdef-af87528b76f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10301d5307825891afb0c5a8a37015569d3275b9fdbb69135656db11a5cd6ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wh4gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:36Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.671266 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2lhx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fe493c1777c5f063e67eac13f4c3417da679d1376c258907c8008b544bdbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:45:35Z\\\",\\\"message\\\":\\\"2025-11-23T06:44:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_afbcfa5d-64e7-4204-9635-6f73dc5640b0\\\\n2025-11-23T06:44:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_afbcfa5d-64e7-4204-9635-6f73dc5640b0 to /host/opt/cni/bin/\\\\n2025-11-23T06:44:50Z [verbose] multus-daemon started\\\\n2025-11-23T06:44:50Z [verbose] Readiness Indicator file check\\\\n2025-11-23T06:45:35Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8k44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2lhx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:36Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.680346 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.680371 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.680380 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.680392 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.680401 4681 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:36Z","lastTransitionTime":"2025-11-23T06:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.681329 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:36Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.690177 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:36Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.702837 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abfb530-b7ac-4724-8e43-d87ef92f1949\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bb81ddcec9ee17f50d5acae6e282ca44420543fc8ea84ae1ced5c491e1dd4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bb81ddcec9ee17f50d5acae6e282ca44420543fc8ea84ae1ced5c491e1dd4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:45:13Z\\\",\\\"message\\\":\\\"7594bb65-e742-44b3-a975-d639b1128be5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1123 06:45:13.860180 6266 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-wh4gt\\\\nI1123 06:45:13.860186 6266 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-wh4gt\\\\nI1123 06:45:13.860184 6266 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-machine-webhook\\\\\\\"}\\\\nI1123 06:45:13.860192 6266 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-wh4gt in node crc\\\\nI1123 06:45:13.860195 6266 services_controller.go:360] Finished syncing service machine-api-operator-machine-webhook on namespace openshift-machine-api for network=default : 1.532753ms\\\\nI1123 06:45:13.860202 6266 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1123 06:45:13.860206 6266 services_controller.go:356] Processing sync for service openshift-dns/dns-default for network=default\\\\nI1123 06:45:13.860212 6266 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1123 06:45:13.860218 6266 obj_ret\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l6bqb_openshift-ovn-kubernetes(1abfb530-b7ac-4724-8e43-d87ef92f1949)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l6bqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:36Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.782580 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.782602 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.782611 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.782622 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.782630 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:36Z","lastTransitionTime":"2025-11-23T06:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.884649 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.884758 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.884835 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.884920 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.884984 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:36Z","lastTransitionTime":"2025-11-23T06:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.987141 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.987167 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.987177 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.987190 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:36 crc kubenswrapper[4681]: I1123 06:45:36.987199 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:36Z","lastTransitionTime":"2025-11-23T06:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.089279 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.089302 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.089310 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.089321 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.089329 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:37Z","lastTransitionTime":"2025-11-23T06:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.191620 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.191648 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.191656 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.191669 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.191678 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:37Z","lastTransitionTime":"2025-11-23T06:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.251568 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:45:37 crc kubenswrapper[4681]: E1123 06:45:37.251701 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.259946 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.293119 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.293166 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.293176 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.293189 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.293198 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:37Z","lastTransitionTime":"2025-11-23T06:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.395399 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.395423 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.395430 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.395455 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.395487 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:37Z","lastTransitionTime":"2025-11-23T06:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.497221 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.497247 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.497255 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.497267 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.497290 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:37Z","lastTransitionTime":"2025-11-23T06:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.599042 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.599116 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.599125 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.599134 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.599142 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:37Z","lastTransitionTime":"2025-11-23T06:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.701151 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.701181 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.701235 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.701250 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.701258 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:37Z","lastTransitionTime":"2025-11-23T06:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.803121 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.803143 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.803154 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.803164 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.803195 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:37Z","lastTransitionTime":"2025-11-23T06:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.905305 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.905327 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.905337 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.905347 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:37 crc kubenswrapper[4681]: I1123 06:45:37.905370 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:37Z","lastTransitionTime":"2025-11-23T06:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.007273 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.007321 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.007331 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.007343 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.007350 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:38Z","lastTransitionTime":"2025-11-23T06:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.109391 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.109430 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.109439 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.109475 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.109487 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:38Z","lastTransitionTime":"2025-11-23T06:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.211056 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.211092 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.211101 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.211114 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.211124 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:38Z","lastTransitionTime":"2025-11-23T06:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.251071 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.251145 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.251291 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:45:38 crc kubenswrapper[4681]: E1123 06:45:38.251515 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:45:38 crc kubenswrapper[4681]: E1123 06:45:38.251632 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:45:38 crc kubenswrapper[4681]: E1123 06:45:38.251754 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.313645 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.313693 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.313703 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.313714 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.313724 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:38Z","lastTransitionTime":"2025-11-23T06:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.416397 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.416427 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.416435 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.416448 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.416482 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:38Z","lastTransitionTime":"2025-11-23T06:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.518045 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.518071 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.518080 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.518093 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.518102 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:38Z","lastTransitionTime":"2025-11-23T06:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.619868 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.620160 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.620235 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.620307 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.620377 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:38Z","lastTransitionTime":"2025-11-23T06:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.722610 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.722648 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.722658 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.722670 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.722679 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:38Z","lastTransitionTime":"2025-11-23T06:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.824392 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.824415 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.824423 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.824433 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.824482 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:38Z","lastTransitionTime":"2025-11-23T06:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.926167 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.926239 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.926276 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.926286 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:38 crc kubenswrapper[4681]: I1123 06:45:38.926295 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:38Z","lastTransitionTime":"2025-11-23T06:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.028996 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.029030 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.029039 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.029053 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.029062 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:39Z","lastTransitionTime":"2025-11-23T06:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.130753 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.130794 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.130807 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.130838 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.130849 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:39Z","lastTransitionTime":"2025-11-23T06:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.232648 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.232816 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.232905 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.232973 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.233035 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:39Z","lastTransitionTime":"2025-11-23T06:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.251287 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:45:39 crc kubenswrapper[4681]: E1123 06:45:39.251394 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.334723 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.334772 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.334786 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.334802 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.334813 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:39Z","lastTransitionTime":"2025-11-23T06:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.436202 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.436225 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.436257 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.436268 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.436275 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:39Z","lastTransitionTime":"2025-11-23T06:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.538230 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.538264 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.538273 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.538287 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.538298 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:39Z","lastTransitionTime":"2025-11-23T06:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.640543 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.640570 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.640579 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.640607 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.640615 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:39Z","lastTransitionTime":"2025-11-23T06:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.742602 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.742633 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.742641 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.742655 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.742664 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:39Z","lastTransitionTime":"2025-11-23T06:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.844291 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.844312 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.844320 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.844329 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.844338 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:39Z","lastTransitionTime":"2025-11-23T06:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.946144 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.946172 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.946180 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.946212 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:39 crc kubenswrapper[4681]: I1123 06:45:39.946220 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:39Z","lastTransitionTime":"2025-11-23T06:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.047488 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.047515 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.047525 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.047534 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.047542 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:40Z","lastTransitionTime":"2025-11-23T06:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.149033 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.149087 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.149101 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.149122 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.149138 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:40Z","lastTransitionTime":"2025-11-23T06:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.250773 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.250801 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.250810 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.250820 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.250838 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:40Z","lastTransitionTime":"2025-11-23T06:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.250890 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.250898 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.250907 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:45:40 crc kubenswrapper[4681]: E1123 06:45:40.250976 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:45:40 crc kubenswrapper[4681]: E1123 06:45:40.251080 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:45:40 crc kubenswrapper[4681]: E1123 06:45:40.251162 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.352486 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.352587 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.352598 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.352608 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.352618 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:40Z","lastTransitionTime":"2025-11-23T06:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.454196 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.454239 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.454248 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.454265 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.454275 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:40Z","lastTransitionTime":"2025-11-23T06:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.556377 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.556407 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.556415 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.556429 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.556437 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:40Z","lastTransitionTime":"2025-11-23T06:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.658131 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.658165 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.658175 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.658186 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.658196 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:40Z","lastTransitionTime":"2025-11-23T06:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.759751 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.759782 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.759790 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.759800 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.759807 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:40Z","lastTransitionTime":"2025-11-23T06:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.861513 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.861554 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.861565 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.861581 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.861592 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:40Z","lastTransitionTime":"2025-11-23T06:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.963307 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.963332 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.963339 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.963349 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:40 crc kubenswrapper[4681]: I1123 06:45:40.963358 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:40Z","lastTransitionTime":"2025-11-23T06:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.065200 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.065234 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.065243 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.065254 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.065262 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:41Z","lastTransitionTime":"2025-11-23T06:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.166725 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.166759 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.166767 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.166777 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.166785 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:41Z","lastTransitionTime":"2025-11-23T06:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.251784 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:45:41 crc kubenswrapper[4681]: E1123 06:45:41.251905 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.269152 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.269180 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.269188 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.269197 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.269206 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:41Z","lastTransitionTime":"2025-11-23T06:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.371099 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.371132 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.371140 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.371149 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.371157 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:41Z","lastTransitionTime":"2025-11-23T06:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.472873 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.472898 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.472909 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.472919 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.472927 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:41Z","lastTransitionTime":"2025-11-23T06:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.575094 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.575117 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.575124 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.575134 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.575142 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:41Z","lastTransitionTime":"2025-11-23T06:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.677120 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.677155 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.677164 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.677179 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.677192 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:41Z","lastTransitionTime":"2025-11-23T06:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.779206 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.779246 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.779257 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.779272 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.779281 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:41Z","lastTransitionTime":"2025-11-23T06:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.881033 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.881071 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.881079 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.881096 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.881106 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:41Z","lastTransitionTime":"2025-11-23T06:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.985155 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.985188 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.985198 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.985209 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:41 crc kubenswrapper[4681]: I1123 06:45:41.985217 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:41Z","lastTransitionTime":"2025-11-23T06:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.087122 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.087152 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.087160 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.087172 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.087180 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:42Z","lastTransitionTime":"2025-11-23T06:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.189165 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.189194 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.189202 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.189211 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.189218 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:42Z","lastTransitionTime":"2025-11-23T06:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.251880 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.251902 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.252052 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:45:42 crc kubenswrapper[4681]: E1123 06:45:42.252138 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:45:42 crc kubenswrapper[4681]: E1123 06:45:42.252280 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.252292 4681 scope.go:117] "RemoveContainer" containerID="10bb81ddcec9ee17f50d5acae6e282ca44420543fc8ea84ae1ced5c491e1dd4e" Nov 23 06:45:42 crc kubenswrapper[4681]: E1123 06:45:42.252340 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.291447 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.291512 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.291524 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.291537 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.291544 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:42Z","lastTransitionTime":"2025-11-23T06:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.393111 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.393292 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.393300 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.393314 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.393322 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:42Z","lastTransitionTime":"2025-11-23T06:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.495368 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.495410 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.495419 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.495437 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.495447 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:42Z","lastTransitionTime":"2025-11-23T06:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.580069 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l6bqb_1abfb530-b7ac-4724-8e43-d87ef92f1949/ovnkube-controller/2.log" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.582151 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" event={"ID":"1abfb530-b7ac-4724-8e43-d87ef92f1949","Type":"ContainerStarted","Data":"1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72"} Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.582527 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.592733 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.596860 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.596895 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.596905 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.596920 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.596929 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:42Z","lastTransitionTime":"2025-11-23T06:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.606908 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abfb530-b7ac-4724-8e43-d87ef92f1949\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e662c47e21ad4fc3f1091e8d53999578f1921da
dfcbc980c09239a967fb1f72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bb81ddcec9ee17f50d5acae6e282ca44420543fc8ea84ae1ced5c491e1dd4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:45:13Z\\\",\\\"message\\\":\\\"7594bb65-e742-44b3-a975-d639b1128be5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1123 06:45:13.860180 6266 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-wh4gt\\\\nI1123 06:45:13.860186 6266 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-wh4gt\\\\nI1123 06:45:13.860184 6266 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-machine-webhook\\\\\\\"}\\\\nI1123 06:45:13.860192 6266 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-wh4gt in node crc\\\\nI1123 06:45:13.860195 6266 services_controller.go:360] Finished syncing service machine-api-operator-machine-webhook on namespace openshift-machine-api for network=default : 1.532753ms\\\\nI1123 06:45:13.860202 6266 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1123 06:45:13.860206 6266 services_controller.go:356] Processing sync for service openshift-dns/dns-default for network=default\\\\nI1123 06:45:13.860212 6266 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1123 06:45:13.860218 6266 
obj_ret\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l6bqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.617529 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.625206 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l7wvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095e645f-7b07-4702-87f0-f3b9a6197d9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://730b2d1bf4245510d9c2ab933abbf82d3c7e7d172e6f382b691db27a598fc8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrq5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l7wvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.632792 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a373ee-ee00-4ed1-b208-095d302ac31b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4004d43474bcbff07bbc45d42feefffb8f41e26f0d34bcec50b9c17ea8795a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d891ac3bcc1513a349fc37f6cceedb64e89b41f92dc098ac6c0ffc074e6cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c007b94529ec5fe2c0606433986e94de3bf63772bd1291e55b4d06080471393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83eb8cfb97a65f9516f9973a491cd60aacd32bf59681f45f60402f8bbf6b1c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb8cfb97a65f9516f9973a491cd60aacd32bf59681f45f60402f8bbf6b1c95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.641110 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.649640 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.657715 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.664995 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jcxvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8b960e-690a-4772-8373-bce89d00cb17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae5de3ab9fa4043cfbb22d534f986fd7c9318c8e1a7f249cfe50b07f32f04ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2d22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jcxvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.674516 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"842356bd-1174-4109-a183-b368c16f3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a30a93104ef4dbbe5288684d627e4f4ca7e4477edf99c2012169a7c086900352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b762cf0aee0bbca586dc835d6be4a69921f2f0d6a11262bbea1df14352fd3822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:00Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jvlq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.686512 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kv72z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eef1a94-78a8-4389-b1fe-2db3786ba043\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnhcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnhcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kv72z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-23T06:45:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.696228 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.698437 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.698480 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.698490 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.698504 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.698513 4681 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:42Z","lastTransitionTime":"2025-11-23T06:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.705220 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.713125 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"539dc58c-e752-43c8-bdef-af87528b76f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10301d5307825891afb0c5a8a37015569d3275b9fdbb69135656db11a5cd6ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wh4gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.721542 4681 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2lhx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fe493c1777c5f063e67eac13f4c3417da679d1376c258907c8008b544bdbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:45:35Z\\\",\\\"message\\\":\\\"2025-11-23T06:44:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_afbcfa5d-64e7-4204-9635-6f73dc5640b0\\\\n2025-11-23T06:44:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_afbcfa5d-64e7-4204-9635-6f73dc5640b0 to /host/opt/cni/bin/\\\\n2025-11-23T06:44:50Z [verbose] multus-daemon started\\\\n2025-11-23T06:44:50Z [verbose] Readiness Indicator file check\\\\n2025-11-23T06:45:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8k44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2lhx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.731803 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83e4c166-3ace-4773-86cd-fe2bdd216426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039e197d1ef78785cbcf351f1ec80ef09f3c9e61504351fa7a2daa5d1e298bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qgr2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.739010 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1658272b-fc8f-4c75-8537-6e1b863b0f82\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d803964c3c48bbbb674ce8c9ff214415b7f3cb5f545daf2dbe6463c9191e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4502af61097d8c6788f280066fd38f6a94e6aa9ab63b3086f5e5a8a7daaddd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4502af61097d8c6788f280066fd38f6a94e6aa9ab63b3086f5e5a8a7daaddd41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.747146 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.801191 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.801223 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.801231 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.801246 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.801255 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:42Z","lastTransitionTime":"2025-11-23T06:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.904253 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.904290 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.904301 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.904316 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:42 crc kubenswrapper[4681]: I1123 06:45:42.904328 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:42Z","lastTransitionTime":"2025-11-23T06:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.006185 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.006215 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.006224 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.006251 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.006260 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:43Z","lastTransitionTime":"2025-11-23T06:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.108394 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.108422 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.108430 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.108443 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.108451 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:43Z","lastTransitionTime":"2025-11-23T06:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.210173 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.210207 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.210215 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.210227 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.210236 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:43Z","lastTransitionTime":"2025-11-23T06:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.251572 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:45:43 crc kubenswrapper[4681]: E1123 06:45:43.251679 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.261389 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a373ee-ee00-4ed1-b208-095d302ac31b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4004d43474bcbff07bbc45d42feefffb8f41e26f0d34bcec50b9c17ea8795a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d891ac3bcc1513a349fc37f6cceedb64e89b41f92dc098ac6c0ffc074e6cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c007b94529ec5fe2c0606433986e94de3bf63772bd1291e55b4d06080471393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83eb8cfb97a65f9516f9973a491cd60aacd32bf59681f45f60402f8bbf6b1c95\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb8cfb97a65f9516f9973a491cd60aacd32bf59681f45f60402f8bbf6b1c95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.270224 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:43Z is after 2025-08-24T17:21:41Z" Nov 23 
06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.277212 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l7wvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095e645f-7b07-4702-87f0-f3b9a6197d9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://730b2d1bf4245510d9c2ab933abbf82d3c7e7d172e6f382b691db27a598fc8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrq5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l7wvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.286455 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.292822 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jcxvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8b960e-690a-4772-8373-bce89d00cb17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae5de3ab9fa4043cfbb22d534f986fd7c9318c8e1a7f249cfe50b07f32f04ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2d22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jcxvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.300412 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"842356bd-1174-4109-a183-b368c16f3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a30a93104ef4dbbe5288684d627e4f4ca7e4477edf99c2012169a7c086900352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b762cf0aee0bbca586dc835d6be4a69921f2f0d6a11262bbea1df14352fd3822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jvlq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:43Z is after 2025-08-24T17:21:41Z" Nov 23 
06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.306976 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kv72z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eef1a94-78a8-4389-b1fe-2db3786ba043\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnhcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnhcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kv72z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.311777 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.311802 4681 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.311811 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.311822 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.311830 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:43Z","lastTransitionTime":"2025-11-23T06:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.318656 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 
1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.327955 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.335433 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.346285 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2lhx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fe493c1777c5f063e67eac13f4c3417da679d1376c258907c8008b544bdbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:45:35Z\\\",\\\"message\\\":\\\"2025-11-23T06:44:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_afbcfa5d-64e7-4204-9635-6f73dc5640b0\\\\n2025-11-23T06:44:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_afbcfa5d-64e7-4204-9635-6f73dc5640b0 to /host/opt/cni/bin/\\\\n2025-11-23T06:44:50Z [verbose] multus-daemon started\\\\n2025-11-23T06:44:50Z [verbose] Readiness Indicator file check\\\\n2025-11-23T06:45:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8k44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2lhx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.356486 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83e4c166-3ace-4773-86cd-fe2bdd216426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039e197d1ef78785cbcf351f1ec80ef09f3c9e61504351fa7a2daa5d1e298bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qgr2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.363176 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1658272b-fc8f-4c75-8537-6e1b863b0f82\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d803964c3c48bbbb674ce8c9ff214415b7f3cb5f545daf2dbe6463c9191e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4502af61097d8c6788f280066fd38f6a94e6aa9ab63b3086f5e5a8a7daaddd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4502af61097d8c6788f280066fd38f6a94e6aa9ab63b3086f5e5a8a7daaddd41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.370783 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.377909 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"539dc58c-e752-43c8-bdef-af87528b76f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10301d5307825891afb0c5a8a37015569d3275b9fdbb69135656db11a5cd6ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wh4gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.389414 4681 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abfb530-b7ac-4724-8e43-d87ef92f1949\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bb81ddcec9ee17f50d5acae6e282ca44420543fc8ea84ae1ced5c491e1dd4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:45:13Z\\\",\\\"message\\\":\\\"7594bb65-e742-44b3-a975-d639b1128be5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1123 06:45:13.860180 6266 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-wh4gt\\\\nI1123 06:45:13.860186 6266 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-wh4gt\\\\nI1123 06:45:13.860184 6266 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-machine-webhook\\\\\\\"}\\\\nI1123 06:45:13.860192 6266 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-wh4gt in node crc\\\\nI1123 06:45:13.860195 6266 services_controller.go:360] Finished syncing service machine-api-operator-machine-webhook on namespace openshift-machine-api for network=default : 1.532753ms\\\\nI1123 06:45:13.860202 6266 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1123 06:45:13.860206 6266 services_controller.go:356] Processing sync for service openshift-dns/dns-default for network=default\\\\nI1123 06:45:13.860212 6266 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1123 06:45:13.860218 6266 
obj_ret\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l6bqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.397001 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.404490 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.413599 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.413628 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.413637 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.413649 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.413657 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:43Z","lastTransitionTime":"2025-11-23T06:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.515552 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.515588 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.515598 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.515614 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.515625 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:43Z","lastTransitionTime":"2025-11-23T06:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.586429 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l6bqb_1abfb530-b7ac-4724-8e43-d87ef92f1949/ovnkube-controller/3.log" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.586960 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l6bqb_1abfb530-b7ac-4724-8e43-d87ef92f1949/ovnkube-controller/2.log" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.589102 4681 generic.go:334] "Generic (PLEG): container finished" podID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerID="1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72" exitCode=1 Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.589140 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" event={"ID":"1abfb530-b7ac-4724-8e43-d87ef92f1949","Type":"ContainerDied","Data":"1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72"} Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.589188 4681 scope.go:117] "RemoveContainer" containerID="10bb81ddcec9ee17f50d5acae6e282ca44420543fc8ea84ae1ced5c491e1dd4e" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.589666 4681 scope.go:117] "RemoveContainer" containerID="1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72" Nov 23 06:45:43 crc kubenswrapper[4681]: E1123 06:45:43.589832 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-l6bqb_openshift-ovn-kubernetes(1abfb530-b7ac-4724-8e43-d87ef92f1949)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.598861 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.608142 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jcxvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8b960e-690a-4772-8373-bce89d00cb17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae5de3ab9fa4043cfbb22d534f986fd7c9318c8e1a7f249cfe50b07f32f04ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2d22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jcxvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.617363 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.617389 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.617398 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.617411 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.617420 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:43Z","lastTransitionTime":"2025-11-23T06:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.617592 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"842356bd-1174-4109-a183-b368c16f3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a30a93104ef4dbbe5288684d627e4f4ca7e4477edf99c2012169a7c086900352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b762cf0aee0bbca586dc835d6be4a69921f2f0d6a11262bbea1df14352fd3822\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jvlq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.624332 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kv72z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eef1a94-78a8-4389-b1fe-2db3786ba043\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnhcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnhcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kv72z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.633868 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.642412 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.650858 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.658977 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2lhx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fe493c1777c5f063e67eac13f4c3417da679d1376c258907c8008b544bdbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:45:35Z\\\",\\\"message\\\":\\\"2025-11-23T06:44:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_afbcfa5d-64e7-4204-9635-6f73dc5640b0\\\\n2025-11-23T06:44:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_afbcfa5d-64e7-4204-9635-6f73dc5640b0 to /host/opt/cni/bin/\\\\n2025-11-23T06:44:50Z [verbose] multus-daemon started\\\\n2025-11-23T06:44:50Z [verbose] Readiness Indicator file check\\\\n2025-11-23T06:45:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8k44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2lhx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.668936 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83e4c166-3ace-4773-86cd-fe2bdd216426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039e197d1ef78785cbcf351f1ec80ef09f3c9e61504351fa7a2daa5d1e298bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qgr2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.676007 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1658272b-fc8f-4c75-8537-6e1b863b0f82\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d803964c3c48bbbb674ce8c9ff214415b7f3cb5f545daf2dbe6463c9191e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4502af61097d8c6788f280066fd38f6a94e6aa9ab63b3086f5e5a8a7daaddd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4502af61097d8c6788f280066fd38f6a94e6aa9ab63b3086f5e5a8a7daaddd41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.683703 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.691023 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"539dc58c-e752-43c8-bdef-af87528b76f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10301d5307825891afb0c5a8a37015569d3275b9fdbb69135656db11a5cd6ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wh4gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.703149 4681 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abfb530-b7ac-4724-8e43-d87ef92f1949\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bb81ddcec9ee17f50d5acae6e282ca44420543fc8ea84ae1ced5c491e1dd4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:45:13Z\\\",\\\"message\\\":\\\"7594bb65-e742-44b3-a975-d639b1128be5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1123 06:45:13.860180 6266 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-wh4gt\\\\nI1123 06:45:13.860186 6266 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-wh4gt\\\\nI1123 06:45:13.860184 6266 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-machine-webhook\\\\\\\"}\\\\nI1123 06:45:13.860192 6266 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-wh4gt in node crc\\\\nI1123 06:45:13.860195 6266 services_controller.go:360] Finished syncing service machine-api-operator-machine-webhook on namespace openshift-machine-api for network=default : 1.532753ms\\\\nI1123 06:45:13.860202 6266 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1123 06:45:13.860206 6266 services_controller.go:356] Processing sync for service openshift-dns/dns-default for network=default\\\\nI1123 06:45:13.860212 6266 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1123 06:45:13.860218 6266 
obj_ret\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:45:42Z\\\",\\\"message\\\":\\\"ice\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}\\\\nI1123 06:45:42.859681 6673 services_controller.go:360] Finished syncing service kube-controller-manager on namespace openshift-kube-controller-manager for network=default : 2.559919ms\\\\nI1123 06:45:42.859693 6673 services_controller.go:356] Processing sync for service openshift-console-operator/metrics for network=default\\\\nI1123 06:45:42.859720 6673 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1123 06:45:42.859598 6673 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l6bqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.711346 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8
a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.719800 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.719866 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.719881 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:43 crc 
kubenswrapper[4681]: I1123 06:45:43.719902 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.719919 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:43Z","lastTransitionTime":"2025-11-23T06:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.720787 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.729254 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a373ee-ee00-4ed1-b208-095d302ac31b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4004d43474bcbff07bbc45d42feefffb8f41e26f0d34bcec50b9c17ea8795a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d891ac3bcc1513a349fc37f6cceedb64e89b41f92dc098ac6c0ffc074e6cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c007b94529ec5fe2c0606433986e94de3bf63772bd1291e55b4d06080471393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83eb8cfb97a65f9516f9973a491cd60aacd32bf59681f45f60402f8bbf6b1c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb8cfb97a65f9516f9973a491cd60aacd32bf59681f45f60402f8bbf6b1c95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.736944 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.743632 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l7wvz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"095e645f-7b07-4702-87f0-f3b9a6197d9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://730b2d1bf4245510d9c2ab933abbf82d3c7e7d172e6f382b691db27a598fc8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrq5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l7wvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.821175 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.821199 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.821209 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.821222 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.821232 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:43Z","lastTransitionTime":"2025-11-23T06:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.923585 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.923639 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.923651 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.923660 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:43 crc kubenswrapper[4681]: I1123 06:45:43.923668 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:43Z","lastTransitionTime":"2025-11-23T06:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.025527 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.025673 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.025777 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.025886 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.025974 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:44Z","lastTransitionTime":"2025-11-23T06:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.127793 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.127820 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.127828 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.127845 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.127854 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:44Z","lastTransitionTime":"2025-11-23T06:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.229662 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.229689 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.229699 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.229713 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.229722 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:44Z","lastTransitionTime":"2025-11-23T06:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.251289 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.251322 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.251479 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:45:44 crc kubenswrapper[4681]: E1123 06:45:44.251596 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:45:44 crc kubenswrapper[4681]: E1123 06:45:44.251687 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:45:44 crc kubenswrapper[4681]: E1123 06:45:44.251763 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.331821 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.331859 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.331868 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.331880 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.331888 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:44Z","lastTransitionTime":"2025-11-23T06:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.433668 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.433693 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.433702 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.433711 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.433718 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:44Z","lastTransitionTime":"2025-11-23T06:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.535894 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.535922 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.535931 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.535942 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.535952 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:44Z","lastTransitionTime":"2025-11-23T06:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.592393 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l6bqb_1abfb530-b7ac-4724-8e43-d87ef92f1949/ovnkube-controller/3.log" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.594890 4681 scope.go:117] "RemoveContainer" containerID="1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72" Nov 23 06:45:44 crc kubenswrapper[4681]: E1123 06:45:44.595016 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-l6bqb_openshift-ovn-kubernetes(1abfb530-b7ac-4724-8e43-d87ef92f1949)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.605325 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.614897 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.624183 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.632540 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.637331 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.637454 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.637535 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.637636 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.637716 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:44Z","lastTransitionTime":"2025-11-23T06:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.639610 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jcxvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8b960e-690a-4772-8373-bce89d00cb17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae5de3ab9fa4043cfbb22d534f986fd7c9318c8e1a7f249cfe50b07f32f04ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2d22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jcxvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.646798 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"842356bd-1174-4109-a183-b368c16f3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a30a93104ef4dbbe5288684d627e4f4ca7e4477edf99c2012169a7c086900352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b762cf0aee0bbca586dc835d6be4a69921f2f0d6a11262bbea1df14352fd3822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jvlq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:44Z is after 2025-08-24T17:21:41Z" Nov 23 
06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.657725 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kv72z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eef1a94-78a8-4389-b1fe-2db3786ba043\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnhcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnhcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kv72z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.665017 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1658272b-fc8f-4c75-8537-6e1b863b0f82\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d803964c3c48bbbb674ce8c9ff214415b7f3cb5f545daf2dbe6463c9191e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4502af61097d8c6788f280066fd38f6a94e6aa9ab63b3086f5e5a8a7daaddd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4502af61097d8c6788f280066fd38f6a94e6aa9ab63b3086f5e5a8a7daaddd41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.672614 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.680232 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"539dc58c-e752-43c8-bdef-af87528b76f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10301d5307825891afb0c5a8a37015569d3275b9fdbb69135656db11a5cd6ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wh4gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.688598 4681 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2lhx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fe493c1777c5f063e67eac13f4c3417da679d1376c258907c8008b544bdbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:45:35Z\\\",\\\"message\\\":\\\"2025-11-23T06:44:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_afbcfa5d-64e7-4204-9635-6f73dc5640b0\\\\n2025-11-23T06:44:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_afbcfa5d-64e7-4204-9635-6f73dc5640b0 to /host/opt/cni/bin/\\\\n2025-11-23T06:44:50Z [verbose] multus-daemon started\\\\n2025-11-23T06:44:50Z [verbose] Readiness Indicator file check\\\\n2025-11-23T06:45:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8k44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2lhx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.698643 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83e4c166-3ace-4773-86cd-fe2bdd216426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039e197d1ef78785cbcf351f1ec80ef09f3c9e61504351fa7a2daa5d1e298bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qgr2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.706762 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.714553 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.726516 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abfb530-b7ac-4724-8e43-d87ef92f1949\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e662c47e21ad4fc3f1091e8d53999578f1921da
dfcbc980c09239a967fb1f72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:45:42Z\\\",\\\"message\\\":\\\"ice\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}\\\\nI1123 06:45:42.859681 6673 services_controller.go:360] Finished syncing service kube-controller-manager on namespace openshift-kube-controller-manager for network=default : 2.559919ms\\\\nI1123 06:45:42.859693 6673 services_controller.go:356] Processing sync for service openshift-console-operator/metrics for network=default\\\\nI1123 06:45:42.859720 6673 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1123 06:45:42.859598 6673 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l6bqb_openshift-ovn-kubernetes(1abfb530-b7ac-4724-8e43-d87ef92f1949)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l6bqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.734267 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a373ee-ee00-4ed1-b208-095d302ac31b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4004d43474bcbff07bbc45d42feefffb8f41e26f0d34bcec50b9c17ea8795a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d891ac3bcc1513a349fc37f6cceedb64e89b41f92dc098ac6c0ffc074e6cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c007b94529ec5fe2c0606433986e94de3bf63772bd1291e55b4d06080471393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83eb8cfb97a65f9516f9973a491cd60aacd32bf59681f45f60402f8bbf6b1c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb8cfb97a65f9516f9973a491cd60aacd32bf59681f45f60402f8bbf6b1c95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.739728 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.739750 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.739758 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.739771 4681 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.739780 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:44Z","lastTransitionTime":"2025-11-23T06:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.743140 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.749925 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l7wvz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"095e645f-7b07-4702-87f0-f3b9a6197d9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://730b2d1bf4245510d9c2ab933abbf82d3c7e7d172e6f382b691db27a598fc8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrq5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l7wvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.841696 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.841735 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.841748 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.841762 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.841774 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:44Z","lastTransitionTime":"2025-11-23T06:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.944059 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.944105 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.944114 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.944129 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:44 crc kubenswrapper[4681]: I1123 06:45:44.944138 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:44Z","lastTransitionTime":"2025-11-23T06:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.051798 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.051824 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.051835 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.051857 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.051865 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:45Z","lastTransitionTime":"2025-11-23T06:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.154182 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.154215 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.154224 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.154235 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.154244 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:45Z","lastTransitionTime":"2025-11-23T06:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.251869 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:45:45 crc kubenswrapper[4681]: E1123 06:45:45.251994 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.255602 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.255630 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.255639 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.255677 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.255686 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:45Z","lastTransitionTime":"2025-11-23T06:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.357549 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.357604 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.357614 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.357625 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.357632 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:45Z","lastTransitionTime":"2025-11-23T06:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.459115 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.459148 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.459156 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.459170 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.459179 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:45Z","lastTransitionTime":"2025-11-23T06:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.561088 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.561117 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.561125 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.561135 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.561142 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:45Z","lastTransitionTime":"2025-11-23T06:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.662522 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.662549 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.662558 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.662567 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.662576 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:45Z","lastTransitionTime":"2025-11-23T06:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.764475 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.764499 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.764507 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.764518 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.764527 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:45Z","lastTransitionTime":"2025-11-23T06:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.866544 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.866573 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.866582 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.866593 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.866601 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:45Z","lastTransitionTime":"2025-11-23T06:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.968296 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.968332 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.968341 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.968355 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:45 crc kubenswrapper[4681]: I1123 06:45:45.968364 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:45Z","lastTransitionTime":"2025-11-23T06:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.002526 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:45:46 crc kubenswrapper[4681]: E1123 06:45:46.002684 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:50.002663826 +0000 UTC m=+147.072173073 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.070375 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.070413 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.070421 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.070435 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.070446 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:46Z","lastTransitionTime":"2025-11-23T06:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.103623 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.103652 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.103672 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.103692 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:45:46 crc kubenswrapper[4681]: E1123 06:45:46.103749 4681 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:45:46 crc kubenswrapper[4681]: E1123 06:45:46.103764 4681 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:45:46 crc kubenswrapper[4681]: E1123 06:45:46.103789 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:46:50.103778423 +0000 UTC m=+147.173287660 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:45:46 crc kubenswrapper[4681]: E1123 06:45:46.103819 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:46:50.103795906 +0000 UTC m=+147.173305143 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:45:46 crc kubenswrapper[4681]: E1123 06:45:46.103817 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:45:46 crc kubenswrapper[4681]: E1123 06:45:46.103840 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:45:46 crc kubenswrapper[4681]: E1123 06:45:46.103859 4681 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:45:46 crc kubenswrapper[4681]: E1123 06:45:46.103874 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:45:46 crc kubenswrapper[4681]: E1123 06:45:46.103893 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:45:46 crc kubenswrapper[4681]: E1123 06:45:46.103903 4681 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:45:46 crc kubenswrapper[4681]: E1123 06:45:46.103881 4681 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 06:46:50.103875816 +0000 UTC m=+147.173385054 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:45:46 crc kubenswrapper[4681]: E1123 06:45:46.103943 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 06:46:50.103932604 +0000 UTC m=+147.173441851 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.172496 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.172523 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.172531 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.172564 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.172572 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:46Z","lastTransitionTime":"2025-11-23T06:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.251544 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.251607 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:45:46 crc kubenswrapper[4681]: E1123 06:45:46.251660 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.251668 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:45:46 crc kubenswrapper[4681]: E1123 06:45:46.251740 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:45:46 crc kubenswrapper[4681]: E1123 06:45:46.251902 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.274169 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.274206 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.274217 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.274231 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.274240 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:46Z","lastTransitionTime":"2025-11-23T06:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.375877 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.375926 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.375936 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.375953 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.375961 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:46Z","lastTransitionTime":"2025-11-23T06:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.477547 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.477584 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.477592 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.477605 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.477614 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:46Z","lastTransitionTime":"2025-11-23T06:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.578839 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.578878 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.578888 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.578914 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.578923 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:46Z","lastTransitionTime":"2025-11-23T06:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.660084 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.660141 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.660149 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.660161 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.660169 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:46Z","lastTransitionTime":"2025-11-23T06:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:46 crc kubenswrapper[4681]: E1123 06:45:46.674352 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:46Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.677858 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.677901 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.677911 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.677927 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.677937 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:46Z","lastTransitionTime":"2025-11-23T06:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:46 crc kubenswrapper[4681]: E1123 06:45:46.686995 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:46Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.689635 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.689659 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
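Every retry in this burst dies the same way: the kubelet's status patch is rejected because the API server cannot call the node.network-node-identity.openshift.io validating webhook at https://127.0.0.1:9743, whose serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2025-11-23. A minimal diagnostic sketch in Python, assuming it runs on the node itself so 127.0.0.1:9743 is reachable and that the openssl CLI is installed (assumptions of this sketch, not anything the log states), reproduces the same x509 failure and prints the served certificate's validity window:

#!/usr/bin/env python3
# Sketch: confirm the expired serving certificate behind the webhook failures above.
# Assumptions: run on the node (127.0.0.1:9743 reachable); openssl CLI available.
import ssl
import socket
import subprocess

HOST, PORT = "127.0.0.1", 9743  # endpoint taken from the Post URL in the log

# 1. A verifying handshake should fail exactly as the webhook client did.
ctx = ssl.create_default_context()
try:
    with socket.create_connection((HOST, PORT), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST):
            print("handshake unexpectedly succeeded")
except ssl.SSLCertVerificationError as exc:
    # Expect "certificate has expired" (X509 verify code 10), matching the log.
    print(f"verification failed: {exc.verify_message} (code {exc.verify_code})")

# 2. Fetch the certificate without verification and show its validity window.
pem = ssl.get_server_certificate((HOST, PORT))
dates = subprocess.run(
    ["openssl", "x509", "-noout", "-startdate", "-enddate"],
    input=pem.encode(), capture_output=True, check=True,
)
print(dates.stdout.decode().strip())
# Given the log, expect: notAfter=Aug 24 17:21:41 2025 GMT

On a node with a freshly rotated certificate, step 1 completes the handshake instead of raising, which is the quickest way to confirm a fix took effect.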
event="NodeHasNoDiskPressure" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.689668 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.689683 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.689693 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:46Z","lastTransitionTime":"2025-11-23T06:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:46 crc kubenswrapper[4681]: E1123 06:45:46.697944 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:46Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.700262 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.700285 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.700295 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.700308 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.700316 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:46Z","lastTransitionTime":"2025-11-23T06:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:46 crc kubenswrapper[4681]: E1123 06:45:46.708146 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:46Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.710560 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.710606 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.710616 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.710626 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.710633 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:46Z","lastTransitionTime":"2025-11-23T06:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:46 crc kubenswrapper[4681]: E1123 06:45:46.718273 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:46Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:46 crc kubenswrapper[4681]: E1123 06:45:46.718375 4681 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.719281 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
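The burst above is the kubelet exhausting its fixed retry budget for one status sync: five consecutive "Error updating node status, will retry" failures, then "update node status exceeds retry count", after which the next sync begins and the pattern repeats a few hundred milliseconds later. A throwaway filter, assuming the journal text is piped to it on stdin (for example from journalctl -u kubelet; the patterns are illustrative, not part of any tool), condenses the repetition into the facts that matter, namely the attempt count, the give-ups, and the certificate window:

#!/usr/bin/env python3
# Sketch: summarize the kubelet's node-status retry storm from journal text on stdin.
import re
import sys

# Patterns grounded in the log lines above.
RETRY = re.compile(r'E\d{4} (\d{2}:\d{2}:\d{2}\.\d+) .*"Error updating node status, will retry"')
GIVEUP = re.compile(r'update node status exceeds retry count')
EXPIRY = re.compile(r'current time (\S+) is after (\S+)"')

attempts, giveups, expiry = [], 0, None
for line in sys.stdin:
    if m := RETRY.search(line):
        attempts.append(m.group(1))   # timestamp of each failed patch attempt
    if GIVEUP.search(line):
        giveups += 1                  # one give-up per exhausted sync
    if m := EXPIRY.search(line):
        expiry = m.groups()           # (node clock, certificate notAfter)

print(f"failed status-patch attempts: {len(attempts)}")
print(f"retry budgets exhausted:      {giveups}")
if expiry:
    print(f"node clock {expiry[0]} vs certificate notAfter {expiry[1]}")

Fed this section, it would report five attempts, one exhausted budget, and the 2025-11-23 clock against the 2025-08-24 notAfter, which together point at certificate rotation (or the node clock) rather than at the CNI message that dominates the log by volume.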
event="NodeHasSufficientMemory" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.719301 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.719311 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.719320 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.719330 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:46Z","lastTransitionTime":"2025-11-23T06:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.821415 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.821479 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.821488 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.821496 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.821503 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:46Z","lastTransitionTime":"2025-11-23T06:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.923721 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.923756 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.923767 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.923777 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.923786 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:46Z","lastTransitionTime":"2025-11-23T06:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
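The record above shows why the node-status PATCH keeps failing: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, so the kubelet exhausts its retries ("update node status exceeds retry count"). A minimal diagnostic sketch, assuming the webhook is still listening on the 127.0.0.1:9743 address taken from the log; the program itself is illustrative and not part of the kubelet:

```go
// checkcert.go: dial the webhook endpoint from the log and print the
// validity window of the serving certificate it presents.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// InsecureSkipVerify lets the handshake complete even though the
	// certificate is expired; we only want to inspect it, not trust it.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		fmt.Println("no peer certificate presented")
		return
	}
	cert := certs[0]
	fmt.Println("subject:   ", cert.Subject)
	fmt.Println("not before:", cert.NotBefore.Format(time.RFC3339))
	fmt.Println("not after: ", cert.NotAfter.Format(time.RFC3339))
	if time.Now().After(cert.NotAfter) {
		fmt.Println("certificate is expired, matching the x509 error in the log")
	}
}
```

The sketch only confirms which endpoint is presenting the stale certificate; on a cluster that has been powered off past its certificate lifetimes, the expectation is that internal certificate rotation catches up once the control plane is running again.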
Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.821415 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.821479 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.821488 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.821496 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.821503 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:46Z","lastTransitionTime":"2025-11-23T06:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.923721 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.923756 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.923767 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.923777 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:46 crc kubenswrapper[4681]: I1123 06:45:46.923786 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:46Z","lastTransitionTime":"2025-11-23T06:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.025622 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.025645 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.025654 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.025664 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.025671 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:47Z","lastTransitionTime":"2025-11-23T06:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.127410 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.127435 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.127445 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.127455 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.127484 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:47Z","lastTransitionTime":"2025-11-23T06:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.229614 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.229639 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.229648 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.229658 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.229667 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:47Z","lastTransitionTime":"2025-11-23T06:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.251388 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:45:47 crc kubenswrapper[4681]: E1123 06:45:47.251601 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043"
Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.331627 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.331661 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.331671 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.331684 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.331695 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:47Z","lastTransitionTime":"2025-11-23T06:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
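Each repeating setters.go:603 record embeds the node's Ready condition as a JSON object, which is what the kubelet is writing into the node status on every loop. A minimal sketch that decodes one payload copied verbatim from the record above; the struct is illustrative and only mirrors the JSON keys, it is not a kubelet type:

```go
// condition.go: decode one "Node became not ready" condition payload.
package main

import (
	"encoding/json"
	"fmt"
)

// nodeCondition mirrors the keys of the condition object in the log.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Payload copied from the setters.go:603 record above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:47Z","lastTransitionTime":"2025-11-23T06:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Printf("type=%s status=%s reason=%s\n", c.Type, c.Status, c.Reason)
	fmt.Println("message:", c.Message)
}
```

The reason field (KubeletNotReady) and message explain the whole loop: the kubelet keeps re-recording the same events and re-setting Ready=False until the network plugin reports ready.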
Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.433690 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.433714 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.433721 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.433732 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.433743 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:47Z","lastTransitionTime":"2025-11-23T06:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.535492 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.535533 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.535542 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.535557 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.535570 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:47Z","lastTransitionTime":"2025-11-23T06:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.636890 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.636925 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.636936 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.636951 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.636959 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:47Z","lastTransitionTime":"2025-11-23T06:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.739007 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.739051 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.739062 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.739076 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.739086 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:47Z","lastTransitionTime":"2025-11-23T06:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.841242 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.841270 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.841278 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.841293 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.841301 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:47Z","lastTransitionTime":"2025-11-23T06:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.943304 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.943337 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.943346 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.943359 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:47 crc kubenswrapper[4681]: I1123 06:45:47.943368 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:47Z","lastTransitionTime":"2025-11-23T06:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.045029 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.045061 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.045069 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.045082 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.045090 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:48Z","lastTransitionTime":"2025-11-23T06:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.146724 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.146759 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.146767 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.146781 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.146791 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:48Z","lastTransitionTime":"2025-11-23T06:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.248311 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.248343 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.248354 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.248383 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.248392 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:48Z","lastTransitionTime":"2025-11-23T06:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.251633 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.251649 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.251660 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
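The three pods above cannot get sandboxes for the same underlying reason the Ready condition keeps flipping to False: nothing has written a CNI configuration into /etc/kubernetes/cni/net.d/ yet. A minimal sketch, using only the directory path taken from the log message, that confirms whether any config (typically a *.conf or *.conflist written by the network provider) is present; the program is illustrative:

```go
// cnicheck.go: list the CNI config directory named in the log message.
package main

import (
	"fmt"
	"os"
)

func main() {
	const dir = "/etc/kubernetes/cni/net.d" // path taken from the log
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read", dir+":", err)
		return
	}
	if len(entries) == 0 {
		fmt.Println(dir, "is empty, which matches NetworkPluginNotReady")
		return
	}
	for _, e := range entries {
		fmt.Println("found:", e.Name())
	}
}
```

Until the network provider drops a config file there, the kubelet keeps the node NotReady and sandbox creation keeps being skipped, which is exactly the loop the remainder of this log shows.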
Nov 23 06:45:48 crc kubenswrapper[4681]: E1123 06:45:48.251716 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:45:48 crc kubenswrapper[4681]: E1123 06:45:48.251792 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:45:48 crc kubenswrapper[4681]: E1123 06:45:48.251837 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.350787 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.350818 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.350827 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.350842 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.350867 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:48Z","lastTransitionTime":"2025-11-23T06:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.452395 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.452424 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.452431 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.452445 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.452454 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:48Z","lastTransitionTime":"2025-11-23T06:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.553962 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.554001 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.554010 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.554024 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.554033 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:48Z","lastTransitionTime":"2025-11-23T06:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.656955 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.656992 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.657025 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.657038 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.657100 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:48Z","lastTransitionTime":"2025-11-23T06:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.758762 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.758866 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.758949 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.759017 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.759082 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:48Z","lastTransitionTime":"2025-11-23T06:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.861391 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.861421 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.861429 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.861442 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.861451 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:48Z","lastTransitionTime":"2025-11-23T06:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.963115 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.963143 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.963151 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.963161 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:48 crc kubenswrapper[4681]: I1123 06:45:48.963170 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:48Z","lastTransitionTime":"2025-11-23T06:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.065057 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.065094 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.065104 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.065135 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.065145 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:49Z","lastTransitionTime":"2025-11-23T06:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.167225 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.167261 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.167270 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.167283 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.167292 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:49Z","lastTransitionTime":"2025-11-23T06:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.250916 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:45:49 crc kubenswrapper[4681]: E1123 06:45:49.251052 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.269125 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.269161 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.269171 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.269185 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.269194 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:49Z","lastTransitionTime":"2025-11-23T06:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.370941 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.370969 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.371006 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.371019 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.371028 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:49Z","lastTransitionTime":"2025-11-23T06:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.472877 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.472906 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.472915 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.472926 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.472936 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:49Z","lastTransitionTime":"2025-11-23T06:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.575247 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.575281 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.575289 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.575304 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.575313 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:49Z","lastTransitionTime":"2025-11-23T06:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.677143 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.677169 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.677179 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.677190 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.677198 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:49Z","lastTransitionTime":"2025-11-23T06:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.779343 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.779367 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.779392 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.779403 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.779410 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:49Z","lastTransitionTime":"2025-11-23T06:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.881516 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.881547 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.881557 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.881569 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.881580 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:49Z","lastTransitionTime":"2025-11-23T06:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.983594 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.983622 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.983630 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.983641 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:49 crc kubenswrapper[4681]: I1123 06:45:49.983650 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:49Z","lastTransitionTime":"2025-11-23T06:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.085774 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.085801 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.085808 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.085817 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.085826 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:50Z","lastTransitionTime":"2025-11-23T06:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.187697 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.187757 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.187766 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.187777 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.187784 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:50Z","lastTransitionTime":"2025-11-23T06:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.251646 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:45:50 crc kubenswrapper[4681]: E1123 06:45:50.251726 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.251832 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:45:50 crc kubenswrapper[4681]: E1123 06:45:50.251886 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.251977 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:45:50 crc kubenswrapper[4681]: E1123 06:45:50.252019 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.289222 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.289269 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.289278 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.289294 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.289306 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:50Z","lastTransitionTime":"2025-11-23T06:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.391389 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.391414 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.391425 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.391436 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.391444 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:50Z","lastTransitionTime":"2025-11-23T06:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.493196 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.493220 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.493238 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.493248 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.493255 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:50Z","lastTransitionTime":"2025-11-23T06:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.594791 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.594816 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.594823 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.594832 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.594841 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:50Z","lastTransitionTime":"2025-11-23T06:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.696708 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.696733 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.696742 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.696751 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.696758 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:50Z","lastTransitionTime":"2025-11-23T06:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.798194 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.798234 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.798248 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.798264 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.798276 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:50Z","lastTransitionTime":"2025-11-23T06:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.900358 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.900563 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.900640 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.900708 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:50 crc kubenswrapper[4681]: I1123 06:45:50.900778 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:50Z","lastTransitionTime":"2025-11-23T06:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.002699 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.002725 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.002733 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.002746 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.002755 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:51Z","lastTransitionTime":"2025-11-23T06:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.104808 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.104834 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.104842 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.104851 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.104859 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:51Z","lastTransitionTime":"2025-11-23T06:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.206781 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.206967 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.207060 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.207142 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.207221 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:51Z","lastTransitionTime":"2025-11-23T06:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.251602 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:45:51 crc kubenswrapper[4681]: E1123 06:45:51.251712 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.309339 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.309387 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.309397 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.309410 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.309420 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:51Z","lastTransitionTime":"2025-11-23T06:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.411959 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.411987 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.411995 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.412006 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.412015 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:51Z","lastTransitionTime":"2025-11-23T06:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.513887 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.513912 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.513936 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.513948 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.513955 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:51Z","lastTransitionTime":"2025-11-23T06:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.615138 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.615185 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.615196 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.615207 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.615216 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:51Z","lastTransitionTime":"2025-11-23T06:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.716249 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.716282 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.716292 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.716301 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.716308 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:51Z","lastTransitionTime":"2025-11-23T06:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.817711 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.817731 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.817738 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.817746 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.817753 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:51Z","lastTransitionTime":"2025-11-23T06:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.919359 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.919564 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.919574 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.919588 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:51 crc kubenswrapper[4681]: I1123 06:45:51.919596 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:51Z","lastTransitionTime":"2025-11-23T06:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.021127 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.021156 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.021163 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.021172 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.021179 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:52Z","lastTransitionTime":"2025-11-23T06:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.123067 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.123101 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.123127 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.123138 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.123146 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:52Z","lastTransitionTime":"2025-11-23T06:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.224147 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.224196 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.224206 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.224218 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.224226 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:52Z","lastTransitionTime":"2025-11-23T06:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.251489 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:45:52 crc kubenswrapper[4681]: E1123 06:45:52.251586 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.251596 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:45:52 crc kubenswrapper[4681]: E1123 06:45:52.251663 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.251688 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:45:52 crc kubenswrapper[4681]: E1123 06:45:52.251781 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.326140 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.326174 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.326184 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.326199 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.326210 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:52Z","lastTransitionTime":"2025-11-23T06:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.428517 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.428550 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.428558 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.428570 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.428579 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:52Z","lastTransitionTime":"2025-11-23T06:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.529764 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.529793 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.529803 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.529813 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.529821 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:52Z","lastTransitionTime":"2025-11-23T06:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.632144 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.632170 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.632178 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.632187 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.632195 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:52Z","lastTransitionTime":"2025-11-23T06:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.734427 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.734455 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.734481 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.734491 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.734498 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:52Z","lastTransitionTime":"2025-11-23T06:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.836249 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.836310 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.836319 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.836334 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.836358 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:52Z","lastTransitionTime":"2025-11-23T06:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.940292 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.940320 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.940327 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.940338 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:52 crc kubenswrapper[4681]: I1123 06:45:52.940345 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:52Z","lastTransitionTime":"2025-11-23T06:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.041942 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.041972 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.041981 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.041993 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.042002 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:53Z","lastTransitionTime":"2025-11-23T06:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.144041 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.144065 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.144073 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.144085 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.144093 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:53Z","lastTransitionTime":"2025-11-23T06:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.245859 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.245907 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.245917 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.245931 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.245941 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:53Z","lastTransitionTime":"2025-11-23T06:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.251134 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:45:53 crc kubenswrapper[4681]: E1123 06:45:53.251231 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.262210 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.271638 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.280403 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.288372 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.295415 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jcxvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8b960e-690a-4772-8373-bce89d00cb17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae5de3ab9fa4043cfbb22d534f986fd7c9318c8e1a7f249cfe50b07f32f04ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2d22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jcxvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.303500 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"842356bd-1174-4109-a183-b368c16f3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a30a93104ef4dbbe5288684d627e4f4ca7e4477edf99c2012169a7c086900352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b762cf0aee0bbca586dc835d6be4a69921f2f0d6a11262bbea1df14352fd3822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:00Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jvlq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.310683 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kv72z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eef1a94-78a8-4389-b1fe-2db3786ba043\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnhcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnhcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kv72z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-23T06:45:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.320505 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1658272b-fc8f-4c75-8537-6e1b863b0f82\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d803964c3c48bbbb674ce8c9ff214415b7f3cb5f545daf2dbe6463c9191e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4502af61097d8c6788f280066fd38f6a94e6aa9ab63b3086f5e5a8a7daaddd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4502af61097d8c6788f280066fd38f6a94e6aa9ab63b3086f5e5a8a7daaddd41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:53Z is after 2025-08-24T17:21:41Z" Nov 23 
06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.329926 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.337028 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"539dc58c-e752-43c8-bdef-af87528b76f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10301d5307825891afb0c5a8a37015569d3275b9fdbb69135656db11a5cd6ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wh4gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.344864 4681 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2lhx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fe493c1777c5f063e67eac13f4c3417da679d1376c258907c8008b544bdbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:45:35Z\\\",\\\"message\\\":\\\"2025-11-23T06:44:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_afbcfa5d-64e7-4204-9635-6f73dc5640b0\\\\n2025-11-23T06:44:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_afbcfa5d-64e7-4204-9635-6f73dc5640b0 to /host/opt/cni/bin/\\\\n2025-11-23T06:44:50Z [verbose] multus-daemon started\\\\n2025-11-23T06:44:50Z [verbose] Readiness Indicator file check\\\\n2025-11-23T06:45:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8k44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2lhx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.347124 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.347168 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.347177 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.347191 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.347200 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:53Z","lastTransitionTime":"2025-11-23T06:45:53Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.355538 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83e4c166-3ace-4773-86cd-fe2bdd216426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039e197d1ef78785cbcf351f1ec80ef09f3c9e61504351fa7a2daa5d1e298bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qgr2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-23T06:45:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.363560 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.371335 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.383432 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abfb530-b7ac-4724-8e43-d87ef92f1949\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:45:42Z\\\",\\\"message\\\":\\\"ice\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}\\\\nI1123 06:45:42.859681 6673 services_controller.go:360] Finished syncing service kube-controller-manager on namespace openshift-kube-controller-manager for network=default : 2.559919ms\\\\nI1123 06:45:42.859693 6673 services_controller.go:356] Processing sync for service openshift-console-operator/metrics for network=default\\\\nI1123 06:45:42.859720 6673 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1123 06:45:42.859598 6673 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l6bqb_openshift-ovn-kubernetes(1abfb530-b7ac-4724-8e43-d87ef92f1949)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l6bqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.390611 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a373ee-ee00-4ed1-b208-095d302ac31b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4004d43474bcbff07bbc45d42feefffb8f41e26f0d34bcec50b9c17ea8795a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d891ac3bcc1513a349fc37f6cceedb64e89b41f92dc098ac6c0ffc074e6cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c007b94529ec5fe2c0606433986e94de3bf63772bd1291e55b4d06080471393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83eb8cfb97a65f9516f9973a491cd60aacd32bf59681f45f60402f8bbf6b1c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb8cfb97a65f9516f9973a491cd60aacd32bf59681f45f60402f8bbf6b1c95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.397757 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.404931 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l7wvz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"095e645f-7b07-4702-87f0-f3b9a6197d9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://730b2d1bf4245510d9c2ab933abbf82d3c7e7d172e6f382b691db27a598fc8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrq5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l7wvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.449450 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.449525 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.449535 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.449552 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.449562 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:53Z","lastTransitionTime":"2025-11-23T06:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.552109 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.552149 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.552160 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.552174 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.552189 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:53Z","lastTransitionTime":"2025-11-23T06:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.654146 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.654431 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.654544 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.654622 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.654679 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:53Z","lastTransitionTime":"2025-11-23T06:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.757006 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.757043 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.757051 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.757063 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.757072 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:53Z","lastTransitionTime":"2025-11-23T06:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.858707 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.858842 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.858946 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.859029 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.859099 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:53Z","lastTransitionTime":"2025-11-23T06:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.960491 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.960622 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.960692 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.960751 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:53 crc kubenswrapper[4681]: I1123 06:45:53.960812 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:53Z","lastTransitionTime":"2025-11-23T06:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.065029 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.065077 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.065088 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.065100 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.065107 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:54Z","lastTransitionTime":"2025-11-23T06:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.166935 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.166963 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.166971 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.166982 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.166992 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:54Z","lastTransitionTime":"2025-11-23T06:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.251761 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.251809 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.251811 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:45:54 crc kubenswrapper[4681]: E1123 06:45:54.251862 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:45:54 crc kubenswrapper[4681]: E1123 06:45:54.252002 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:45:54 crc kubenswrapper[4681]: E1123 06:45:54.252031 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.269055 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.269077 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.269085 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.269095 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.269102 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:54Z","lastTransitionTime":"2025-11-23T06:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.370319 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.370357 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.370365 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.370377 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.370386 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:54Z","lastTransitionTime":"2025-11-23T06:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.471785 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.471807 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.471815 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.471823 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.471830 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:54Z","lastTransitionTime":"2025-11-23T06:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.572856 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.572888 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.572897 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.572906 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.572913 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:54Z","lastTransitionTime":"2025-11-23T06:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.674911 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.674934 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.674942 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.674952 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.674959 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:54Z","lastTransitionTime":"2025-11-23T06:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.776999 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.777039 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.777047 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.777060 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.777070 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:54Z","lastTransitionTime":"2025-11-23T06:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.878172 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.878216 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.878230 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.878247 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.878259 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:54Z","lastTransitionTime":"2025-11-23T06:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.979757 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.979788 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.979796 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.979809 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:54 crc kubenswrapper[4681]: I1123 06:45:54.979817 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:54Z","lastTransitionTime":"2025-11-23T06:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.081320 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.081352 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.081361 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.081373 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.081380 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:55Z","lastTransitionTime":"2025-11-23T06:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.183175 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.183213 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.183223 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.183237 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.183246 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:55Z","lastTransitionTime":"2025-11-23T06:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.251557 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z"
Nov 23 06:45:55 crc kubenswrapper[4681]: E1123 06:45:55.251664 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.284229 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.284253 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.284260 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.284269 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.284278 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:55Z","lastTransitionTime":"2025-11-23T06:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.385498 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.385522 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.385530 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.385540 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.385548 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:55Z","lastTransitionTime":"2025-11-23T06:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.487286 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.487324 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.487334 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.487349 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.487357 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:55Z","lastTransitionTime":"2025-11-23T06:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.589127 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.589156 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.589164 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.589178 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.589186 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:55Z","lastTransitionTime":"2025-11-23T06:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.690816 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.690841 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.690849 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.690860 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.690868 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:55Z","lastTransitionTime":"2025-11-23T06:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.792321 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.792543 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.792615 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.792672 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.792727 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:55Z","lastTransitionTime":"2025-11-23T06:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.895006 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.895038 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.895046 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.895057 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.895065 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:55Z","lastTransitionTime":"2025-11-23T06:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.996445 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.996486 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.996494 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.996505 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:55 crc kubenswrapper[4681]: I1123 06:45:55.996513 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:55Z","lastTransitionTime":"2025-11-23T06:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.097588 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.097618 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.097626 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.097644 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.097653 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:56Z","lastTransitionTime":"2025-11-23T06:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.199171 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.199270 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.199334 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.199386 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.199440 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:56Z","lastTransitionTime":"2025-11-23T06:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.250935 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 06:45:56 crc kubenswrapper[4681]: E1123 06:45:56.251071 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.250971 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:45:56 crc kubenswrapper[4681]: E1123 06:45:56.251229 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.250948 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 06:45:56 crc kubenswrapper[4681]: E1123 06:45:56.251364 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.301631 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.301661 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.301673 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.301685 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.301695 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:56Z","lastTransitionTime":"2025-11-23T06:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.403906 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.403943 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.403954 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.403966 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.403974 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:56Z","lastTransitionTime":"2025-11-23T06:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.505560 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.505596 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.505604 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.505617 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.505624 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:56Z","lastTransitionTime":"2025-11-23T06:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.607348 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.607379 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.607387 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.607398 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.607408 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:56Z","lastTransitionTime":"2025-11-23T06:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.709129 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.709159 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.709168 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.709180 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.709188 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:56Z","lastTransitionTime":"2025-11-23T06:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.811407 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.811445 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.811454 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.811488 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.811499 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:56Z","lastTransitionTime":"2025-11-23T06:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.913539 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.913569 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.913577 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.913589 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:56 crc kubenswrapper[4681]: I1123 06:45:56.913597 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:56Z","lastTransitionTime":"2025-11-23T06:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.014813 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.014844 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.014853 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.014865 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.014873 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:57Z","lastTransitionTime":"2025-11-23T06:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.088920 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.088944 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.088954 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.088980 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.088990 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:57Z","lastTransitionTime":"2025-11-23T06:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 23 06:45:57 crc kubenswrapper[4681]: E1123 06:45:57.096987 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:57Z is after 2025-08-24T17:21:41Z"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.099069 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.099094 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.099103 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.099112 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.099136 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:57Z","lastTransitionTime":"2025-11-23T06:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:57 crc kubenswrapper[4681]: E1123 06:45:57.106375 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.108084 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.108107 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.108114 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.108124 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.108132 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:57Z","lastTransitionTime":"2025-11-23T06:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:57 crc kubenswrapper[4681]: E1123 06:45:57.114978 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.116700 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.116719 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.116726 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.116735 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.116741 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:57Z","lastTransitionTime":"2025-11-23T06:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:57 crc kubenswrapper[4681]: E1123 06:45:57.130840 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.133386 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.133413 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.133422 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.133435 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.133442 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:57Z","lastTransitionTime":"2025-11-23T06:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:57 crc kubenswrapper[4681]: E1123 06:45:57.157542 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:45:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:45:57 crc kubenswrapper[4681]: E1123 06:45:57.157654 4681 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.158700 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
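Every retry above fails identically: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-11-23, so the kubelet exhausts its retries without ever patching its status. A minimal sketch for reading that certificate's validity window directly from the endpoint (illustrative only, not part of the log; assumes Python 3 with the third-party cryptography package on a host that can reach 127.0.0.1:9743):

    import socket
    import ssl
    from cryptography import x509  # assumption: 'cryptography' package is installed

    HOST, PORT = "127.0.0.1", 9743  # webhook endpoint named in the errors above

    # Verification is disabled on purpose: the goal is to read the expired
    # certificate, which strict verification would reject before we see it.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)

    cert = x509.load_der_x509_certificate(der)
    print("subject:  ", cert.subject.rfc4514_string())
    print("notBefore:", cert.not_valid_before)
    print("notAfter: ", cert.not_valid_after)  # per the log, 2025-08-24 17:21:41

Until that serving certificate is renewed, every node-status patch will keep bouncing off the webhook exactly as logged here.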
event="NodeHasSufficientMemory" Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.158741 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.158749 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.158761 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.158769 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:57Z","lastTransitionTime":"2025-11-23T06:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.251329 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:45:57 crc kubenswrapper[4681]: E1123 06:45:57.251421 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.251917 4681 scope.go:117] "RemoveContainer" containerID="1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72" Nov 23 06:45:57 crc kubenswrapper[4681]: E1123 06:45:57.252052 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-l6bqb_openshift-ovn-kubernetes(1abfb530-b7ac-4724-8e43-d87ef92f1949)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.259708 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.259731 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.259739 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.259748 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.259755 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:57Z","lastTransitionTime":"2025-11-23T06:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.259708 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.259731 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.259739 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.259748 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.259755 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:57Z","lastTransitionTime":"2025-11-23T06:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.361304 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.361328 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.361337 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.361346 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.361354 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:57Z","lastTransitionTime":"2025-11-23T06:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.462372 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.462398 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.462407 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.462417 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.462442 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:57Z","lastTransitionTime":"2025-11-23T06:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.564706 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.564734 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.564742 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.564751 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.564759 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:57Z","lastTransitionTime":"2025-11-23T06:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.666344 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.666366 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.666373 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.666383 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.666390 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:57Z","lastTransitionTime":"2025-11-23T06:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.768427 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.768532 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.768605 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.768684 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.768738 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:57Z","lastTransitionTime":"2025-11-23T06:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.869927 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.869949 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.869957 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.869966 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.869992 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:57Z","lastTransitionTime":"2025-11-23T06:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.971209 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.971233 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.971241 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.971250 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:57 crc kubenswrapper[4681]: I1123 06:45:57.971258 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:57Z","lastTransitionTime":"2025-11-23T06:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.072451 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.072504 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.072513 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.072522 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.072528 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:58Z","lastTransitionTime":"2025-11-23T06:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.173705 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.173729 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.173738 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.173748 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.173756 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:58Z","lastTransitionTime":"2025-11-23T06:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.251051 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.251076 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:45:58 crc kubenswrapper[4681]: E1123 06:45:58.251123 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.251134 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:45:58 crc kubenswrapper[4681]: E1123 06:45:58.251198 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:45:58 crc kubenswrapper[4681]: E1123 06:45:58.251250 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.275599 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.275624 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.275633 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.275643 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.275651 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:58Z","lastTransitionTime":"2025-11-23T06:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.377093 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.377115 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.377123 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.377133 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.377140 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:58Z","lastTransitionTime":"2025-11-23T06:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.478328 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.478350 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.478357 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.478367 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.478375 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:58Z","lastTransitionTime":"2025-11-23T06:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.580008 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.580032 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.580040 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.580049 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.580055 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:58Z","lastTransitionTime":"2025-11-23T06:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.681171 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.681197 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.681206 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.681217 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.681226 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:58Z","lastTransitionTime":"2025-11-23T06:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.782934 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.782963 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.782972 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.782982 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.782991 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:58Z","lastTransitionTime":"2025-11-23T06:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.884568 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.884597 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.884607 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.884619 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.884627 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:58Z","lastTransitionTime":"2025-11-23T06:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.986696 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.986721 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.986728 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.986736 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:58 crc kubenswrapper[4681]: I1123 06:45:58.986744 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:58Z","lastTransitionTime":"2025-11-23T06:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.088419 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.088452 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.088482 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.088493 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.088502 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:59Z","lastTransitionTime":"2025-11-23T06:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.190387 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.190424 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.190432 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.190445 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.190455 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:59Z","lastTransitionTime":"2025-11-23T06:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.251107 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:45:59 crc kubenswrapper[4681]: E1123 06:45:59.251238 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.291581 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.291601 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.291608 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.291618 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.291625 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:59Z","lastTransitionTime":"2025-11-23T06:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.392803 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.392827 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.392835 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.392844 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.392851 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:59Z","lastTransitionTime":"2025-11-23T06:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.494328 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.494350 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.494358 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.494369 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.494377 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:59Z","lastTransitionTime":"2025-11-23T06:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.596186 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.596230 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.596238 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.596248 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.596255 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:59Z","lastTransitionTime":"2025-11-23T06:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.697383 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.697401 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.697409 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.697417 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.697424 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:59Z","lastTransitionTime":"2025-11-23T06:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.798664 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.798687 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.798695 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.798703 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.798710 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:59Z","lastTransitionTime":"2025-11-23T06:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.900544 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.900603 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.900613 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.900627 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:45:59 crc kubenswrapper[4681]: I1123 06:45:59.900652 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:45:59Z","lastTransitionTime":"2025-11-23T06:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.002791 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.002814 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.002822 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.002831 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.002838 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:00Z","lastTransitionTime":"2025-11-23T06:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.104534 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.104558 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.104566 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.104578 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.104586 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:00Z","lastTransitionTime":"2025-11-23T06:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.210373 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.210406 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.210415 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.210426 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.210434 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:00Z","lastTransitionTime":"2025-11-23T06:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.251498 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.251563 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:00 crc kubenswrapper[4681]: E1123 06:46:00.251648 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.251667 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:00 crc kubenswrapper[4681]: E1123 06:46:00.251785 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:00 crc kubenswrapper[4681]: E1123 06:46:00.251871 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.312190 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.312212 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.312221 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.312231 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.312240 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:00Z","lastTransitionTime":"2025-11-23T06:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.414037 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.414068 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.414076 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.414089 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.414098 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:00Z","lastTransitionTime":"2025-11-23T06:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.515880 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.515915 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.515923 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.515932 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.515940 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:00Z","lastTransitionTime":"2025-11-23T06:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.617709 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.617742 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.617767 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.617780 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.617790 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:00Z","lastTransitionTime":"2025-11-23T06:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.719287 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.719328 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.719338 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.719365 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.719387 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:00Z","lastTransitionTime":"2025-11-23T06:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.821097 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.821170 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.821179 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.821190 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.821198 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:00Z","lastTransitionTime":"2025-11-23T06:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.922948 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.922975 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.922983 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.923006 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:00 crc kubenswrapper[4681]: I1123 06:46:00.923014 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:00Z","lastTransitionTime":"2025-11-23T06:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.024547 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.024568 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.024576 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.024586 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.024594 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:01Z","lastTransitionTime":"2025-11-23T06:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.126537 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.126567 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.126575 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.126602 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.126611 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:01Z","lastTransitionTime":"2025-11-23T06:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.228391 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.228413 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.228421 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.228430 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.228438 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:01Z","lastTransitionTime":"2025-11-23T06:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.251062 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:46:01 crc kubenswrapper[4681]: E1123 06:46:01.251161 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.330013 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.330045 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.330056 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.330067 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.330075 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:01Z","lastTransitionTime":"2025-11-23T06:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.431891 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.431931 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.431939 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.431950 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.431961 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:01Z","lastTransitionTime":"2025-11-23T06:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.533441 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.533476 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.533486 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.533497 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.533504 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:01Z","lastTransitionTime":"2025-11-23T06:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.635135 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.635161 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.635169 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.635178 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.635186 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:01Z","lastTransitionTime":"2025-11-23T06:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.737019 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.737123 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.737191 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.737260 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.737328 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:01Z","lastTransitionTime":"2025-11-23T06:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.839525 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.839652 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.839718 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.839775 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.839828 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:01Z","lastTransitionTime":"2025-11-23T06:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.941506 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.941540 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.941548 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.941562 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:01 crc kubenswrapper[4681]: I1123 06:46:01.941571 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:01Z","lastTransitionTime":"2025-11-23T06:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.043185 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.043301 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.043367 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.043424 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.043518 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:02Z","lastTransitionTime":"2025-11-23T06:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.145259 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.145408 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.145514 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.145584 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.145641 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:02Z","lastTransitionTime":"2025-11-23T06:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.247192 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.247231 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.247241 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.247255 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.247264 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:02Z","lastTransitionTime":"2025-11-23T06:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.250971 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:02 crc kubenswrapper[4681]: E1123 06:46:02.251121 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.251032 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:02 crc kubenswrapper[4681]: E1123 06:46:02.251292 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.250995 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:02 crc kubenswrapper[4681]: E1123 06:46:02.251482 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.348866 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.348886 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.348893 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.348909 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.348917 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:02Z","lastTransitionTime":"2025-11-23T06:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.450116 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.450147 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.450155 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.450168 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.450176 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:02Z","lastTransitionTime":"2025-11-23T06:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.551820 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.551843 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.551850 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.551861 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.551868 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:02Z","lastTransitionTime":"2025-11-23T06:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.653022 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.653048 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.653055 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.653064 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.653071 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:02Z","lastTransitionTime":"2025-11-23T06:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.754992 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.755013 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.755020 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.755029 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.755036 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:02Z","lastTransitionTime":"2025-11-23T06:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.856953 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.856977 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.856988 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.856998 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.857005 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:02Z","lastTransitionTime":"2025-11-23T06:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.958605 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.958707 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.958797 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.959182 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:02 crc kubenswrapper[4681]: I1123 06:46:02.959216 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:02Z","lastTransitionTime":"2025-11-23T06:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.060657 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.060684 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.060694 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.060706 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.060715 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:03Z","lastTransitionTime":"2025-11-23T06:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.161948 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.161968 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.161976 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.161985 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.161992 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:03Z","lastTransitionTime":"2025-11-23T06:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.251359 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:46:03 crc kubenswrapper[4681]: E1123 06:46:03.251821 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.262815 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.262840 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.262848 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.262858 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.262865 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:03Z","lastTransitionTime":"2025-11-23T06:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.263095 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.263100 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2lhx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4094b291-8b0b-43c0-96e9-f08a9ef53c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fe493c1777c5f063e67eac13f4c3417da679d1376c258907c8008b544bdbb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:45:35Z\\\",\\\"message\\\":\\\"2025-11-23T06:44:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_afbcfa5d-64e7-4204-9635-6f73dc5640b0\\\\n2025-11-23T06:44:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_afbcfa5d-64e7-4204-9635-6f73dc5640b0 to /host/opt/cni/bin/\\\\n2025-11-23T06:44:50Z [verbose] 
multus-daemon started\\\\n2025-11-23T06:44:50Z [verbose] Readiness Indicator file check\\\\n2025-11-23T06:45:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8k44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2lhx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:03Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.272363 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83e4c166-3ace-4773-86cd-fe2bdd216426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039e197d1ef78785cbcf351f1ec80ef09f3c9e61504351fa7a2daa5d1e298bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801f381d344f2aa42a7edddf9af5b4af44baee32eae0c4b176a23e6121c86708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f61009fdb0ae3bfd1f0a7182fd51e496ef36f0f3018b27b968595a8f93a3e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa3b3041022bbdb5e7215db908712f743705fc87019b7efb9ef66860a2d3b33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbba0fd65e440ae607d32a4320a90a40c1ac85ea6cdd55a4b0eaeaffa04aa806\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add793bdf6cc11364f15ce64b78db3314804086fc3b464abcafd1f006d502780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79eda2c75b8833123fcde3824f3456b065f8ac8065a96edefda3785de9112ef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4c7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qgr2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:03Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.279041 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1658272b-fc8f-4c75-8537-6e1b863b0f82\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d803964c3c48bbbb674ce8c9ff214415b7f3cb5f545daf2dbe6463c9191e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4502af61097d8c6788f280066fd38f6a94e6aa9ab63b3086f5e5a8a7daaddd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4502af61097d8c6788f280066fd38f6a94e6aa9ab63b3086f5e5a8a7daaddd41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:03Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.286225 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:03Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.293106 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"539dc58c-e752-43c8-bdef-af87528b76f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10301d5307825891afb0c5a8a37015569d3275b9fdbb69135656db11a5cd6ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpnbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wh4gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:03Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.304872 4681 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abfb530-b7ac-4724-8e43-d87ef92f1949\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:45:42Z\\\",\\\"message\\\":\\\"ice\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}\\\\nI1123 06:45:42.859681 6673 services_controller.go:360] Finished syncing service kube-controller-manager on namespace openshift-kube-controller-manager for network=default : 2.559919ms\\\\nI1123 06:45:42.859693 6673 services_controller.go:356] Processing sync for service openshift-console-operator/metrics for network=default\\\\nI1123 06:45:42.859720 6673 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1123 06:45:42.859598 6673 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l6bqb_openshift-ovn-kubernetes(1abfb530-b7ac-4724-8e43-d87ef92f1949)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcbfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l6bqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:03Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.312478 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b854b-31d2-4c68-9ad6-400b90548877\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5490e8e70f729d053a63bc2f470cb131a278418f378ca4dbdfee61e6495536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://321567ca2e34099e10b1ba1c668aa9060878c42677cb89d1830b4e53f1a67f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c32b4f9c9cb06e6ebb6dd670cbfcd081b5a8b8e301120f6d8c86f6df4d4c83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76b30d190a072013115448d13033dffda1e5d25b1407537a7277027726d9db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:03Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.321989 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575edb497f4f17f170961d9848c67ba62c90331155205502adba409283a9de4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bf4fed77d29c946fcbae36e78d2889c1b17650d6df3666e1f0f53784fe594b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:03Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.330676 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a373ee-ee00-4ed1-b208-095d302ac31b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4004d43474bcbff07bbc45d42feefffb8f41e26f0d34bcec50b9c17ea8795a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20d891ac3bcc1513a349fc37f6cceedb64e89b41f92dc098ac6c0ffc074e6cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c007b94529ec5fe2c0606433986e94de3bf63772bd1291e55b4d06080471393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83eb8cfb97a65f9516f9973a491cd60aacd32bf59681f45f60402f8bbf6b1c95\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83eb8cfb97a65f9516f9973a491cd60aacd32bf59681f45f60402f8bbf6b1c95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:03Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.338059 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86cafc67d4cc7ffeccbb4089e12952e396eeb532c6399e44116154ae411fe923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:03Z is after 2025-08-24T17:21:41Z" Nov 23 
06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.344823 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l7wvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095e645f-7b07-4702-87f0-f3b9a6197d9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://730b2d1bf4245510d9c2ab933abbf82d3c7e7d172e6f382b691db27a598fc8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrq5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l7wvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:03Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.352604 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:03Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.358569 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jcxvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8b960e-690a-4772-8373-bce89d00cb17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae5de3ab9fa4043cfbb22d534f986fd7c9318c8e1a7f249cfe50b07f32f04ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2d22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jcxvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:03Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.365042 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.365067 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.365074 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.365085 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.365093 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:03Z","lastTransitionTime":"2025-11-23T06:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.366368 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"842356bd-1174-4109-a183-b368c16f3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a30a93104ef4dbbe5288684d627e4f4ca7e4477edf99c2012169a7c086900352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b762cf0aee0bbca586dc835d6be4a69921f2f0d6a11262bbea1df14352fd3822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24nlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:00Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jvlq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:03Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.372920 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kv72z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eef1a94-78a8-4389-b1fe-2db3786ba043\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnhcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnhcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kv72z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-23T06:46:03Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.387322 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0ee321-9e16-4c3f-ac01-ab8028fd3966\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0539878fa0390edbdc7c86aef21b9dff26083dfc9dc4ea6e3c97b0dedbd9b44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e552cfd4fec612319aebfda3a9b9f8dafd1b9adab9faaec55c0fec2b5714a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65517742f23c4eab1c86fa85deaf14b3b95029ce9a899a9e8db55f846e105d2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c5628eb925d27cd3c49e8c6e2d4473099a4b78cba21375136d778a64d55c7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362358a297c1d1e1c824f905ab76bce38da517355ccd85141557dd530eeb3c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:44:41Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763880265\\\\\\\\\\\\\\\" (2025-11-23 06:44:24 +0000 UTC to 2025-12-23 06:44:25 +0000 UTC (now=2025-11-23 06:44:41.357059406 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357133 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357142 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1123 06:44:41.357266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1123 06:44:41.357274 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763880275\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763880275\\\\\\\\\\\\\\\" (2025-11-23 05:44:35 +0000 UTC to 2026-11-23 05:44:35 +0000 UTC (now=2025-11-23 06:44:41.357251376 +0000 UTC))\\\\\\\"\\\\nI1123 06:44:41.357281 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1123 06:44:41.357304 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1123 06:44:41.357342 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1123 06:44:41.357375 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1362093559/tls.crt::/tmp/serving-cert-1362093559/tls.key\\\\\\\"\\\\nI1123 06:44:41.357110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1123 06:44:41.357545 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1123 06:44:41.357572 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1123 06:44:41.358565 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://633a5d594f95d5e9f06a0b9f4c42d89a96ea4da867414fa873a60413d67954d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77a9122943fde582e17ecf00d4d76e38986266054411db3c140b56c38082f29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:44:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:44:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:03Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.395697 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a827342b1b2cd86b1885af56ad36aa2ac9fd34a35e35e26d788fee09ae65cc08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:03Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.403415 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:44:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:03Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.466923 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.466977 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.466986 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.467001 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.467009 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:03Z","lastTransitionTime":"2025-11-23T06:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.568586 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.568619 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.568644 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.568659 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.568668 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:03Z","lastTransitionTime":"2025-11-23T06:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.669916 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.669963 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.669972 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.669983 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.669991 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:03Z","lastTransitionTime":"2025-11-23T06:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.772192 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.772225 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.772233 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.772245 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.772254 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:03Z","lastTransitionTime":"2025-11-23T06:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.874067 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.874092 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.874099 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.874109 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.874118 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:03Z","lastTransitionTime":"2025-11-23T06:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.975892 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.975930 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.975938 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.975947 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:03 crc kubenswrapper[4681]: I1123 06:46:03.975954 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:03Z","lastTransitionTime":"2025-11-23T06:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.077504 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.077538 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.077568 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.077581 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.077591 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:04Z","lastTransitionTime":"2025-11-23T06:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.179498 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.179533 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.179542 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.179573 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.179585 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:04Z","lastTransitionTime":"2025-11-23T06:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.251155 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.251209 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:04 crc kubenswrapper[4681]: E1123 06:46:04.251231 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.251245 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:04 crc kubenswrapper[4681]: E1123 06:46:04.251313 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:04 crc kubenswrapper[4681]: E1123 06:46:04.251392 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.281702 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.281732 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.281741 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.281770 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.281779 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:04Z","lastTransitionTime":"2025-11-23T06:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.383265 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.383292 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.383301 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.383310 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.383318 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:04Z","lastTransitionTime":"2025-11-23T06:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.484716 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.484741 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.484749 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.484757 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.484764 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:04Z","lastTransitionTime":"2025-11-23T06:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.587015 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.587041 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.587051 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.587061 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.587084 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:04Z","lastTransitionTime":"2025-11-23T06:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.688643 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.688682 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.688694 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.688709 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.688721 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:04Z","lastTransitionTime":"2025-11-23T06:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.790633 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.790662 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.790688 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.790699 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.790708 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:04Z","lastTransitionTime":"2025-11-23T06:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.892676 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.892703 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.892713 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.892723 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.892730 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:04Z","lastTransitionTime":"2025-11-23T06:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.994418 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.994451 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.994475 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.994488 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:04 crc kubenswrapper[4681]: I1123 06:46:04.994498 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:04Z","lastTransitionTime":"2025-11-23T06:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.096303 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.096333 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.096342 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.096354 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.096363 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:05Z","lastTransitionTime":"2025-11-23T06:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.197509 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.197564 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.197577 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.197590 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.197601 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:05Z","lastTransitionTime":"2025-11-23T06:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.251202 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:46:05 crc kubenswrapper[4681]: E1123 06:46:05.251305 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.299289 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.299308 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.299315 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.299325 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.299333 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:05Z","lastTransitionTime":"2025-11-23T06:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.400571 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.400594 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.400601 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.400611 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.400620 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:05Z","lastTransitionTime":"2025-11-23T06:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.463088 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6eef1a94-78a8-4389-b1fe-2db3786ba043-metrics-certs\") pod \"network-metrics-daemon-kv72z\" (UID: \"6eef1a94-78a8-4389-b1fe-2db3786ba043\") " pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:46:05 crc kubenswrapper[4681]: E1123 06:46:05.463176 4681 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 06:46:05 crc kubenswrapper[4681]: E1123 06:46:05.463209 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6eef1a94-78a8-4389-b1fe-2db3786ba043-metrics-certs podName:6eef1a94-78a8-4389-b1fe-2db3786ba043 nodeName:}" failed. No retries permitted until 2025-11-23 06:47:09.463198794 +0000 UTC m=+166.532708031 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6eef1a94-78a8-4389-b1fe-2db3786ba043-metrics-certs") pod "network-metrics-daemon-kv72z" (UID: "6eef1a94-78a8-4389-b1fe-2db3786ba043") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.502034 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.502057 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.502065 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.502081 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.502122 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:05Z","lastTransitionTime":"2025-11-23T06:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.603556 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.603604 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.603613 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.603623 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.603630 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:05Z","lastTransitionTime":"2025-11-23T06:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.705176 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.705223 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.705233 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.705243 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.705249 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:05Z","lastTransitionTime":"2025-11-23T06:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.807177 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.807220 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.807232 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.807248 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.807261 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:05Z","lastTransitionTime":"2025-11-23T06:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.909297 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.909328 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.909336 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.909348 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:05 crc kubenswrapper[4681]: I1123 06:46:05.909356 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:05Z","lastTransitionTime":"2025-11-23T06:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.011356 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.011396 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.011404 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.011416 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.011424 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:06Z","lastTransitionTime":"2025-11-23T06:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.113564 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.113595 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.113603 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.113613 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.113621 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:06Z","lastTransitionTime":"2025-11-23T06:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.215721 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.215757 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.215771 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.215788 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.215799 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:06Z","lastTransitionTime":"2025-11-23T06:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.250843 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:06 crc kubenswrapper[4681]: E1123 06:46:06.250950 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.250868 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:06 crc kubenswrapper[4681]: E1123 06:46:06.251000 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.250848 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:06 crc kubenswrapper[4681]: E1123 06:46:06.251134 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.317785 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.317814 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.317823 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.317851 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.317860 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:06Z","lastTransitionTime":"2025-11-23T06:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.419572 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.419599 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.419607 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.419617 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.419626 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:06Z","lastTransitionTime":"2025-11-23T06:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.521235 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.521260 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.521270 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.521282 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.521291 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:06Z","lastTransitionTime":"2025-11-23T06:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.623210 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.623235 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.623243 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.623254 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.623262 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:06Z","lastTransitionTime":"2025-11-23T06:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.724596 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.724633 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.724644 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.724655 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.724664 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:06Z","lastTransitionTime":"2025-11-23T06:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.826128 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.826155 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.826163 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.826173 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.826180 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:06Z","lastTransitionTime":"2025-11-23T06:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.927579 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.927611 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.927619 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.927627 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:06 crc kubenswrapper[4681]: I1123 06:46:06.927634 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:06Z","lastTransitionTime":"2025-11-23T06:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.029356 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.029390 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.029399 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.029412 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.029420 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:07Z","lastTransitionTime":"2025-11-23T06:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.131354 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.131385 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.131398 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.131409 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.131420 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:07Z","lastTransitionTime":"2025-11-23T06:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.233612 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.233633 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.233641 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.233652 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.233660 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:07Z","lastTransitionTime":"2025-11-23T06:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.250996 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:46:07 crc kubenswrapper[4681]: E1123 06:46:07.251079 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.334954 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.334996 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.335006 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.335018 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.335024 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:07Z","lastTransitionTime":"2025-11-23T06:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.368065 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.368167 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.368231 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.368291 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.368352 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:07Z","lastTransitionTime":"2025-11-23T06:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:07 crc kubenswrapper[4681]: E1123 06:46:07.376534 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:07Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.378527 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.378554 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
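The root failure in the patch error above is an expired serving certificate on the node.network-node-identity.openshift.io webhook: the TLS handshake rejects it because the current time is after the certificate's NotAfter. A minimal Go sketch of that validity check against a PEM file follows; the file path argument is hypothetical.

    // Validity check that the failed webhook handshake performs implicitly:
    // x509 verification rejects any cert whose NotAfter is in the past.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile(os.Args[1]) // hypothetical: path to a PEM cert
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        now := time.Now().UTC()
        switch {
        case now.After(cert.NotAfter):
            // Same shape as the log: "current time <now> is after <NotAfter>".
            fmt.Printf("certificate has expired: current time %s is after %s\n",
                now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
        case now.Before(cert.NotBefore):
            fmt.Println("certificate is not yet valid")
        default:
            fmt.Println("certificate is within its validity window")
        }
    }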
event="NodeHasNoDiskPressure" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.378563 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.378572 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.378580 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:07Z","lastTransitionTime":"2025-11-23T06:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:07 crc kubenswrapper[4681]: E1123 06:46:07.385895 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:07Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.387895 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.387929 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.387937 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.387947 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.387954 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:07Z","lastTransitionTime":"2025-11-23T06:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:07 crc kubenswrapper[4681]: E1123 06:46:07.395698 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:07Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.397681 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.397711 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.397720 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.397729 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.397736 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:07Z","lastTransitionTime":"2025-11-23T06:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:07 crc kubenswrapper[4681]: E1123 06:46:07.404877 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:07Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.406848 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.406876 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.406888 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.406896 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.406903 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:07Z","lastTransitionTime":"2025-11-23T06:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:07 crc kubenswrapper[4681]: E1123 06:46:07.414047 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a407e0b2-9c3a-4221-8e9d-4076c1148487\\\",\\\"systemUUID\\\":\\\"a4227fe6-6af4-43a0-a77f-7b8ab03d3548\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:07Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:07 crc kubenswrapper[4681]: E1123 06:46:07.414148 4681 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.436230 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.436256 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.436264 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.436273 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.436280 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:07Z","lastTransitionTime":"2025-11-23T06:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.537822 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.537848 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.537857 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.537869 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.537876 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:07Z","lastTransitionTime":"2025-11-23T06:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.639686 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.639716 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.639723 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.639734 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.639743 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:07Z","lastTransitionTime":"2025-11-23T06:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.741563 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.741592 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.741602 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.741616 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.741625 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:07Z","lastTransitionTime":"2025-11-23T06:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.843446 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.843509 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.843519 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.843530 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.843538 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:07Z","lastTransitionTime":"2025-11-23T06:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.945175 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.945225 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.945234 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.945247 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:07 crc kubenswrapper[4681]: I1123 06:46:07.945256 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:07Z","lastTransitionTime":"2025-11-23T06:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.046603 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.046654 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.046662 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.046673 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.046682 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:08Z","lastTransitionTime":"2025-11-23T06:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.150473 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.150507 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.150519 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.150531 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.150541 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:08Z","lastTransitionTime":"2025-11-23T06:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.251004 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.251188 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.251338 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.251378 4681 scope.go:117] "RemoveContainer" containerID="1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72" Nov 23 06:46:08 crc kubenswrapper[4681]: E1123 06:46:08.251385 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:08 crc kubenswrapper[4681]: E1123 06:46:08.251486 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:08 crc kubenswrapper[4681]: E1123 06:46:08.251501 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-l6bqb_openshift-ovn-kubernetes(1abfb530-b7ac-4724-8e43-d87ef92f1949)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" Nov 23 06:46:08 crc kubenswrapper[4681]: E1123 06:46:08.251550 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.251847 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.251864 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.251871 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.251879 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.251887 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:08Z","lastTransitionTime":"2025-11-23T06:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.353291 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.353327 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.353335 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.353344 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.353351 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:08Z","lastTransitionTime":"2025-11-23T06:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.455175 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.455199 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.455208 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.455219 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.455227 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:08Z","lastTransitionTime":"2025-11-23T06:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.557254 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.557305 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.557313 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.557324 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.557332 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:08Z","lastTransitionTime":"2025-11-23T06:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.658692 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.658726 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.658738 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.658752 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.658761 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:08Z","lastTransitionTime":"2025-11-23T06:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.760655 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.760682 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.760690 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.760700 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.760706 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:08Z","lastTransitionTime":"2025-11-23T06:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.861796 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.861826 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.861834 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.861848 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.861857 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:08Z","lastTransitionTime":"2025-11-23T06:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.963380 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.963407 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.963417 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.963427 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:08 crc kubenswrapper[4681]: I1123 06:46:08.963437 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:08Z","lastTransitionTime":"2025-11-23T06:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.065351 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.065375 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.065383 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.065394 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.065403 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:09Z","lastTransitionTime":"2025-11-23T06:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.166658 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.166679 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.166686 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.166697 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.166705 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:09Z","lastTransitionTime":"2025-11-23T06:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.251556 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:46:09 crc kubenswrapper[4681]: E1123 06:46:09.251758 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.268393 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.268527 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.268601 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.268664 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.268729 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:09Z","lastTransitionTime":"2025-11-23T06:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.370319 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.370347 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.370356 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.370368 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.370376 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:09Z","lastTransitionTime":"2025-11-23T06:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.471963 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.471982 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.471990 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.472000 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.472008 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:09Z","lastTransitionTime":"2025-11-23T06:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.573237 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.573267 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.573275 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.573284 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.573291 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:09Z","lastTransitionTime":"2025-11-23T06:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.675095 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.675130 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.675139 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.675148 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.675156 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:09Z","lastTransitionTime":"2025-11-23T06:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.776544 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.776573 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.776581 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.776590 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.776597 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:09Z","lastTransitionTime":"2025-11-23T06:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.878000 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.878024 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.878032 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.878041 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.878048 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:09Z","lastTransitionTime":"2025-11-23T06:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.979091 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.979144 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.979153 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.979167 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:09 crc kubenswrapper[4681]: I1123 06:46:09.979193 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:09Z","lastTransitionTime":"2025-11-23T06:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.081196 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.081221 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.081229 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.081239 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.081246 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:10Z","lastTransitionTime":"2025-11-23T06:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.182850 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.182885 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.182895 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.182907 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.182915 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:10Z","lastTransitionTime":"2025-11-23T06:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.251422 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:10 crc kubenswrapper[4681]: E1123 06:46:10.251555 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.251979 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:10 crc kubenswrapper[4681]: E1123 06:46:10.252024 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.252106 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:10 crc kubenswrapper[4681]: E1123 06:46:10.252495 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.284491 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.284520 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.284530 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.284542 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.284551 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:10Z","lastTransitionTime":"2025-11-23T06:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.385843 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.385870 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.385878 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.385887 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.385894 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:10Z","lastTransitionTime":"2025-11-23T06:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.487442 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.487486 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.487495 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.487504 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.487513 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:10Z","lastTransitionTime":"2025-11-23T06:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.588782 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.588813 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.588822 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.588832 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.588841 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:10Z","lastTransitionTime":"2025-11-23T06:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.690747 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.690781 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.690789 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.690801 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.690809 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:10Z","lastTransitionTime":"2025-11-23T06:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.793095 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.793159 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.793170 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.793182 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.793192 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:10Z","lastTransitionTime":"2025-11-23T06:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.894237 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.894266 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.894277 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.894289 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.894298 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:10Z","lastTransitionTime":"2025-11-23T06:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.995953 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.995978 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.995986 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.995997 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:10 crc kubenswrapper[4681]: I1123 06:46:10.996003 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:10Z","lastTransitionTime":"2025-11-23T06:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.097724 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.097836 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.097902 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.097971 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.098036 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:11Z","lastTransitionTime":"2025-11-23T06:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.200128 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.200203 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.200214 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.200225 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.200234 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:11Z","lastTransitionTime":"2025-11-23T06:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.251896 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:46:11 crc kubenswrapper[4681]: E1123 06:46:11.251997 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.302664 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.302688 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.302696 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.302705 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.302712 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:11Z","lastTransitionTime":"2025-11-23T06:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.404553 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.404573 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.404581 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.404591 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.404598 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:11Z","lastTransitionTime":"2025-11-23T06:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.506560 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.506592 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.506601 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.506609 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.506616 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:11Z","lastTransitionTime":"2025-11-23T06:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.607885 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.607949 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.607967 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.607984 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.607995 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:11Z","lastTransitionTime":"2025-11-23T06:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.709129 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.709153 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.709161 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.709172 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.709180 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:11Z","lastTransitionTime":"2025-11-23T06:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.811023 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.811182 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.811269 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.811348 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.811412 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:11Z","lastTransitionTime":"2025-11-23T06:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.912689 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.912778 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.912836 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.912898 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:11 crc kubenswrapper[4681]: I1123 06:46:11.912972 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:11Z","lastTransitionTime":"2025-11-23T06:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.015075 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.015104 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.015112 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.015123 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.015131 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:12Z","lastTransitionTime":"2025-11-23T06:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.116264 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.116486 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.116496 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.116508 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.116516 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:12Z","lastTransitionTime":"2025-11-23T06:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.218410 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.218444 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.218452 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.218478 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.218487 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:12Z","lastTransitionTime":"2025-11-23T06:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.251662 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.251701 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:12 crc kubenswrapper[4681]: E1123 06:46:12.251760 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.251667 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:12 crc kubenswrapper[4681]: E1123 06:46:12.251974 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:12 crc kubenswrapper[4681]: E1123 06:46:12.251855 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.320500 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.320525 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.320533 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.320544 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.320553 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:12Z","lastTransitionTime":"2025-11-23T06:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.422414 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.422447 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.422480 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.422492 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.422501 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:12Z","lastTransitionTime":"2025-11-23T06:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.524105 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.524138 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.524146 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.524163 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.524173 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:12Z","lastTransitionTime":"2025-11-23T06:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.626094 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.626116 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.626125 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.626135 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.626145 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:12Z","lastTransitionTime":"2025-11-23T06:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.727286 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.727309 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.727318 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.727327 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.727335 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:12Z","lastTransitionTime":"2025-11-23T06:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.829182 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.829206 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.829214 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.829223 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.829231 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:12Z","lastTransitionTime":"2025-11-23T06:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.931179 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.931204 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.931213 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.931224 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:12 crc kubenswrapper[4681]: I1123 06:46:12.931233 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:12Z","lastTransitionTime":"2025-11-23T06:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:13 crc kubenswrapper[4681]: I1123 06:46:13.032866 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:13 crc kubenswrapper[4681]: I1123 06:46:13.032889 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:13 crc kubenswrapper[4681]: I1123 06:46:13.032897 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:13 crc kubenswrapper[4681]: I1123 06:46:13.032906 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:13 crc kubenswrapper[4681]: I1123 06:46:13.032913 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:13Z","lastTransitionTime":"2025-11-23T06:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 23 06:46:13 crc kubenswrapper[4681]: I1123 06:46:13.251230 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z"
Nov 23 06:46:13 crc kubenswrapper[4681]: E1123 06:46:13.251320 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043"
Nov 23 06:46:13 crc kubenswrapper[4681]: I1123 06:46:13.274663 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=36.274653472 podStartE2EDuration="36.274653472s" podCreationTimestamp="2025-11-23 06:45:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:13.263698538 +0000 UTC m=+110.333207775" watchObservedRunningTime="2025-11-23 06:46:13.274653472 +0000 UTC m=+110.344162709"
Nov 23 06:46:13 crc kubenswrapper[4681]: I1123 06:46:13.281997 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podStartSLOduration=85.281989309 podStartE2EDuration="1m25.281989309s" podCreationTimestamp="2025-11-23 06:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:13.281877236 +0000 UTC m=+110.351386474" watchObservedRunningTime="2025-11-23 06:46:13.281989309 +0000 UTC m=+110.351498546"
Nov 23 06:46:13 crc kubenswrapper[4681]: I1123 06:46:13.304778 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-qgr2n" podStartSLOduration=85.304762955 podStartE2EDuration="1m25.304762955s" podCreationTimestamp="2025-11-23 06:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:13.303915217 +0000 UTC m=+110.373424453" watchObservedRunningTime="2025-11-23 06:46:13.304762955 +0000 UTC m=+110.374272192"
Nov 23 06:46:13 crc kubenswrapper[4681]: I1123 06:46:13.304924 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-2lhx5" podStartSLOduration=85.304920893 podStartE2EDuration="1m25.304920893s" podCreationTimestamp="2025-11-23 06:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:13.292490587 +0000 UTC m=+110.361999813" watchObservedRunningTime="2025-11-23 06:46:13.304920893 +0000 UTC m=+110.374430130"
Nov 23 06:46:13 crc kubenswrapper[4681]: I1123 06:46:13.325407 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=10.325394641 podStartE2EDuration="10.325394641s" podCreationTimestamp="2025-11-23 06:46:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:13.323363952 +0000 UTC m=+110.392873189" watchObservedRunningTime="2025-11-23 06:46:13.325394641 +0000 UTC m=+110.394903878"
Nov 23 06:46:13 crc kubenswrapper[4681]: I1123 06:46:13.334627 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=85.33461345 podStartE2EDuration="1m25.33461345s" podCreationTimestamp="2025-11-23 06:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:13.334495438 +0000 UTC m=+110.404004675" watchObservedRunningTime="2025-11-23 06:46:13.33461345 +0000 UTC m=+110.404122688"
Nov 23 06:46:13 crc kubenswrapper[4681]: I1123 06:46:13.369217 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=57.369203476 podStartE2EDuration="57.369203476s" podCreationTimestamp="2025-11-23 06:45:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:13.369172077 +0000 UTC m=+110.438681314" watchObservedRunningTime="2025-11-23 06:46:13.369203476 +0000 UTC m=+110.438712713"
Nov 23 06:46:13 crc kubenswrapper[4681]: I1123 06:46:13.385906 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-l7wvz" podStartSLOduration=86.385895612 podStartE2EDuration="1m26.385895612s" podCreationTimestamp="2025-11-23 06:44:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:13.385369038 +0000 UTC m=+110.454878275" watchObservedRunningTime="2025-11-23 06:46:13.385895612 +0000 UTC m=+110.455404848"
Nov 23 06:46:13 crc kubenswrapper[4681]: I1123 06:46:13.411524 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=91.411509526 podStartE2EDuration="1m31.411509526s" podCreationTimestamp="2025-11-23 06:44:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:13.403149377 +0000 UTC m=+110.472658614" watchObservedRunningTime="2025-11-23 06:46:13.411509526 +0000 UTC m=+110.481018762"
Nov 23 06:46:13 crc kubenswrapper[4681]: I1123 06:46:13.438496 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-jcxvt" podStartSLOduration=86.438481351 podStartE2EDuration="1m26.438481351s" podCreationTimestamp="2025-11-23 06:44:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:13.437502705 +0000 UTC m=+110.507011942" watchObservedRunningTime="2025-11-23 06:46:13.438481351 +0000 UTC m=+110.507990588"
Nov 23 06:46:13 crc kubenswrapper[4681]: I1123 06:46:13.446124 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jvlq6" podStartSLOduration=85.446114128 podStartE2EDuration="1m25.446114128s" podCreationTimestamp="2025-11-23 06:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:13.445928838 +0000 UTC m=+110.515438076" watchObservedRunningTime="2025-11-23 06:46:13.446114128 +0000 UTC m=+110.515623355"
Nov 23 06:46:14 crc kubenswrapper[4681]: I1123 06:46:14.251579 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 06:46:14 crc kubenswrapper[4681]: I1123 06:46:14.251593 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:46:14 crc kubenswrapper[4681]: E1123 06:46:14.251668 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 06:46:14 crc kubenswrapper[4681]: I1123 06:46:14.251693 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 06:46:14 crc kubenswrapper[4681]: E1123 06:46:14.251750 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 23 06:46:14 crc kubenswrapper[4681]: E1123 06:46:14.251851 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 23 06:46:15 crc kubenswrapper[4681]: I1123 06:46:15.251472 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z"
Nov 23 06:46:15 crc kubenswrapper[4681]: E1123 06:46:15.251673 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043"
Nov 23 06:46:16 crc kubenswrapper[4681]: I1123 06:46:16.250718 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 06:46:16 crc kubenswrapper[4681]: E1123 06:46:16.250789 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 06:46:16 crc kubenswrapper[4681]: I1123 06:46:16.250851 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 06:46:16 crc kubenswrapper[4681]: E1123 06:46:16.250968 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 23 06:46:16 crc kubenswrapper[4681]: I1123 06:46:16.251075 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:46:16 crc kubenswrapper[4681]: E1123 06:46:16.251203 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Has your network provider started?"} Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.251489 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:46:17 crc kubenswrapper[4681]: E1123 06:46:17.251575 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.309567 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.309594 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.309602 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.309613 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.309621 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:17Z","lastTransitionTime":"2025-11-23T06:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.411413 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.411520 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.411592 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.411665 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.411722 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:17Z","lastTransitionTime":"2025-11-23T06:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.513233 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.513264 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.513273 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.513285 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.513294 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:17Z","lastTransitionTime":"2025-11-23T06:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.614798 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.614827 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.614835 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.614852 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.614860 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:17Z","lastTransitionTime":"2025-11-23T06:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.717041 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.717065 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.717073 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.717083 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.717091 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:17Z","lastTransitionTime":"2025-11-23T06:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.752115 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.752143 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.752152 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.752163 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.752171 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:17Z","lastTransitionTime":"2025-11-23T06:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.779077 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-9xrh9"] Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.779546 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9xrh9" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.781046 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.781256 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.781600 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.781812 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.854261 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a67f149f-93e1-450b-821c-e1124a771278-service-ca\") pod \"cluster-version-operator-5c965bbfc6-9xrh9\" (UID: \"a67f149f-93e1-450b-821c-e1124a771278\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9xrh9" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.854308 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a67f149f-93e1-450b-821c-e1124a771278-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-9xrh9\" (UID: \"a67f149f-93e1-450b-821c-e1124a771278\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9xrh9" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.854324 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a67f149f-93e1-450b-821c-e1124a771278-kube-api-access\") pod 
\"cluster-version-operator-5c965bbfc6-9xrh9\" (UID: \"a67f149f-93e1-450b-821c-e1124a771278\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9xrh9" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.854340 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a67f149f-93e1-450b-821c-e1124a771278-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-9xrh9\" (UID: \"a67f149f-93e1-450b-821c-e1124a771278\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9xrh9" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.854355 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a67f149f-93e1-450b-821c-e1124a771278-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-9xrh9\" (UID: \"a67f149f-93e1-450b-821c-e1124a771278\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9xrh9" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.955249 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a67f149f-93e1-450b-821c-e1124a771278-service-ca\") pod \"cluster-version-operator-5c965bbfc6-9xrh9\" (UID: \"a67f149f-93e1-450b-821c-e1124a771278\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9xrh9" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.955292 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a67f149f-93e1-450b-821c-e1124a771278-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-9xrh9\" (UID: \"a67f149f-93e1-450b-821c-e1124a771278\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9xrh9" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.955311 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a67f149f-93e1-450b-821c-e1124a771278-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-9xrh9\" (UID: \"a67f149f-93e1-450b-821c-e1124a771278\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9xrh9" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.955329 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a67f149f-93e1-450b-821c-e1124a771278-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-9xrh9\" (UID: \"a67f149f-93e1-450b-821c-e1124a771278\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9xrh9" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.955361 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a67f149f-93e1-450b-821c-e1124a771278-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-9xrh9\" (UID: \"a67f149f-93e1-450b-821c-e1124a771278\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9xrh9" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.955402 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a67f149f-93e1-450b-821c-e1124a771278-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-9xrh9\" 
(UID: \"a67f149f-93e1-450b-821c-e1124a771278\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9xrh9" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.955408 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a67f149f-93e1-450b-821c-e1124a771278-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-9xrh9\" (UID: \"a67f149f-93e1-450b-821c-e1124a771278\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9xrh9" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.956528 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a67f149f-93e1-450b-821c-e1124a771278-service-ca\") pod \"cluster-version-operator-5c965bbfc6-9xrh9\" (UID: \"a67f149f-93e1-450b-821c-e1124a771278\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9xrh9" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.959410 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a67f149f-93e1-450b-821c-e1124a771278-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-9xrh9\" (UID: \"a67f149f-93e1-450b-821c-e1124a771278\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9xrh9" Nov 23 06:46:17 crc kubenswrapper[4681]: I1123 06:46:17.968200 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a67f149f-93e1-450b-821c-e1124a771278-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-9xrh9\" (UID: \"a67f149f-93e1-450b-821c-e1124a771278\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9xrh9" Nov 23 06:46:18 crc kubenswrapper[4681]: I1123 06:46:18.089533 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9xrh9" Nov 23 06:46:18 crc kubenswrapper[4681]: I1123 06:46:18.251566 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:18 crc kubenswrapper[4681]: I1123 06:46:18.251609 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:18 crc kubenswrapper[4681]: E1123 06:46:18.251644 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:18 crc kubenswrapper[4681]: E1123 06:46:18.251704 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:18 crc kubenswrapper[4681]: I1123 06:46:18.251566 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:18 crc kubenswrapper[4681]: E1123 06:46:18.251772 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:18 crc kubenswrapper[4681]: I1123 06:46:18.664393 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9xrh9" event={"ID":"a67f149f-93e1-450b-821c-e1124a771278","Type":"ContainerStarted","Data":"0e9b7696dee71748b0954eab76fe97381685eb7ec3c4b6093ed3ffb2a1d73598"} Nov 23 06:46:18 crc kubenswrapper[4681]: I1123 06:46:18.664431 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9xrh9" event={"ID":"a67f149f-93e1-450b-821c-e1124a771278","Type":"ContainerStarted","Data":"5e53318970e06c8e5592aacd03a735f9009312cfc387b398748f5e6bf9b8e13b"} Nov 23 06:46:18 crc kubenswrapper[4681]: I1123 06:46:18.673494 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9xrh9" podStartSLOduration=91.673483432 podStartE2EDuration="1m31.673483432s" podCreationTimestamp="2025-11-23 06:44:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:18.673029456 +0000 UTC m=+115.742538693" watchObservedRunningTime="2025-11-23 06:46:18.673483432 +0000 UTC m=+115.742992670" Nov 23 06:46:19 crc kubenswrapper[4681]: I1123 06:46:19.251767 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:46:19 crc kubenswrapper[4681]: E1123 06:46:19.251855 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:46:20 crc kubenswrapper[4681]: I1123 06:46:20.250920 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:20 crc kubenswrapper[4681]: E1123 06:46:20.251216 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:20 crc kubenswrapper[4681]: I1123 06:46:20.250936 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:20 crc kubenswrapper[4681]: E1123 06:46:20.251280 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:20 crc kubenswrapper[4681]: I1123 06:46:20.250936 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:20 crc kubenswrapper[4681]: E1123 06:46:20.251334 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:20 crc kubenswrapper[4681]: I1123 06:46:20.251427 4681 scope.go:117] "RemoveContainer" containerID="1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72" Nov 23 06:46:20 crc kubenswrapper[4681]: E1123 06:46:20.251569 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-l6bqb_openshift-ovn-kubernetes(1abfb530-b7ac-4724-8e43-d87ef92f1949)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" Nov 23 06:46:21 crc kubenswrapper[4681]: I1123 06:46:21.251497 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:46:21 crc kubenswrapper[4681]: E1123 06:46:21.251588 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:46:21 crc kubenswrapper[4681]: I1123 06:46:21.671653 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2lhx5_4094b291-8b0b-43c0-96e9-f08a9ef53c8b/kube-multus/1.log" Nov 23 06:46:21 crc kubenswrapper[4681]: I1123 06:46:21.672007 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2lhx5_4094b291-8b0b-43c0-96e9-f08a9ef53c8b/kube-multus/0.log" Nov 23 06:46:21 crc kubenswrapper[4681]: I1123 06:46:21.672041 4681 generic.go:334] "Generic (PLEG): container finished" podID="4094b291-8b0b-43c0-96e9-f08a9ef53c8b" containerID="85fe493c1777c5f063e67eac13f4c3417da679d1376c258907c8008b544bdbb4" exitCode=1 Nov 23 06:46:21 crc kubenswrapper[4681]: I1123 06:46:21.672063 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2lhx5" event={"ID":"4094b291-8b0b-43c0-96e9-f08a9ef53c8b","Type":"ContainerDied","Data":"85fe493c1777c5f063e67eac13f4c3417da679d1376c258907c8008b544bdbb4"} Nov 23 06:46:21 crc kubenswrapper[4681]: I1123 06:46:21.672086 4681 scope.go:117] "RemoveContainer" containerID="c5727a49cd7333b260149719be661d1dd427357e3e8e08a3680476dc175b8066" Nov 23 06:46:21 crc kubenswrapper[4681]: I1123 06:46:21.672395 4681 scope.go:117] "RemoveContainer" containerID="85fe493c1777c5f063e67eac13f4c3417da679d1376c258907c8008b544bdbb4" Nov 23 06:46:21 crc kubenswrapper[4681]: E1123 06:46:21.672563 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-2lhx5_openshift-multus(4094b291-8b0b-43c0-96e9-f08a9ef53c8b)\"" pod="openshift-multus/multus-2lhx5" podUID="4094b291-8b0b-43c0-96e9-f08a9ef53c8b" Nov 23 06:46:22 crc kubenswrapper[4681]: I1123 06:46:22.251277 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:22 crc kubenswrapper[4681]: I1123 06:46:22.251330 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:22 crc kubenswrapper[4681]: E1123 06:46:22.251359 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:22 crc kubenswrapper[4681]: I1123 06:46:22.251279 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:22 crc kubenswrapper[4681]: E1123 06:46:22.251405 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:22 crc kubenswrapper[4681]: E1123 06:46:22.251479 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:22 crc kubenswrapper[4681]: I1123 06:46:22.675191 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2lhx5_4094b291-8b0b-43c0-96e9-f08a9ef53c8b/kube-multus/1.log" Nov 23 06:46:23 crc kubenswrapper[4681]: E1123 06:46:23.213851 4681 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 23 06:46:23 crc kubenswrapper[4681]: I1123 06:46:23.252368 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:46:23 crc kubenswrapper[4681]: E1123 06:46:23.252479 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:46:23 crc kubenswrapper[4681]: E1123 06:46:23.318502 4681 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 23 06:46:24 crc kubenswrapper[4681]: I1123 06:46:24.251038 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:24 crc kubenswrapper[4681]: I1123 06:46:24.251054 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:24 crc kubenswrapper[4681]: E1123 06:46:24.251122 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:24 crc kubenswrapper[4681]: I1123 06:46:24.251142 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:24 crc kubenswrapper[4681]: E1123 06:46:24.251199 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:24 crc kubenswrapper[4681]: E1123 06:46:24.251254 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:25 crc kubenswrapper[4681]: I1123 06:46:25.251535 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:46:25 crc kubenswrapper[4681]: E1123 06:46:25.251646 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:46:26 crc kubenswrapper[4681]: I1123 06:46:26.250805 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:26 crc kubenswrapper[4681]: E1123 06:46:26.250897 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:26 crc kubenswrapper[4681]: I1123 06:46:26.250805 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:26 crc kubenswrapper[4681]: E1123 06:46:26.251005 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:26 crc kubenswrapper[4681]: I1123 06:46:26.251193 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:26 crc kubenswrapper[4681]: E1123 06:46:26.251326 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:27 crc kubenswrapper[4681]: I1123 06:46:27.251125 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:46:27 crc kubenswrapper[4681]: E1123 06:46:27.251240 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:46:28 crc kubenswrapper[4681]: I1123 06:46:28.250835 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:28 crc kubenswrapper[4681]: E1123 06:46:28.250950 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:28 crc kubenswrapper[4681]: I1123 06:46:28.250852 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:28 crc kubenswrapper[4681]: I1123 06:46:28.250835 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:28 crc kubenswrapper[4681]: E1123 06:46:28.251080 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:28 crc kubenswrapper[4681]: E1123 06:46:28.251150 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:28 crc kubenswrapper[4681]: E1123 06:46:28.319748 4681 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 23 06:46:29 crc kubenswrapper[4681]: I1123 06:46:29.251492 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:46:29 crc kubenswrapper[4681]: E1123 06:46:29.251586 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:46:30 crc kubenswrapper[4681]: I1123 06:46:30.250721 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:30 crc kubenswrapper[4681]: I1123 06:46:30.250756 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:30 crc kubenswrapper[4681]: E1123 06:46:30.250807 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:30 crc kubenswrapper[4681]: I1123 06:46:30.250814 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:30 crc kubenswrapper[4681]: E1123 06:46:30.250869 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:30 crc kubenswrapper[4681]: E1123 06:46:30.250925 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:31 crc kubenswrapper[4681]: I1123 06:46:31.251626 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:46:31 crc kubenswrapper[4681]: E1123 06:46:31.251753 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:46:32 crc kubenswrapper[4681]: I1123 06:46:32.250907 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:32 crc kubenswrapper[4681]: I1123 06:46:32.250933 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:32 crc kubenswrapper[4681]: E1123 06:46:32.251004 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:32 crc kubenswrapper[4681]: I1123 06:46:32.251024 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:32 crc kubenswrapper[4681]: E1123 06:46:32.251082 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:32 crc kubenswrapper[4681]: E1123 06:46:32.251137 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:33 crc kubenswrapper[4681]: I1123 06:46:33.251804 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:46:33 crc kubenswrapper[4681]: E1123 06:46:33.252836 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:46:33 crc kubenswrapper[4681]: E1123 06:46:33.320062 4681 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 23 06:46:34 crc kubenswrapper[4681]: I1123 06:46:34.251186 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:34 crc kubenswrapper[4681]: I1123 06:46:34.251216 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:34 crc kubenswrapper[4681]: I1123 06:46:34.251234 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:34 crc kubenswrapper[4681]: E1123 06:46:34.251288 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:34 crc kubenswrapper[4681]: E1123 06:46:34.251359 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:34 crc kubenswrapper[4681]: E1123 06:46:34.251511 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:34 crc kubenswrapper[4681]: I1123 06:46:34.251625 4681 scope.go:117] "RemoveContainer" containerID="85fe493c1777c5f063e67eac13f4c3417da679d1376c258907c8008b544bdbb4" Nov 23 06:46:34 crc kubenswrapper[4681]: I1123 06:46:34.699540 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2lhx5_4094b291-8b0b-43c0-96e9-f08a9ef53c8b/kube-multus/1.log" Nov 23 06:46:34 crc kubenswrapper[4681]: I1123 06:46:34.699714 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2lhx5" event={"ID":"4094b291-8b0b-43c0-96e9-f08a9ef53c8b","Type":"ContainerStarted","Data":"dcf9640496fa8d1e0179de62ae7b6c308f4bb9fc5abaeebd84239dba5e101a53"} Nov 23 06:46:35 crc kubenswrapper[4681]: I1123 06:46:35.251725 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:46:35 crc kubenswrapper[4681]: E1123 06:46:35.251831 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:46:35 crc kubenswrapper[4681]: I1123 06:46:35.252375 4681 scope.go:117] "RemoveContainer" containerID="1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72" Nov 23 06:46:35 crc kubenswrapper[4681]: I1123 06:46:35.703229 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l6bqb_1abfb530-b7ac-4724-8e43-d87ef92f1949/ovnkube-controller/3.log" Nov 23 06:46:35 crc kubenswrapper[4681]: I1123 06:46:35.705803 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" event={"ID":"1abfb530-b7ac-4724-8e43-d87ef92f1949","Type":"ContainerStarted","Data":"d3ee7b1cd00bbc909ca76a6e898c08dea60471e186c3b7e31f59c07fb0b7bebf"} Nov 23 06:46:35 crc kubenswrapper[4681]: I1123 06:46:35.706160 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:46:35 crc kubenswrapper[4681]: I1123 06:46:35.954283 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" podStartSLOduration=107.954267197 podStartE2EDuration="1m47.954267197s" podCreationTimestamp="2025-11-23 06:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:35.72576284 +0000 UTC m=+132.795272078" watchObservedRunningTime="2025-11-23 06:46:35.954267197 +0000 UTC m=+133.023776433" Nov 23 06:46:35 crc kubenswrapper[4681]: I1123 06:46:35.955157 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kv72z"] Nov 23 06:46:35 crc kubenswrapper[4681]: I1123 06:46:35.955267 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:46:35 crc kubenswrapper[4681]: E1123 06:46:35.955358 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:46:36 crc kubenswrapper[4681]: I1123 06:46:36.251054 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:36 crc kubenswrapper[4681]: E1123 06:46:36.251155 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:36 crc kubenswrapper[4681]: I1123 06:46:36.251218 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:36 crc kubenswrapper[4681]: I1123 06:46:36.251226 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:36 crc kubenswrapper[4681]: E1123 06:46:36.251344 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:36 crc kubenswrapper[4681]: E1123 06:46:36.251431 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:37 crc kubenswrapper[4681]: I1123 06:46:37.251049 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:46:37 crc kubenswrapper[4681]: E1123 06:46:37.251203 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv72z" podUID="6eef1a94-78a8-4389-b1fe-2db3786ba043" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.251719 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:38 crc kubenswrapper[4681]: E1123 06:46:38.251981 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.251769 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:38 crc kubenswrapper[4681]: E1123 06:46:38.252050 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.251734 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:38 crc kubenswrapper[4681]: E1123 06:46:38.252096 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.349046 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.373283 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-d7f7c"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.373694 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.374224 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9qp5r"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.374544 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9qp5r" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.376073 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-rxxxv"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.376409 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gmtff"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.376744 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gmtff" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.376958 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72qnq"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.377036 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rxxxv" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.377215 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72qnq" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.377735 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.378916 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-59rqt"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.379283 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-59rqt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.379524 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-sqg25"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.379861 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sqg25" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.383150 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.387845 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.387916 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.392560 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.393952 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.394217 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.394351 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.394692 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.395847 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.395861 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.395850 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.396159 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.396310 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.396373 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.396433 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.396448 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.396484 4681 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.396599 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.396612 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.396665 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.396687 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.396720 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.396754 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nth4c"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.396946 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.396973 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.397087 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-nth4c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.397092 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.397411 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-qkccb"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.397140 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.397659 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.397676 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-qkccb" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.397298 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.397832 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.397895 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.397914 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.398046 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.398103 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.398312 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.398568 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.398725 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.398760 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.398842 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.398910 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.400342 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.400376 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.400516 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.400565 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.400519 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.400735 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.400992 4681 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.401116 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.401453 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-42z7r"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.401821 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-42z7r" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.402824 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nk54m"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.403208 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nk54m" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.403246 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-cxwjl"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.407686 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-cxwjl" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.409887 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pmxqk"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.410402 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pmxqk" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.412536 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.412633 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.412683 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.412705 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.413289 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.413317 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.413883 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.414387 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b2dpx"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.429010 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.429239 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.429420 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.429443 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.429597 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.429663 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.429906 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.429971 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mj9j9"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.430090 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.430176 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.430266 4681 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bfkn6"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.430601 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cq2gd"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.430787 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b2dpx" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.430877 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.431198 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mj9j9" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.431375 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bfkn6" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.431495 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.431600 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.431698 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.431811 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.431882 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.432155 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.432178 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.432184 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.432251 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.432259 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.432252 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.432365 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.432436 4681 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.432605 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.435139 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.435956 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.436958 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-c2pf5"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.437236 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gtltp"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.438293 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.439667 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-ljsqd"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.439992 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hkqhz"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.440212 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gtltp" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.440253 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hkqhz" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.440218 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-j7swg"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.440408 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-ljsqd" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.441516 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-j7swg" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.443978 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-gk8jd"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.444409 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gk8jd" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.444665 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dl2f8"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.444943 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dl2f8" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.445279 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z5fk5"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.445672 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z5fk5" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.446534 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cn5t4"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.446926 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cn5t4" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.449652 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5kgmj"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.449992 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fdgfd"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.450225 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-7jdfn"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.450525 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-7jdfn" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.450817 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5kgmj" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.450957 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fdgfd" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.452501 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8mv9d"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.456644 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.457746 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.458965 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ffckq"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.463189 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-8mv9d" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.468542 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lqxzb"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.469121 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c26v4"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.469695 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c26v4" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.469916 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ffckq" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.470115 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-lqxzb" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.470141 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.470732 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.470871 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.470981 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.471059 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.471205 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.471546 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.471692 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.471816 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.472714 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.473067 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.473222 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.473341 4681 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.473488 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.473888 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.474058 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.475504 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.490426 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.492160 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.492271 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-rxxxv"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.492311 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhz6x"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.492690 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g5zj2"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.492992 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-g5zj2" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.493178 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhz6x" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.493744 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.495319 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-b7ms9"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.495641 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsxts"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.495973 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsxts" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.495986 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-b7ms9" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.496867 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398005-5x47l"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.497227 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-5x47l" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.500380 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-d7f7c"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.501400 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9qp5r"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.503596 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.506344 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.507709 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gmtff"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.509452 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-cxwjl"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.510130 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-42z7r"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.515427 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.515562 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72qnq"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.515578 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-59rqt"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.518147 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nth4c"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.518188 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-qkccb"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.518200 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-hckp7"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.518689 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-hckp7" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.521170 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pmxqk"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.521197 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b2dpx"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.522752 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dl2f8"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.524650 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bfkn6"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.525729 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hkqhz"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.526330 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nk54m"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.526901 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cq2gd"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.528080 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mj9j9"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.528858 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8mv9d"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.530347 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-j7swg"] Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.531970 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c0e3f5d0-037c-48b9-888f-375c10e5f269-service-ca\") pod \"console-f9d7485db-59rqt\" (UID: \"c0e3f5d0-037c-48b9-888f-375c10e5f269\") " pod="openshift-console/console-f9d7485db-59rqt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.531998 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d2b9e38-a7cf-43bb-aa89-861571046aee-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pmxqk\" (UID: \"7d2b9e38-a7cf-43bb-aa89-861571046aee\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pmxqk" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532018 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/72c9ca30-e13b-48dd-9c5d-05e6dd4a3368-audit-policies\") pod \"apiserver-7bbb656c7d-rxxxv\" (UID: \"72c9ca30-e13b-48dd-9c5d-05e6dd4a3368\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rxxxv" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532041 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/76f32e91-6759-4608-9f24-88ed1d5d769e-machine-approver-tls\") pod \"machine-approver-56656f9798-sqg25\" (UID: \"76f32e91-6759-4608-9f24-88ed1d5d769e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sqg25" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532055 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c09a8f6b-1519-4cc8-a1e5-ef0261619f3e-profile-collector-cert\") pod \"catalog-operator-68c6474976-hkqhz\" (UID: \"c09a8f6b-1519-4cc8-a1e5-ef0261619f3e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hkqhz" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532068 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c0e3f5d0-037c-48b9-888f-375c10e5f269-console-serving-cert\") pod \"console-f9d7485db-59rqt\" (UID: \"c0e3f5d0-037c-48b9-888f-375c10e5f269\") " pod="openshift-console/console-f9d7485db-59rqt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532082 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8450c87-7b9b-47cf-86ce-145ef517f494-serving-cert\") pod \"route-controller-manager-6576b87f9c-9qp5r\" (UID: \"a8450c87-7b9b-47cf-86ce-145ef517f494\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9qp5r" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532105 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f396efd2-0a8e-44bb-98c8-ad10c3383cef-trusted-ca-bundle\") pod \"apiserver-76f77b778f-d7f7c\" (UID: \"f396efd2-0a8e-44bb-98c8-ad10c3383cef\") " pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532119 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a73787c8-407a-4e02-8c50-7205b96c76b8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-cxwjl\" (UID: \"a73787c8-407a-4e02-8c50-7205b96c76b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cxwjl" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532134 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a57b9495-9a8d-4ec8-8a4d-92220d911386-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nk54m\" (UID: \"a57b9495-9a8d-4ec8-8a4d-92220d911386\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nk54m" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532159 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrshg\" (UniqueName: \"kubernetes.io/projected/2fcb132e-fadc-4c84-a103-2e821e006bfa-kube-api-access-mrshg\") pod \"cluster-samples-operator-665b6dd947-gmtff\" (UID: \"2fcb132e-fadc-4c84-a103-2e821e006bfa\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gmtff" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532173 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/7d2b9e38-a7cf-43bb-aa89-861571046aee-service-ca-bundle\") pod \"authentication-operator-69f744f599-pmxqk\" (UID: \"7d2b9e38-a7cf-43bb-aa89-861571046aee\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pmxqk" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532187 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1ed3437e-7360-4cc6-a4d5-b54d2f761945-auth-proxy-config\") pod \"machine-config-operator-74547568cd-gk8jd\" (UID: \"1ed3437e-7360-4cc6-a4d5-b54d2f761945\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gk8jd" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532200 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f396efd2-0a8e-44bb-98c8-ad10c3383cef-audit-dir\") pod \"apiserver-76f77b778f-d7f7c\" (UID: \"f396efd2-0a8e-44bb-98c8-ad10c3383cef\") " pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532216 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/72c9ca30-e13b-48dd-9c5d-05e6dd4a3368-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-rxxxv\" (UID: \"72c9ca30-e13b-48dd-9c5d-05e6dd4a3368\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rxxxv" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532231 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a73787c8-407a-4e02-8c50-7205b96c76b8-images\") pod \"machine-api-operator-5694c8668f-cxwjl\" (UID: \"a73787c8-407a-4e02-8c50-7205b96c76b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cxwjl" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532245 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a57b9495-9a8d-4ec8-8a4d-92220d911386-config\") pod \"controller-manager-879f6c89f-nk54m\" (UID: \"a57b9495-9a8d-4ec8-8a4d-92220d911386\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nk54m" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532328 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f249707f-34f7-4964-9cd9-9c83df2f3056-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-b2dpx\" (UID: \"f249707f-34f7-4964-9cd9-9c83df2f3056\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b2dpx" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532343 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8450c87-7b9b-47cf-86ce-145ef517f494-config\") pod \"route-controller-manager-6576b87f9c-9qp5r\" (UID: \"a8450c87-7b9b-47cf-86ce-145ef517f494\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9qp5r" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532358 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/862e3345-8b2c-4009-b50c-0fd6025ac9dc-trusted-ca\") pod \"console-operator-58897d9998-nth4c\" (UID: \"862e3345-8b2c-4009-b50c-0fd6025ac9dc\") " pod="openshift-console-operator/console-operator-58897d9998-nth4c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532370 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/72c9ca30-e13b-48dd-9c5d-05e6dd4a3368-audit-dir\") pod \"apiserver-7bbb656c7d-rxxxv\" (UID: \"72c9ca30-e13b-48dd-9c5d-05e6dd4a3368\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rxxxv" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532402 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx44d\" (UniqueName: \"kubernetes.io/projected/c09a8f6b-1519-4cc8-a1e5-ef0261619f3e-kube-api-access-rx44d\") pod \"catalog-operator-68c6474976-hkqhz\" (UID: \"c09a8f6b-1519-4cc8-a1e5-ef0261619f3e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hkqhz" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532442 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmnt9\" (UniqueName: \"kubernetes.io/projected/c0e3f5d0-037c-48b9-888f-375c10e5f269-kube-api-access-hmnt9\") pod \"console-f9d7485db-59rqt\" (UID: \"c0e3f5d0-037c-48b9-888f-375c10e5f269\") " pod="openshift-console/console-f9d7485db-59rqt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532491 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/373b7163-d058-419c-b4c5-b76a80f78dfa-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-bhz6x\" (UID: \"373b7163-d058-419c-b4c5-b76a80f78dfa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhz6x" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532516 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d2b9e38-a7cf-43bb-aa89-861571046aee-config\") pod \"authentication-operator-69f744f599-pmxqk\" (UID: \"7d2b9e38-a7cf-43bb-aa89-861571046aee\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pmxqk" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532537 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2fcb132e-fadc-4c84-a103-2e821e006bfa-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gmtff\" (UID: \"2fcb132e-fadc-4c84-a103-2e821e006bfa\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gmtff" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532550 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzxtt\" (UniqueName: \"kubernetes.io/projected/dae5706a-d59e-40ba-9546-7bed3f4f77aa-kube-api-access-tzxtt\") pod \"marketplace-operator-79b997595-g5zj2\" (UID: \"dae5706a-d59e-40ba-9546-7bed3f4f77aa\") " pod="openshift-marketplace/marketplace-operator-79b997595-g5zj2" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532569 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/c0e3f5d0-037c-48b9-888f-375c10e5f269-oauth-serving-cert\") pod \"console-f9d7485db-59rqt\" (UID: \"c0e3f5d0-037c-48b9-888f-375c10e5f269\") " pod="openshift-console/console-f9d7485db-59rqt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532588 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c6f4ba4-aae8-4308-be38-b74b07116955-metrics-certs\") pod \"router-default-5444994796-b7ms9\" (UID: \"9c6f4ba4-aae8-4308-be38-b74b07116955\") " pod="openshift-ingress/router-default-5444994796-b7ms9" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532621 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx7wz\" (UniqueName: \"kubernetes.io/projected/862e3345-8b2c-4009-b50c-0fd6025ac9dc-kube-api-access-xx7wz\") pod \"console-operator-58897d9998-nth4c\" (UID: \"862e3345-8b2c-4009-b50c-0fd6025ac9dc\") " pod="openshift-console-operator/console-operator-58897d9998-nth4c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532649 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrlxf\" (UniqueName: \"kubernetes.io/projected/3d6df87c-65e5-4899-ad0a-22e9818da7d6-kube-api-access-rrlxf\") pod \"ingress-canary-hckp7\" (UID: \"3d6df87c-65e5-4899-ad0a-22e9818da7d6\") " pod="openshift-ingress-canary/ingress-canary-hckp7" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532668 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25f215c0-701b-4a75-9c19-6deeab862309-signing-key\") pod \"service-ca-9c57cc56f-lqxzb\" (UID: \"25f215c0-701b-4a75-9c19-6deeab862309\") " pod="openshift-service-ca/service-ca-9c57cc56f-lqxzb" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532691 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f396efd2-0a8e-44bb-98c8-ad10c3383cef-encryption-config\") pod \"apiserver-76f77b778f-d7f7c\" (UID: \"f396efd2-0a8e-44bb-98c8-ad10c3383cef\") " pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532711 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/28a78e7d-ae79-4791-aa1f-6398f611c561-available-featuregates\") pod \"openshift-config-operator-7777fb866f-42z7r\" (UID: \"28a78e7d-ae79-4791-aa1f-6398f611c561\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-42z7r" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532728 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1946d763-61f9-468c-84d1-15f635ae5aa8-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-72qnq\" (UID: \"1946d763-61f9-468c-84d1-15f635ae5aa8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72qnq" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532747 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/25865701-6601-400a-8cca-606a3cabcc5d-mcc-auth-proxy-config\") 
pod \"machine-config-controller-84d6567774-bfkn6\" (UID: \"25865701-6601-400a-8cca-606a3cabcc5d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bfkn6" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532766 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76f32e91-6759-4608-9f24-88ed1d5d769e-config\") pod \"machine-approver-56656f9798-sqg25\" (UID: \"76f32e91-6759-4608-9f24-88ed1d5d769e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sqg25" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532782 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c0e3f5d0-037c-48b9-888f-375c10e5f269-console-oauth-config\") pod \"console-f9d7485db-59rqt\" (UID: \"c0e3f5d0-037c-48b9-888f-375c10e5f269\") " pod="openshift-console/console-f9d7485db-59rqt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532795 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8450c87-7b9b-47cf-86ce-145ef517f494-client-ca\") pod \"route-controller-manager-6576b87f9c-9qp5r\" (UID: \"a8450c87-7b9b-47cf-86ce-145ef517f494\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9qp5r" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532810 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f396efd2-0a8e-44bb-98c8-ad10c3383cef-etcd-serving-ca\") pod \"apiserver-76f77b778f-d7f7c\" (UID: \"f396efd2-0a8e-44bb-98c8-ad10c3383cef\") " pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532823 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vgjb\" (UniqueName: \"kubernetes.io/projected/28a78e7d-ae79-4791-aa1f-6398f611c561-kube-api-access-2vgjb\") pod \"openshift-config-operator-7777fb866f-42z7r\" (UID: \"28a78e7d-ae79-4791-aa1f-6398f611c561\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-42z7r" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532841 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk2h8\" (UniqueName: \"kubernetes.io/projected/a8450c87-7b9b-47cf-86ce-145ef517f494-kube-api-access-dk2h8\") pod \"route-controller-manager-6576b87f9c-9qp5r\" (UID: \"a8450c87-7b9b-47cf-86ce-145ef517f494\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9qp5r" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532860 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/862e3345-8b2c-4009-b50c-0fd6025ac9dc-config\") pod \"console-operator-58897d9998-nth4c\" (UID: \"862e3345-8b2c-4009-b50c-0fd6025ac9dc\") " pod="openshift-console-operator/console-operator-58897d9998-nth4c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532874 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f396efd2-0a8e-44bb-98c8-ad10c3383cef-serving-cert\") pod \"apiserver-76f77b778f-d7f7c\" 
(UID: \"f396efd2-0a8e-44bb-98c8-ad10c3383cef\") " pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532892 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st576\" (UniqueName: \"kubernetes.io/projected/cd4e2b49-bdc7-425a-877f-74938cd8a472-kube-api-access-st576\") pod \"openshift-apiserver-operator-796bbdcf4f-mj9j9\" (UID: \"cd4e2b49-bdc7-425a-877f-74938cd8a472\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mj9j9" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532906 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f396efd2-0a8e-44bb-98c8-ad10c3383cef-etcd-client\") pod \"apiserver-76f77b778f-d7f7c\" (UID: \"f396efd2-0a8e-44bb-98c8-ad10c3383cef\") " pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532924 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d6df87c-65e5-4899-ad0a-22e9818da7d6-cert\") pod \"ingress-canary-hckp7\" (UID: \"3d6df87c-65e5-4899-ad0a-22e9818da7d6\") " pod="openshift-ingress-canary/ingress-canary-hckp7" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.532943 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d2c3c50b-3800-4f8f-9b24-3063381cfd5e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gtltp\" (UID: \"d2c3c50b-3800-4f8f-9b24-3063381cfd5e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gtltp" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.533073 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd4e2b49-bdc7-425a-877f-74938cd8a472-config\") pod \"openshift-apiserver-operator-796bbdcf4f-mj9j9\" (UID: \"cd4e2b49-bdc7-425a-877f-74938cd8a472\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mj9j9" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.533097 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d2b9e38-a7cf-43bb-aa89-861571046aee-serving-cert\") pod \"authentication-operator-69f744f599-pmxqk\" (UID: \"7d2b9e38-a7cf-43bb-aa89-861571046aee\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pmxqk" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.533111 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/72c9ca30-e13b-48dd-9c5d-05e6dd4a3368-encryption-config\") pod \"apiserver-7bbb656c7d-rxxxv\" (UID: \"72c9ca30-e13b-48dd-9c5d-05e6dd4a3368\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rxxxv" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.533133 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2lqn\" (UniqueName: \"kubernetes.io/projected/a73787c8-407a-4e02-8c50-7205b96c76b8-kube-api-access-k2lqn\") pod \"machine-api-operator-5694c8668f-cxwjl\" (UID: \"a73787c8-407a-4e02-8c50-7205b96c76b8\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-cxwjl" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.533342 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/76f32e91-6759-4608-9f24-88ed1d5d769e-auth-proxy-config\") pod \"machine-approver-56656f9798-sqg25\" (UID: \"76f32e91-6759-4608-9f24-88ed1d5d769e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sqg25" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.533366 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgzc9\" (UniqueName: \"kubernetes.io/projected/e5135d02-57f8-48f3-96d3-af0fb70e8ac3-kube-api-access-zgzc9\") pod \"downloads-7954f5f757-qkccb\" (UID: \"e5135d02-57f8-48f3-96d3-af0fb70e8ac3\") " pod="openshift-console/downloads-7954f5f757-qkccb" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.533384 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/72c9ca30-e13b-48dd-9c5d-05e6dd4a3368-etcd-client\") pod \"apiserver-7bbb656c7d-rxxxv\" (UID: \"72c9ca30-e13b-48dd-9c5d-05e6dd4a3368\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rxxxv" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.533396 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/74c2583d-61ac-4c6e-8cb5-11427314ecad-metrics-tls\") pod \"dns-operator-744455d44c-8mv9d\" (UID: \"74c2583d-61ac-4c6e-8cb5-11427314ecad\") " pod="openshift-dns-operator/dns-operator-744455d44c-8mv9d" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.533412 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpj5j\" (UniqueName: \"kubernetes.io/projected/76f32e91-6759-4608-9f24-88ed1d5d769e-kube-api-access-zpj5j\") pod \"machine-approver-56656f9798-sqg25\" (UID: \"76f32e91-6759-4608-9f24-88ed1d5d769e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sqg25" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.533424 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a57b9495-9a8d-4ec8-8a4d-92220d911386-client-ca\") pod \"controller-manager-879f6c89f-nk54m\" (UID: \"a57b9495-9a8d-4ec8-8a4d-92220d911386\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nk54m" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.533437 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25f215c0-701b-4a75-9c19-6deeab862309-signing-cabundle\") pod \"service-ca-9c57cc56f-lqxzb\" (UID: \"25f215c0-701b-4a75-9c19-6deeab862309\") " pod="openshift-service-ca/service-ca-9c57cc56f-lqxzb" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.533483 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/862e3345-8b2c-4009-b50c-0fd6025ac9dc-serving-cert\") pod \"console-operator-58897d9998-nth4c\" (UID: \"862e3345-8b2c-4009-b50c-0fd6025ac9dc\") " pod="openshift-console-operator/console-operator-58897d9998-nth4c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 
06:46:38.533497 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1ed3437e-7360-4cc6-a4d5-b54d2f761945-proxy-tls\") pod \"machine-config-operator-74547568cd-gk8jd\" (UID: \"1ed3437e-7360-4cc6-a4d5-b54d2f761945\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gk8jd" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.533517 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0e3f5d0-037c-48b9-888f-375c10e5f269-trusted-ca-bundle\") pod \"console-f9d7485db-59rqt\" (UID: \"c0e3f5d0-037c-48b9-888f-375c10e5f269\") " pod="openshift-console/console-f9d7485db-59rqt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.533531 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f249707f-34f7-4964-9cd9-9c83df2f3056-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-b2dpx\" (UID: \"f249707f-34f7-4964-9cd9-9c83df2f3056\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b2dpx" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.533545 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f396efd2-0a8e-44bb-98c8-ad10c3383cef-image-import-ca\") pod \"apiserver-76f77b778f-d7f7c\" (UID: \"f396efd2-0a8e-44bb-98c8-ad10c3383cef\") " pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.533557 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72c9ca30-e13b-48dd-9c5d-05e6dd4a3368-serving-cert\") pod \"apiserver-7bbb656c7d-rxxxv\" (UID: \"72c9ca30-e13b-48dd-9c5d-05e6dd4a3368\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rxxxv" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.533570 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28a78e7d-ae79-4791-aa1f-6398f611c561-serving-cert\") pod \"openshift-config-operator-7777fb866f-42z7r\" (UID: \"28a78e7d-ae79-4791-aa1f-6398f611c561\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-42z7r" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.533583 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f396efd2-0a8e-44bb-98c8-ad10c3383cef-node-pullsecrets\") pod \"apiserver-76f77b778f-d7f7c\" (UID: \"f396efd2-0a8e-44bb-98c8-ad10c3383cef\") " pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.533595 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f396efd2-0a8e-44bb-98c8-ad10c3383cef-audit\") pod \"apiserver-76f77b778f-d7f7c\" (UID: \"f396efd2-0a8e-44bb-98c8-ad10c3383cef\") " pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.533608 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/c09a8f6b-1519-4cc8-a1e5-ef0261619f3e-srv-cert\") pod \"catalog-operator-68c6474976-hkqhz\" (UID: \"c09a8f6b-1519-4cc8-a1e5-ef0261619f3e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hkqhz" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.533623 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/373b7163-d058-419c-b4c5-b76a80f78dfa-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-bhz6x\" (UID: \"373b7163-d058-419c-b4c5-b76a80f78dfa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhz6x" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.533638 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/882fc762-16ff-41a8-917d-e6b327a4adb5-secret-volume\") pod \"collect-profiles-29398005-5x47l\" (UID: \"882fc762-16ff-41a8-917d-e6b327a4adb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-5x47l" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.533651 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c6f4ba4-aae8-4308-be38-b74b07116955-service-ca-bundle\") pod \"router-default-5444994796-b7ms9\" (UID: \"9c6f4ba4-aae8-4308-be38-b74b07116955\") " pod="openshift-ingress/router-default-5444994796-b7ms9" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.533729 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1946d763-61f9-468c-84d1-15f635ae5aa8-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-72qnq\" (UID: \"1946d763-61f9-468c-84d1-15f635ae5aa8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72qnq" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.533764 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j4lm\" (UniqueName: \"kubernetes.io/projected/a57b9495-9a8d-4ec8-8a4d-92220d911386-kube-api-access-7j4lm\") pod \"controller-manager-879f6c89f-nk54m\" (UID: \"a57b9495-9a8d-4ec8-8a4d-92220d911386\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nk54m" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.533781 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbrfr\" (UniqueName: \"kubernetes.io/projected/25f215c0-701b-4a75-9c19-6deeab862309-kube-api-access-rbrfr\") pod \"service-ca-9c57cc56f-lqxzb\" (UID: \"25f215c0-701b-4a75-9c19-6deeab862309\") " pod="openshift-service-ca/service-ca-9c57cc56f-lqxzb" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.533811 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7526x\" (UniqueName: \"kubernetes.io/projected/882fc762-16ff-41a8-917d-e6b327a4adb5-kube-api-access-7526x\") pod \"collect-profiles-29398005-5x47l\" (UID: \"882fc762-16ff-41a8-917d-e6b327a4adb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-5x47l" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.533839 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dae5706a-d59e-40ba-9546-7bed3f4f77aa-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-g5zj2\" (UID: \"dae5706a-d59e-40ba-9546-7bed3f4f77aa\") " pod="openshift-marketplace/marketplace-operator-79b997595-g5zj2" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.533861 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njgfc\" (UniqueName: \"kubernetes.io/projected/7d2b9e38-a7cf-43bb-aa89-861571046aee-kube-api-access-njgfc\") pod \"authentication-operator-69f744f599-pmxqk\" (UID: \"7d2b9e38-a7cf-43bb-aa89-861571046aee\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pmxqk" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.533882 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z2q2\" (UniqueName: \"kubernetes.io/projected/1946d763-61f9-468c-84d1-15f635ae5aa8-kube-api-access-5z2q2\") pod \"openshift-controller-manager-operator-756b6f6bc6-72qnq\" (UID: \"1946d763-61f9-468c-84d1-15f635ae5aa8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72qnq" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.533903 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2c3c50b-3800-4f8f-9b24-3063381cfd5e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gtltp\" (UID: \"d2c3c50b-3800-4f8f-9b24-3063381cfd5e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gtltp" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.533928 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd4e2b49-bdc7-425a-877f-74938cd8a472-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-mj9j9\" (UID: \"cd4e2b49-bdc7-425a-877f-74938cd8a472\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mj9j9" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.533946 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/25865701-6601-400a-8cca-606a3cabcc5d-proxy-tls\") pod \"machine-config-controller-84d6567774-bfkn6\" (UID: \"25865701-6601-400a-8cca-606a3cabcc5d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bfkn6" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.533968 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72c9ca30-e13b-48dd-9c5d-05e6dd4a3368-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-rxxxv\" (UID: \"72c9ca30-e13b-48dd-9c5d-05e6dd4a3368\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rxxxv" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.533982 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfb8w\" (UniqueName: \"kubernetes.io/projected/72c9ca30-e13b-48dd-9c5d-05e6dd4a3368-kube-api-access-zfb8w\") pod \"apiserver-7bbb656c7d-rxxxv\" (UID: \"72c9ca30-e13b-48dd-9c5d-05e6dd4a3368\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rxxxv" Nov 23 06:46:38 crc 
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.533998 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a57b9495-9a8d-4ec8-8a4d-92220d911386-serving-cert\") pod \"controller-manager-879f6c89f-nk54m\" (UID: \"a57b9495-9a8d-4ec8-8a4d-92220d911386\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nk54m"
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.534019 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znmpd\" (UniqueName: \"kubernetes.io/projected/9c6f4ba4-aae8-4308-be38-b74b07116955-kube-api-access-znmpd\") pod \"router-default-5444994796-b7ms9\" (UID: \"9c6f4ba4-aae8-4308-be38-b74b07116955\") " pod="openshift-ingress/router-default-5444994796-b7ms9"
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.534048 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7dd2\" (UniqueName: \"kubernetes.io/projected/edddb554-81cd-4f1f-ad25-21dc5d5a2c35-kube-api-access-l7dd2\") pod \"migrator-59844c95c7-ffckq\" (UID: \"edddb554-81cd-4f1f-ad25-21dc5d5a2c35\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ffckq"
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.534092 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f249707f-34f7-4964-9cd9-9c83df2f3056-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-b2dpx\" (UID: \"f249707f-34f7-4964-9cd9-9c83df2f3056\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b2dpx"
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.534112 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f396efd2-0a8e-44bb-98c8-ad10c3383cef-config\") pod \"apiserver-76f77b778f-d7f7c\" (UID: \"f396efd2-0a8e-44bb-98c8-ad10c3383cef\") " pod="openshift-apiserver/apiserver-76f77b778f-d7f7c"
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.534128 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/882fc762-16ff-41a8-917d-e6b327a4adb5-config-volume\") pod \"collect-profiles-29398005-5x47l\" (UID: \"882fc762-16ff-41a8-917d-e6b327a4adb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-5x47l"
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.534143 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9c6f4ba4-aae8-4308-be38-b74b07116955-default-certificate\") pod \"router-default-5444994796-b7ms9\" (UID: \"9c6f4ba4-aae8-4308-be38-b74b07116955\") " pod="openshift-ingress/router-default-5444994796-b7ms9"
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.534169 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x67vk\" (UniqueName: \"kubernetes.io/projected/25865701-6601-400a-8cca-606a3cabcc5d-kube-api-access-x67vk\") pod \"machine-config-controller-84d6567774-bfkn6\" (UID: \"25865701-6601-400a-8cca-606a3cabcc5d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bfkn6"
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.534185 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2fx7\" (UniqueName: \"kubernetes.io/projected/f396efd2-0a8e-44bb-98c8-ad10c3383cef-kube-api-access-s2fx7\") pod \"apiserver-76f77b778f-d7f7c\" (UID: \"f396efd2-0a8e-44bb-98c8-ad10c3383cef\") " pod="openshift-apiserver/apiserver-76f77b778f-d7f7c"
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.534198 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2c3c50b-3800-4f8f-9b24-3063381cfd5e-config\") pod \"kube-controller-manager-operator-78b949d7b-gtltp\" (UID: \"d2c3c50b-3800-4f8f-9b24-3063381cfd5e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gtltp"
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.534214 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvx9f\" (UniqueName: \"kubernetes.io/projected/373b7163-d058-419c-b4c5-b76a80f78dfa-kube-api-access-zvx9f\") pod \"cluster-image-registry-operator-dc59b4c8b-bhz6x\" (UID: \"373b7163-d058-419c-b4c5-b76a80f78dfa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhz6x"
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.534227 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9c6f4ba4-aae8-4308-be38-b74b07116955-stats-auth\") pod \"router-default-5444994796-b7ms9\" (UID: \"9c6f4ba4-aae8-4308-be38-b74b07116955\") " pod="openshift-ingress/router-default-5444994796-b7ms9"
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.534240 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltmt9\" (UniqueName: \"kubernetes.io/projected/74c2583d-61ac-4c6e-8cb5-11427314ecad-kube-api-access-ltmt9\") pod \"dns-operator-744455d44c-8mv9d\" (UID: \"74c2583d-61ac-4c6e-8cb5-11427314ecad\") " pod="openshift-dns-operator/dns-operator-744455d44c-8mv9d"
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.534268 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c0e3f5d0-037c-48b9-888f-375c10e5f269-console-config\") pod \"console-f9d7485db-59rqt\" (UID: \"c0e3f5d0-037c-48b9-888f-375c10e5f269\") " pod="openshift-console/console-f9d7485db-59rqt"
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.534283 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a73787c8-407a-4e02-8c50-7205b96c76b8-config\") pod \"machine-api-operator-5694c8668f-cxwjl\" (UID: \"a73787c8-407a-4e02-8c50-7205b96c76b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cxwjl"
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.534295 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1ed3437e-7360-4cc6-a4d5-b54d2f761945-images\") pod \"machine-config-operator-74547568cd-gk8jd\" (UID: \"1ed3437e-7360-4cc6-a4d5-b54d2f761945\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gk8jd"
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.534322 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/373b7163-d058-419c-b4c5-b76a80f78dfa-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-bhz6x\" (UID: \"373b7163-d058-419c-b4c5-b76a80f78dfa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhz6x"
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.534338 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dae5706a-d59e-40ba-9546-7bed3f4f77aa-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-g5zj2\" (UID: \"dae5706a-d59e-40ba-9546-7bed3f4f77aa\") " pod="openshift-marketplace/marketplace-operator-79b997595-g5zj2"
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.534350 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wznlb\" (UniqueName: \"kubernetes.io/projected/1ed3437e-7360-4cc6-a4d5-b54d2f761945-kube-api-access-wznlb\") pod \"machine-config-operator-74547568cd-gk8jd\" (UID: \"1ed3437e-7360-4cc6-a4d5-b54d2f761945\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gk8jd"
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.535394 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fdgfd"]
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.536522 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.536691 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhz6x"]
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.538781 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-c2pf5"]
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.548523 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ffckq"]
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.549527 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z5fk5"]
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.549778 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gtltp"]
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.550712 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c26v4"]
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.551531 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-hckp7"]
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.552610 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5kgmj"]
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.554624 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-gk8jd"]
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.555375 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.556176 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-7jdfn"]
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.556729 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g5zj2"]
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.557522 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398005-5x47l"]
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.558300 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsxts"]
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.559172 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-cdnsn"]
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.559749 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-cdnsn"
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.561948 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lqxzb"]
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.561975 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-z76mp"]
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.563265 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-ljsqd"]
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.563356 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-z76mp"
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.563361 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cn5t4"]
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.564314 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-z76mp"]
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.579408 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.588782 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-qmhqk"]
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.589370 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qmhqk"
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.595082 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.597782 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-qmhqk"]
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.615089 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.634679 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/25865701-6601-400a-8cca-606a3cabcc5d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bfkn6\" (UID: \"25865701-6601-400a-8cca-606a3cabcc5d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bfkn6"
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.634780 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76f32e91-6759-4608-9f24-88ed1d5d769e-config\") pod \"machine-approver-56656f9798-sqg25\" (UID: \"76f32e91-6759-4608-9f24-88ed1d5d769e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sqg25"
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.634850 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c0e3f5d0-037c-48b9-888f-375c10e5f269-console-oauth-config\") pod \"console-f9d7485db-59rqt\" (UID: \"c0e3f5d0-037c-48b9-888f-375c10e5f269\") " pod="openshift-console/console-f9d7485db-59rqt"
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.634924 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8450c87-7b9b-47cf-86ce-145ef517f494-client-ca\") pod \"route-controller-manager-6576b87f9c-9qp5r\" (UID: \"a8450c87-7b9b-47cf-86ce-145ef517f494\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9qp5r"
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.634984 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f396efd2-0a8e-44bb-98c8-ad10c3383cef-etcd-serving-ca\") pod \"apiserver-76f77b778f-d7f7c\" (UID: \"f396efd2-0a8e-44bb-98c8-ad10c3383cef\") " pod="openshift-apiserver/apiserver-76f77b778f-d7f7c"
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.635056 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vgjb\" (UniqueName: \"kubernetes.io/projected/28a78e7d-ae79-4791-aa1f-6398f611c561-kube-api-access-2vgjb\") pod \"openshift-config-operator-7777fb866f-42z7r\" (UID: \"28a78e7d-ae79-4791-aa1f-6398f611c561\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-42z7r"
Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.635117 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk2h8\" (UniqueName: \"kubernetes.io/projected/a8450c87-7b9b-47cf-86ce-145ef517f494-kube-api-access-dk2h8\") pod \"route-controller-manager-6576b87f9c-9qp5r\" (UID: \"a8450c87-7b9b-47cf-86ce-145ef517f494\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9qp5r"
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9qp5r" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.635180 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/862e3345-8b2c-4009-b50c-0fd6025ac9dc-config\") pod \"console-operator-58897d9998-nth4c\" (UID: \"862e3345-8b2c-4009-b50c-0fd6025ac9dc\") " pod="openshift-console-operator/console-operator-58897d9998-nth4c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.635333 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f396efd2-0a8e-44bb-98c8-ad10c3383cef-serving-cert\") pod \"apiserver-76f77b778f-d7f7c\" (UID: \"f396efd2-0a8e-44bb-98c8-ad10c3383cef\") " pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.635370 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st576\" (UniqueName: \"kubernetes.io/projected/cd4e2b49-bdc7-425a-877f-74938cd8a472-kube-api-access-st576\") pod \"openshift-apiserver-operator-796bbdcf4f-mj9j9\" (UID: \"cd4e2b49-bdc7-425a-877f-74938cd8a472\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mj9j9" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.635374 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/25865701-6601-400a-8cca-606a3cabcc5d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bfkn6\" (UID: \"25865701-6601-400a-8cca-606a3cabcc5d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bfkn6" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.635388 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f396efd2-0a8e-44bb-98c8-ad10c3383cef-etcd-client\") pod \"apiserver-76f77b778f-d7f7c\" (UID: \"f396efd2-0a8e-44bb-98c8-ad10c3383cef\") " pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.635405 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d6df87c-65e5-4899-ad0a-22e9818da7d6-cert\") pod \"ingress-canary-hckp7\" (UID: \"3d6df87c-65e5-4899-ad0a-22e9818da7d6\") " pod="openshift-ingress-canary/ingress-canary-hckp7" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.635276 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.635420 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d2c3c50b-3800-4f8f-9b24-3063381cfd5e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gtltp\" (UID: \"d2c3c50b-3800-4f8f-9b24-3063381cfd5e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gtltp" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.635668 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd4e2b49-bdc7-425a-877f-74938cd8a472-config\") pod \"openshift-apiserver-operator-796bbdcf4f-mj9j9\" (UID: \"cd4e2b49-bdc7-425a-877f-74938cd8a472\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mj9j9" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.635690 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d2b9e38-a7cf-43bb-aa89-861571046aee-serving-cert\") pod \"authentication-operator-69f744f599-pmxqk\" (UID: \"7d2b9e38-a7cf-43bb-aa89-861571046aee\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pmxqk" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.635707 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/72c9ca30-e13b-48dd-9c5d-05e6dd4a3368-encryption-config\") pod \"apiserver-7bbb656c7d-rxxxv\" (UID: \"72c9ca30-e13b-48dd-9c5d-05e6dd4a3368\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rxxxv" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.635731 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2lqn\" (UniqueName: \"kubernetes.io/projected/a73787c8-407a-4e02-8c50-7205b96c76b8-kube-api-access-k2lqn\") pod \"machine-api-operator-5694c8668f-cxwjl\" (UID: \"a73787c8-407a-4e02-8c50-7205b96c76b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cxwjl" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.635749 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/76f32e91-6759-4608-9f24-88ed1d5d769e-auth-proxy-config\") pod \"machine-approver-56656f9798-sqg25\" (UID: \"76f32e91-6759-4608-9f24-88ed1d5d769e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sqg25" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.635768 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgzc9\" (UniqueName: \"kubernetes.io/projected/e5135d02-57f8-48f3-96d3-af0fb70e8ac3-kube-api-access-zgzc9\") pod \"downloads-7954f5f757-qkccb\" (UID: \"e5135d02-57f8-48f3-96d3-af0fb70e8ac3\") " pod="openshift-console/downloads-7954f5f757-qkccb" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.635785 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/72c9ca30-e13b-48dd-9c5d-05e6dd4a3368-etcd-client\") pod \"apiserver-7bbb656c7d-rxxxv\" (UID: \"72c9ca30-e13b-48dd-9c5d-05e6dd4a3368\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rxxxv" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.635799 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/74c2583d-61ac-4c6e-8cb5-11427314ecad-metrics-tls\") pod \"dns-operator-744455d44c-8mv9d\" (UID: \"74c2583d-61ac-4c6e-8cb5-11427314ecad\") " pod="openshift-dns-operator/dns-operator-744455d44c-8mv9d" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.635818 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpj5j\" (UniqueName: \"kubernetes.io/projected/76f32e91-6759-4608-9f24-88ed1d5d769e-kube-api-access-zpj5j\") pod \"machine-approver-56656f9798-sqg25\" (UID: \"76f32e91-6759-4608-9f24-88ed1d5d769e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sqg25" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.635833 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/a57b9495-9a8d-4ec8-8a4d-92220d911386-client-ca\") pod \"controller-manager-879f6c89f-nk54m\" (UID: \"a57b9495-9a8d-4ec8-8a4d-92220d911386\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nk54m" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.635848 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25f215c0-701b-4a75-9c19-6deeab862309-signing-cabundle\") pod \"service-ca-9c57cc56f-lqxzb\" (UID: \"25f215c0-701b-4a75-9c19-6deeab862309\") " pod="openshift-service-ca/service-ca-9c57cc56f-lqxzb" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.635866 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/862e3345-8b2c-4009-b50c-0fd6025ac9dc-serving-cert\") pod \"console-operator-58897d9998-nth4c\" (UID: \"862e3345-8b2c-4009-b50c-0fd6025ac9dc\") " pod="openshift-console-operator/console-operator-58897d9998-nth4c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.635882 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1ed3437e-7360-4cc6-a4d5-b54d2f761945-proxy-tls\") pod \"machine-config-operator-74547568cd-gk8jd\" (UID: \"1ed3437e-7360-4cc6-a4d5-b54d2f761945\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gk8jd" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.635904 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/862e3345-8b2c-4009-b50c-0fd6025ac9dc-config\") pod \"console-operator-58897d9998-nth4c\" (UID: \"862e3345-8b2c-4009-b50c-0fd6025ac9dc\") " pod="openshift-console-operator/console-operator-58897d9998-nth4c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.635908 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0e3f5d0-037c-48b9-888f-375c10e5f269-trusted-ca-bundle\") pod \"console-f9d7485db-59rqt\" (UID: \"c0e3f5d0-037c-48b9-888f-375c10e5f269\") " pod="openshift-console/console-f9d7485db-59rqt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.635951 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f249707f-34f7-4964-9cd9-9c83df2f3056-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-b2dpx\" (UID: \"f249707f-34f7-4964-9cd9-9c83df2f3056\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b2dpx" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.635968 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f396efd2-0a8e-44bb-98c8-ad10c3383cef-image-import-ca\") pod \"apiserver-76f77b778f-d7f7c\" (UID: \"f396efd2-0a8e-44bb-98c8-ad10c3383cef\") " pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.635985 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72c9ca30-e13b-48dd-9c5d-05e6dd4a3368-serving-cert\") pod \"apiserver-7bbb656c7d-rxxxv\" (UID: \"72c9ca30-e13b-48dd-9c5d-05e6dd4a3368\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rxxxv" Nov 23 06:46:38 crc 
kubenswrapper[4681]: I1123 06:46:38.636001 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28a78e7d-ae79-4791-aa1f-6398f611c561-serving-cert\") pod \"openshift-config-operator-7777fb866f-42z7r\" (UID: \"28a78e7d-ae79-4791-aa1f-6398f611c561\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-42z7r" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636045 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f396efd2-0a8e-44bb-98c8-ad10c3383cef-node-pullsecrets\") pod \"apiserver-76f77b778f-d7f7c\" (UID: \"f396efd2-0a8e-44bb-98c8-ad10c3383cef\") " pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636059 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f396efd2-0a8e-44bb-98c8-ad10c3383cef-audit\") pod \"apiserver-76f77b778f-d7f7c\" (UID: \"f396efd2-0a8e-44bb-98c8-ad10c3383cef\") " pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636072 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c09a8f6b-1519-4cc8-a1e5-ef0261619f3e-srv-cert\") pod \"catalog-operator-68c6474976-hkqhz\" (UID: \"c09a8f6b-1519-4cc8-a1e5-ef0261619f3e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hkqhz" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636086 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/373b7163-d058-419c-b4c5-b76a80f78dfa-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-bhz6x\" (UID: \"373b7163-d058-419c-b4c5-b76a80f78dfa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhz6x" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636102 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/882fc762-16ff-41a8-917d-e6b327a4adb5-secret-volume\") pod \"collect-profiles-29398005-5x47l\" (UID: \"882fc762-16ff-41a8-917d-e6b327a4adb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-5x47l" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636116 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c6f4ba4-aae8-4308-be38-b74b07116955-service-ca-bundle\") pod \"router-default-5444994796-b7ms9\" (UID: \"9c6f4ba4-aae8-4308-be38-b74b07116955\") " pod="openshift-ingress/router-default-5444994796-b7ms9" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636132 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1946d763-61f9-468c-84d1-15f635ae5aa8-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-72qnq\" (UID: \"1946d763-61f9-468c-84d1-15f635ae5aa8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72qnq" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636154 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j4lm\" (UniqueName: 
\"kubernetes.io/projected/a57b9495-9a8d-4ec8-8a4d-92220d911386-kube-api-access-7j4lm\") pod \"controller-manager-879f6c89f-nk54m\" (UID: \"a57b9495-9a8d-4ec8-8a4d-92220d911386\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nk54m" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636170 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbrfr\" (UniqueName: \"kubernetes.io/projected/25f215c0-701b-4a75-9c19-6deeab862309-kube-api-access-rbrfr\") pod \"service-ca-9c57cc56f-lqxzb\" (UID: \"25f215c0-701b-4a75-9c19-6deeab862309\") " pod="openshift-service-ca/service-ca-9c57cc56f-lqxzb" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636186 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7526x\" (UniqueName: \"kubernetes.io/projected/882fc762-16ff-41a8-917d-e6b327a4adb5-kube-api-access-7526x\") pod \"collect-profiles-29398005-5x47l\" (UID: \"882fc762-16ff-41a8-917d-e6b327a4adb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-5x47l" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636202 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dae5706a-d59e-40ba-9546-7bed3f4f77aa-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-g5zj2\" (UID: \"dae5706a-d59e-40ba-9546-7bed3f4f77aa\") " pod="openshift-marketplace/marketplace-operator-79b997595-g5zj2" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636216 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z2q2\" (UniqueName: \"kubernetes.io/projected/1946d763-61f9-468c-84d1-15f635ae5aa8-kube-api-access-5z2q2\") pod \"openshift-controller-manager-operator-756b6f6bc6-72qnq\" (UID: \"1946d763-61f9-468c-84d1-15f635ae5aa8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72qnq" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636230 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2c3c50b-3800-4f8f-9b24-3063381cfd5e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gtltp\" (UID: \"d2c3c50b-3800-4f8f-9b24-3063381cfd5e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gtltp" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636249 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njgfc\" (UniqueName: \"kubernetes.io/projected/7d2b9e38-a7cf-43bb-aa89-861571046aee-kube-api-access-njgfc\") pod \"authentication-operator-69f744f599-pmxqk\" (UID: \"7d2b9e38-a7cf-43bb-aa89-861571046aee\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pmxqk" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636264 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd4e2b49-bdc7-425a-877f-74938cd8a472-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-mj9j9\" (UID: \"cd4e2b49-bdc7-425a-877f-74938cd8a472\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mj9j9" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636278 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/25865701-6601-400a-8cca-606a3cabcc5d-proxy-tls\") pod \"machine-config-controller-84d6567774-bfkn6\" (UID: \"25865701-6601-400a-8cca-606a3cabcc5d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bfkn6" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636300 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72c9ca30-e13b-48dd-9c5d-05e6dd4a3368-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-rxxxv\" (UID: \"72c9ca30-e13b-48dd-9c5d-05e6dd4a3368\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rxxxv" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636320 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfb8w\" (UniqueName: \"kubernetes.io/projected/72c9ca30-e13b-48dd-9c5d-05e6dd4a3368-kube-api-access-zfb8w\") pod \"apiserver-7bbb656c7d-rxxxv\" (UID: \"72c9ca30-e13b-48dd-9c5d-05e6dd4a3368\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rxxxv" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636337 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a57b9495-9a8d-4ec8-8a4d-92220d911386-serving-cert\") pod \"controller-manager-879f6c89f-nk54m\" (UID: \"a57b9495-9a8d-4ec8-8a4d-92220d911386\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nk54m" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636350 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znmpd\" (UniqueName: \"kubernetes.io/projected/9c6f4ba4-aae8-4308-be38-b74b07116955-kube-api-access-znmpd\") pod \"router-default-5444994796-b7ms9\" (UID: \"9c6f4ba4-aae8-4308-be38-b74b07116955\") " pod="openshift-ingress/router-default-5444994796-b7ms9" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636367 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7dd2\" (UniqueName: \"kubernetes.io/projected/edddb554-81cd-4f1f-ad25-21dc5d5a2c35-kube-api-access-l7dd2\") pod \"migrator-59844c95c7-ffckq\" (UID: \"edddb554-81cd-4f1f-ad25-21dc5d5a2c35\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ffckq" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636382 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f249707f-34f7-4964-9cd9-9c83df2f3056-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-b2dpx\" (UID: \"f249707f-34f7-4964-9cd9-9c83df2f3056\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b2dpx" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636396 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f396efd2-0a8e-44bb-98c8-ad10c3383cef-config\") pod \"apiserver-76f77b778f-d7f7c\" (UID: \"f396efd2-0a8e-44bb-98c8-ad10c3383cef\") " pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636409 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/882fc762-16ff-41a8-917d-e6b327a4adb5-config-volume\") pod \"collect-profiles-29398005-5x47l\" (UID: \"882fc762-16ff-41a8-917d-e6b327a4adb5\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-5x47l" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636422 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9c6f4ba4-aae8-4308-be38-b74b07116955-default-certificate\") pod \"router-default-5444994796-b7ms9\" (UID: \"9c6f4ba4-aae8-4308-be38-b74b07116955\") " pod="openshift-ingress/router-default-5444994796-b7ms9" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636437 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x67vk\" (UniqueName: \"kubernetes.io/projected/25865701-6601-400a-8cca-606a3cabcc5d-kube-api-access-x67vk\") pod \"machine-config-controller-84d6567774-bfkn6\" (UID: \"25865701-6601-400a-8cca-606a3cabcc5d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bfkn6" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636451 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2fx7\" (UniqueName: \"kubernetes.io/projected/f396efd2-0a8e-44bb-98c8-ad10c3383cef-kube-api-access-s2fx7\") pod \"apiserver-76f77b778f-d7f7c\" (UID: \"f396efd2-0a8e-44bb-98c8-ad10c3383cef\") " pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636477 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2c3c50b-3800-4f8f-9b24-3063381cfd5e-config\") pod \"kube-controller-manager-operator-78b949d7b-gtltp\" (UID: \"d2c3c50b-3800-4f8f-9b24-3063381cfd5e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gtltp" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636492 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvx9f\" (UniqueName: \"kubernetes.io/projected/373b7163-d058-419c-b4c5-b76a80f78dfa-kube-api-access-zvx9f\") pod \"cluster-image-registry-operator-dc59b4c8b-bhz6x\" (UID: \"373b7163-d058-419c-b4c5-b76a80f78dfa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhz6x" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636507 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9c6f4ba4-aae8-4308-be38-b74b07116955-stats-auth\") pod \"router-default-5444994796-b7ms9\" (UID: \"9c6f4ba4-aae8-4308-be38-b74b07116955\") " pod="openshift-ingress/router-default-5444994796-b7ms9" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636520 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltmt9\" (UniqueName: \"kubernetes.io/projected/74c2583d-61ac-4c6e-8cb5-11427314ecad-kube-api-access-ltmt9\") pod \"dns-operator-744455d44c-8mv9d\" (UID: \"74c2583d-61ac-4c6e-8cb5-11427314ecad\") " pod="openshift-dns-operator/dns-operator-744455d44c-8mv9d" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636534 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a73787c8-407a-4e02-8c50-7205b96c76b8-config\") pod \"machine-api-operator-5694c8668f-cxwjl\" (UID: \"a73787c8-407a-4e02-8c50-7205b96c76b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cxwjl" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636547 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1ed3437e-7360-4cc6-a4d5-b54d2f761945-images\") pod \"machine-config-operator-74547568cd-gk8jd\" (UID: \"1ed3437e-7360-4cc6-a4d5-b54d2f761945\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gk8jd" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636561 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c0e3f5d0-037c-48b9-888f-375c10e5f269-console-config\") pod \"console-f9d7485db-59rqt\" (UID: \"c0e3f5d0-037c-48b9-888f-375c10e5f269\") " pod="openshift-console/console-f9d7485db-59rqt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636577 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/373b7163-d058-419c-b4c5-b76a80f78dfa-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-bhz6x\" (UID: \"373b7163-d058-419c-b4c5-b76a80f78dfa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhz6x" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636593 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dae5706a-d59e-40ba-9546-7bed3f4f77aa-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-g5zj2\" (UID: \"dae5706a-d59e-40ba-9546-7bed3f4f77aa\") " pod="openshift-marketplace/marketplace-operator-79b997595-g5zj2" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636608 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wznlb\" (UniqueName: \"kubernetes.io/projected/1ed3437e-7360-4cc6-a4d5-b54d2f761945-kube-api-access-wznlb\") pod \"machine-config-operator-74547568cd-gk8jd\" (UID: \"1ed3437e-7360-4cc6-a4d5-b54d2f761945\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gk8jd" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636624 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c0e3f5d0-037c-48b9-888f-375c10e5f269-service-ca\") pod \"console-f9d7485db-59rqt\" (UID: \"c0e3f5d0-037c-48b9-888f-375c10e5f269\") " pod="openshift-console/console-f9d7485db-59rqt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636638 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d2b9e38-a7cf-43bb-aa89-861571046aee-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pmxqk\" (UID: \"7d2b9e38-a7cf-43bb-aa89-861571046aee\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pmxqk" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636652 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/72c9ca30-e13b-48dd-9c5d-05e6dd4a3368-audit-policies\") pod \"apiserver-7bbb656c7d-rxxxv\" (UID: \"72c9ca30-e13b-48dd-9c5d-05e6dd4a3368\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rxxxv" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636667 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/76f32e91-6759-4608-9f24-88ed1d5d769e-machine-approver-tls\") pod \"machine-approver-56656f9798-sqg25\" (UID: \"76f32e91-6759-4608-9f24-88ed1d5d769e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sqg25" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636682 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c09a8f6b-1519-4cc8-a1e5-ef0261619f3e-profile-collector-cert\") pod \"catalog-operator-68c6474976-hkqhz\" (UID: \"c09a8f6b-1519-4cc8-a1e5-ef0261619f3e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hkqhz" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636694 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f396efd2-0a8e-44bb-98c8-ad10c3383cef-etcd-serving-ca\") pod \"apiserver-76f77b778f-d7f7c\" (UID: \"f396efd2-0a8e-44bb-98c8-ad10c3383cef\") " pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636697 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c0e3f5d0-037c-48b9-888f-375c10e5f269-console-serving-cert\") pod \"console-f9d7485db-59rqt\" (UID: \"c0e3f5d0-037c-48b9-888f-375c10e5f269\") " pod="openshift-console/console-f9d7485db-59rqt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636740 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8450c87-7b9b-47cf-86ce-145ef517f494-serving-cert\") pod \"route-controller-manager-6576b87f9c-9qp5r\" (UID: \"a8450c87-7b9b-47cf-86ce-145ef517f494\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9qp5r" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636759 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f396efd2-0a8e-44bb-98c8-ad10c3383cef-trusted-ca-bundle\") pod \"apiserver-76f77b778f-d7f7c\" (UID: \"f396efd2-0a8e-44bb-98c8-ad10c3383cef\") " pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636777 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a73787c8-407a-4e02-8c50-7205b96c76b8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-cxwjl\" (UID: \"a73787c8-407a-4e02-8c50-7205b96c76b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cxwjl" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636794 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a57b9495-9a8d-4ec8-8a4d-92220d911386-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nk54m\" (UID: \"a57b9495-9a8d-4ec8-8a4d-92220d911386\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nk54m" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636810 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrshg\" (UniqueName: \"kubernetes.io/projected/2fcb132e-fadc-4c84-a103-2e821e006bfa-kube-api-access-mrshg\") pod \"cluster-samples-operator-665b6dd947-gmtff\" (UID: \"2fcb132e-fadc-4c84-a103-2e821e006bfa\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gmtff" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636827 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d2b9e38-a7cf-43bb-aa89-861571046aee-service-ca-bundle\") pod \"authentication-operator-69f744f599-pmxqk\" (UID: \"7d2b9e38-a7cf-43bb-aa89-861571046aee\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pmxqk" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636843 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1ed3437e-7360-4cc6-a4d5-b54d2f761945-auth-proxy-config\") pod \"machine-config-operator-74547568cd-gk8jd\" (UID: \"1ed3437e-7360-4cc6-a4d5-b54d2f761945\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gk8jd" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636859 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/72c9ca30-e13b-48dd-9c5d-05e6dd4a3368-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-rxxxv\" (UID: \"72c9ca30-e13b-48dd-9c5d-05e6dd4a3368\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rxxxv" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636875 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a73787c8-407a-4e02-8c50-7205b96c76b8-images\") pod \"machine-api-operator-5694c8668f-cxwjl\" (UID: \"a73787c8-407a-4e02-8c50-7205b96c76b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cxwjl" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636889 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a57b9495-9a8d-4ec8-8a4d-92220d911386-config\") pod \"controller-manager-879f6c89f-nk54m\" (UID: \"a57b9495-9a8d-4ec8-8a4d-92220d911386\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nk54m" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636904 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f396efd2-0a8e-44bb-98c8-ad10c3383cef-audit-dir\") pod \"apiserver-76f77b778f-d7f7c\" (UID: \"f396efd2-0a8e-44bb-98c8-ad10c3383cef\") " pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636918 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f249707f-34f7-4964-9cd9-9c83df2f3056-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-b2dpx\" (UID: \"f249707f-34f7-4964-9cd9-9c83df2f3056\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b2dpx" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636933 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8450c87-7b9b-47cf-86ce-145ef517f494-config\") pod \"route-controller-manager-6576b87f9c-9qp5r\" (UID: \"a8450c87-7b9b-47cf-86ce-145ef517f494\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9qp5r" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636949 4681 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/862e3345-8b2c-4009-b50c-0fd6025ac9dc-trusted-ca\") pod \"console-operator-58897d9998-nth4c\" (UID: \"862e3345-8b2c-4009-b50c-0fd6025ac9dc\") " pod="openshift-console-operator/console-operator-58897d9998-nth4c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636963 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/72c9ca30-e13b-48dd-9c5d-05e6dd4a3368-audit-dir\") pod \"apiserver-7bbb656c7d-rxxxv\" (UID: \"72c9ca30-e13b-48dd-9c5d-05e6dd4a3368\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rxxxv" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.636978 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx44d\" (UniqueName: \"kubernetes.io/projected/c09a8f6b-1519-4cc8-a1e5-ef0261619f3e-kube-api-access-rx44d\") pod \"catalog-operator-68c6474976-hkqhz\" (UID: \"c09a8f6b-1519-4cc8-a1e5-ef0261619f3e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hkqhz" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.637003 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmnt9\" (UniqueName: \"kubernetes.io/projected/c0e3f5d0-037c-48b9-888f-375c10e5f269-kube-api-access-hmnt9\") pod \"console-f9d7485db-59rqt\" (UID: \"c0e3f5d0-037c-48b9-888f-375c10e5f269\") " pod="openshift-console/console-f9d7485db-59rqt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.637032 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/373b7163-d058-419c-b4c5-b76a80f78dfa-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-bhz6x\" (UID: \"373b7163-d058-419c-b4c5-b76a80f78dfa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhz6x" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.637050 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d2b9e38-a7cf-43bb-aa89-861571046aee-config\") pod \"authentication-operator-69f744f599-pmxqk\" (UID: \"7d2b9e38-a7cf-43bb-aa89-861571046aee\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pmxqk" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.637064 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2fcb132e-fadc-4c84-a103-2e821e006bfa-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gmtff\" (UID: \"2fcb132e-fadc-4c84-a103-2e821e006bfa\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gmtff" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.637079 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzxtt\" (UniqueName: \"kubernetes.io/projected/dae5706a-d59e-40ba-9546-7bed3f4f77aa-kube-api-access-tzxtt\") pod \"marketplace-operator-79b997595-g5zj2\" (UID: \"dae5706a-d59e-40ba-9546-7bed3f4f77aa\") " pod="openshift-marketplace/marketplace-operator-79b997595-g5zj2" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.637112 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c6f4ba4-aae8-4308-be38-b74b07116955-metrics-certs\") pod 
\"router-default-5444994796-b7ms9\" (UID: \"9c6f4ba4-aae8-4308-be38-b74b07116955\") " pod="openshift-ingress/router-default-5444994796-b7ms9" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.637128 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c0e3f5d0-037c-48b9-888f-375c10e5f269-oauth-serving-cert\") pod \"console-f9d7485db-59rqt\" (UID: \"c0e3f5d0-037c-48b9-888f-375c10e5f269\") " pod="openshift-console/console-f9d7485db-59rqt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.637142 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrlxf\" (UniqueName: \"kubernetes.io/projected/3d6df87c-65e5-4899-ad0a-22e9818da7d6-kube-api-access-rrlxf\") pod \"ingress-canary-hckp7\" (UID: \"3d6df87c-65e5-4899-ad0a-22e9818da7d6\") " pod="openshift-ingress-canary/ingress-canary-hckp7" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.637157 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25f215c0-701b-4a75-9c19-6deeab862309-signing-key\") pod \"service-ca-9c57cc56f-lqxzb\" (UID: \"25f215c0-701b-4a75-9c19-6deeab862309\") " pod="openshift-service-ca/service-ca-9c57cc56f-lqxzb" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.637172 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx7wz\" (UniqueName: \"kubernetes.io/projected/862e3345-8b2c-4009-b50c-0fd6025ac9dc-kube-api-access-xx7wz\") pod \"console-operator-58897d9998-nth4c\" (UID: \"862e3345-8b2c-4009-b50c-0fd6025ac9dc\") " pod="openshift-console-operator/console-operator-58897d9998-nth4c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.637187 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f396efd2-0a8e-44bb-98c8-ad10c3383cef-encryption-config\") pod \"apiserver-76f77b778f-d7f7c\" (UID: \"f396efd2-0a8e-44bb-98c8-ad10c3383cef\") " pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.637202 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/28a78e7d-ae79-4791-aa1f-6398f611c561-available-featuregates\") pod \"openshift-config-operator-7777fb866f-42z7r\" (UID: \"28a78e7d-ae79-4791-aa1f-6398f611c561\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-42z7r" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.637217 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1946d763-61f9-468c-84d1-15f635ae5aa8-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-72qnq\" (UID: \"1946d763-61f9-468c-84d1-15f635ae5aa8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72qnq" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.637231 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0e3f5d0-037c-48b9-888f-375c10e5f269-trusted-ca-bundle\") pod \"console-f9d7485db-59rqt\" (UID: \"c0e3f5d0-037c-48b9-888f-375c10e5f269\") " pod="openshift-console/console-f9d7485db-59rqt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.637506 4681 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8450c87-7b9b-47cf-86ce-145ef517f494-client-ca\") pod \"route-controller-manager-6576b87f9c-9qp5r\" (UID: \"a8450c87-7b9b-47cf-86ce-145ef517f494\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9qp5r" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.637579 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd4e2b49-bdc7-425a-877f-74938cd8a472-config\") pod \"openshift-apiserver-operator-796bbdcf4f-mj9j9\" (UID: \"cd4e2b49-bdc7-425a-877f-74938cd8a472\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mj9j9" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.638137 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a57b9495-9a8d-4ec8-8a4d-92220d911386-client-ca\") pod \"controller-manager-879f6c89f-nk54m\" (UID: \"a57b9495-9a8d-4ec8-8a4d-92220d911386\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nk54m" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.638729 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f249707f-34f7-4964-9cd9-9c83df2f3056-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-b2dpx\" (UID: \"f249707f-34f7-4964-9cd9-9c83df2f3056\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b2dpx" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.639517 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f396efd2-0a8e-44bb-98c8-ad10c3383cef-etcd-client\") pod \"apiserver-76f77b778f-d7f7c\" (UID: \"f396efd2-0a8e-44bb-98c8-ad10c3383cef\") " pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.639728 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72c9ca30-e13b-48dd-9c5d-05e6dd4a3368-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-rxxxv\" (UID: \"72c9ca30-e13b-48dd-9c5d-05e6dd4a3368\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rxxxv" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.639896 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c0e3f5d0-037c-48b9-888f-375c10e5f269-console-oauth-config\") pod \"console-f9d7485db-59rqt\" (UID: \"c0e3f5d0-037c-48b9-888f-375c10e5f269\") " pod="openshift-console/console-f9d7485db-59rqt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.640330 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/72c9ca30-e13b-48dd-9c5d-05e6dd4a3368-audit-dir\") pod \"apiserver-7bbb656c7d-rxxxv\" (UID: \"72c9ca30-e13b-48dd-9c5d-05e6dd4a3368\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rxxxv" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.641248 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f396efd2-0a8e-44bb-98c8-ad10c3383cef-trusted-ca-bundle\") pod \"apiserver-76f77b778f-d7f7c\" (UID: \"f396efd2-0a8e-44bb-98c8-ad10c3383cef\") " pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" Nov 23 06:46:38 crc 
kubenswrapper[4681]: I1123 06:46:38.641775 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c0e3f5d0-037c-48b9-888f-375c10e5f269-console-config\") pod \"console-f9d7485db-59rqt\" (UID: \"c0e3f5d0-037c-48b9-888f-375c10e5f269\") " pod="openshift-console/console-f9d7485db-59rqt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.642020 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a73787c8-407a-4e02-8c50-7205b96c76b8-config\") pod \"machine-api-operator-5694c8668f-cxwjl\" (UID: \"a73787c8-407a-4e02-8c50-7205b96c76b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cxwjl" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.642599 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f396efd2-0a8e-44bb-98c8-ad10c3383cef-config\") pod \"apiserver-76f77b778f-d7f7c\" (UID: \"f396efd2-0a8e-44bb-98c8-ad10c3383cef\") " pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.642716 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f396efd2-0a8e-44bb-98c8-ad10c3383cef-node-pullsecrets\") pod \"apiserver-76f77b778f-d7f7c\" (UID: \"f396efd2-0a8e-44bb-98c8-ad10c3383cef\") " pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.642905 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d2b9e38-a7cf-43bb-aa89-861571046aee-config\") pod \"authentication-operator-69f744f599-pmxqk\" (UID: \"7d2b9e38-a7cf-43bb-aa89-861571046aee\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pmxqk" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.643049 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/373b7163-d058-419c-b4c5-b76a80f78dfa-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-bhz6x\" (UID: \"373b7163-d058-419c-b4c5-b76a80f78dfa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhz6x" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.643073 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd4e2b49-bdc7-425a-877f-74938cd8a472-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-mj9j9\" (UID: \"cd4e2b49-bdc7-425a-877f-74938cd8a472\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mj9j9" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.643832 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/72c9ca30-e13b-48dd-9c5d-05e6dd4a3368-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-rxxxv\" (UID: \"72c9ca30-e13b-48dd-9c5d-05e6dd4a3368\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rxxxv" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.643864 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/76f32e91-6759-4608-9f24-88ed1d5d769e-auth-proxy-config\") pod \"machine-approver-56656f9798-sqg25\" (UID: \"76f32e91-6759-4608-9f24-88ed1d5d769e\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sqg25" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.643990 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f396efd2-0a8e-44bb-98c8-ad10c3383cef-serving-cert\") pod \"apiserver-76f77b778f-d7f7c\" (UID: \"f396efd2-0a8e-44bb-98c8-ad10c3383cef\") " pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.644303 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f396efd2-0a8e-44bb-98c8-ad10c3383cef-audit\") pod \"apiserver-76f77b778f-d7f7c\" (UID: \"f396efd2-0a8e-44bb-98c8-ad10c3383cef\") " pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.644444 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f249707f-34f7-4964-9cd9-9c83df2f3056-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-b2dpx\" (UID: \"f249707f-34f7-4964-9cd9-9c83df2f3056\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b2dpx" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.644450 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/72c9ca30-e13b-48dd-9c5d-05e6dd4a3368-audit-policies\") pod \"apiserver-7bbb656c7d-rxxxv\" (UID: \"72c9ca30-e13b-48dd-9c5d-05e6dd4a3368\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rxxxv" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.644547 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1946d763-61f9-468c-84d1-15f635ae5aa8-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-72qnq\" (UID: \"1946d763-61f9-468c-84d1-15f635ae5aa8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72qnq" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.644591 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72c9ca30-e13b-48dd-9c5d-05e6dd4a3368-serving-cert\") pod \"apiserver-7bbb656c7d-rxxxv\" (UID: \"72c9ca30-e13b-48dd-9c5d-05e6dd4a3368\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rxxxv" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.644615 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f396efd2-0a8e-44bb-98c8-ad10c3383cef-image-import-ca\") pod \"apiserver-76f77b778f-d7f7c\" (UID: \"f396efd2-0a8e-44bb-98c8-ad10c3383cef\") " pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.644973 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c0e3f5d0-037c-48b9-888f-375c10e5f269-service-ca\") pod \"console-f9d7485db-59rqt\" (UID: \"c0e3f5d0-037c-48b9-888f-375c10e5f269\") " pod="openshift-console/console-f9d7485db-59rqt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.645302 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a57b9495-9a8d-4ec8-8a4d-92220d911386-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nk54m\" (UID: 
\"a57b9495-9a8d-4ec8-8a4d-92220d911386\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nk54m" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.645542 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d2b9e38-a7cf-43bb-aa89-861571046aee-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pmxqk\" (UID: \"7d2b9e38-a7cf-43bb-aa89-861571046aee\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pmxqk" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.645771 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d2b9e38-a7cf-43bb-aa89-861571046aee-service-ca-bundle\") pod \"authentication-operator-69f744f599-pmxqk\" (UID: \"7d2b9e38-a7cf-43bb-aa89-861571046aee\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pmxqk" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.646171 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a73787c8-407a-4e02-8c50-7205b96c76b8-images\") pod \"machine-api-operator-5694c8668f-cxwjl\" (UID: \"a73787c8-407a-4e02-8c50-7205b96c76b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cxwjl" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.646495 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a73787c8-407a-4e02-8c50-7205b96c76b8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-cxwjl\" (UID: \"a73787c8-407a-4e02-8c50-7205b96c76b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cxwjl" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.646550 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a57b9495-9a8d-4ec8-8a4d-92220d911386-config\") pod \"controller-manager-879f6c89f-nk54m\" (UID: \"a57b9495-9a8d-4ec8-8a4d-92220d911386\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nk54m" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.646592 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f396efd2-0a8e-44bb-98c8-ad10c3383cef-audit-dir\") pod \"apiserver-76f77b778f-d7f7c\" (UID: \"f396efd2-0a8e-44bb-98c8-ad10c3383cef\") " pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.646620 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1946d763-61f9-468c-84d1-15f635ae5aa8-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-72qnq\" (UID: \"1946d763-61f9-468c-84d1-15f635ae5aa8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72qnq" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.646809 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76f32e91-6759-4608-9f24-88ed1d5d769e-config\") pod \"machine-approver-56656f9798-sqg25\" (UID: \"76f32e91-6759-4608-9f24-88ed1d5d769e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sqg25" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.646817 4681 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1ed3437e-7360-4cc6-a4d5-b54d2f761945-auth-proxy-config\") pod \"machine-config-operator-74547568cd-gk8jd\" (UID: \"1ed3437e-7360-4cc6-a4d5-b54d2f761945\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gk8jd" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.647184 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c0e3f5d0-037c-48b9-888f-375c10e5f269-oauth-serving-cert\") pod \"console-f9d7485db-59rqt\" (UID: \"c0e3f5d0-037c-48b9-888f-375c10e5f269\") " pod="openshift-console/console-f9d7485db-59rqt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.647214 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/28a78e7d-ae79-4791-aa1f-6398f611c561-available-featuregates\") pod \"openshift-config-operator-7777fb866f-42z7r\" (UID: \"28a78e7d-ae79-4791-aa1f-6398f611c561\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-42z7r" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.647324 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/25865701-6601-400a-8cca-606a3cabcc5d-proxy-tls\") pod \"machine-config-controller-84d6567774-bfkn6\" (UID: \"25865701-6601-400a-8cca-606a3cabcc5d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bfkn6" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.647483 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8450c87-7b9b-47cf-86ce-145ef517f494-config\") pod \"route-controller-manager-6576b87f9c-9qp5r\" (UID: \"a8450c87-7b9b-47cf-86ce-145ef517f494\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9qp5r" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.647686 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2fcb132e-fadc-4c84-a103-2e821e006bfa-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gmtff\" (UID: \"2fcb132e-fadc-4c84-a103-2e821e006bfa\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gmtff" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.647847 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/862e3345-8b2c-4009-b50c-0fd6025ac9dc-trusted-ca\") pod \"console-operator-58897d9998-nth4c\" (UID: \"862e3345-8b2c-4009-b50c-0fd6025ac9dc\") " pod="openshift-console-operator/console-operator-58897d9998-nth4c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.648104 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/72c9ca30-e13b-48dd-9c5d-05e6dd4a3368-encryption-config\") pod \"apiserver-7bbb656c7d-rxxxv\" (UID: \"72c9ca30-e13b-48dd-9c5d-05e6dd4a3368\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rxxxv" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.648262 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/862e3345-8b2c-4009-b50c-0fd6025ac9dc-serving-cert\") pod \"console-operator-58897d9998-nth4c\" (UID: \"862e3345-8b2c-4009-b50c-0fd6025ac9dc\") " 
pod="openshift-console-operator/console-operator-58897d9998-nth4c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.648263 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c0e3f5d0-037c-48b9-888f-375c10e5f269-console-serving-cert\") pod \"console-f9d7485db-59rqt\" (UID: \"c0e3f5d0-037c-48b9-888f-375c10e5f269\") " pod="openshift-console/console-f9d7485db-59rqt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.648598 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d2b9e38-a7cf-43bb-aa89-861571046aee-serving-cert\") pod \"authentication-operator-69f744f599-pmxqk\" (UID: \"7d2b9e38-a7cf-43bb-aa89-861571046aee\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pmxqk" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.648627 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/72c9ca30-e13b-48dd-9c5d-05e6dd4a3368-etcd-client\") pod \"apiserver-7bbb656c7d-rxxxv\" (UID: \"72c9ca30-e13b-48dd-9c5d-05e6dd4a3368\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rxxxv" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.648975 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28a78e7d-ae79-4791-aa1f-6398f611c561-serving-cert\") pod \"openshift-config-operator-7777fb866f-42z7r\" (UID: \"28a78e7d-ae79-4791-aa1f-6398f611c561\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-42z7r" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.649208 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f396efd2-0a8e-44bb-98c8-ad10c3383cef-encryption-config\") pod \"apiserver-76f77b778f-d7f7c\" (UID: \"f396efd2-0a8e-44bb-98c8-ad10c3383cef\") " pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.650184 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/76f32e91-6759-4608-9f24-88ed1d5d769e-machine-approver-tls\") pod \"machine-approver-56656f9798-sqg25\" (UID: \"76f32e91-6759-4608-9f24-88ed1d5d769e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sqg25" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.651755 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8450c87-7b9b-47cf-86ce-145ef517f494-serving-cert\") pod \"route-controller-manager-6576b87f9c-9qp5r\" (UID: \"a8450c87-7b9b-47cf-86ce-145ef517f494\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9qp5r" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.651927 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a57b9495-9a8d-4ec8-8a4d-92220d911386-serving-cert\") pod \"controller-manager-879f6c89f-nk54m\" (UID: \"a57b9495-9a8d-4ec8-8a4d-92220d911386\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nk54m" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.655654 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 23 06:46:38 crc 
kubenswrapper[4681]: I1123 06:46:38.675270 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.699719 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.716057 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.721586 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2c3c50b-3800-4f8f-9b24-3063381cfd5e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gtltp\" (UID: \"d2c3c50b-3800-4f8f-9b24-3063381cfd5e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gtltp" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.735121 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.743422 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2c3c50b-3800-4f8f-9b24-3063381cfd5e-config\") pod \"kube-controller-manager-operator-78b949d7b-gtltp\" (UID: \"d2c3c50b-3800-4f8f-9b24-3063381cfd5e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gtltp" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.755129 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.775679 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.779298 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c09a8f6b-1519-4cc8-a1e5-ef0261619f3e-profile-collector-cert\") pod \"catalog-operator-68c6474976-hkqhz\" (UID: \"c09a8f6b-1519-4cc8-a1e5-ef0261619f3e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hkqhz" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.787253 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/882fc762-16ff-41a8-917d-e6b327a4adb5-secret-volume\") pod \"collect-profiles-29398005-5x47l\" (UID: \"882fc762-16ff-41a8-917d-e6b327a4adb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-5x47l" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.795358 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.815382 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.835483 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 23 
06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.844821 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c09a8f6b-1519-4cc8-a1e5-ef0261619f3e-srv-cert\") pod \"catalog-operator-68c6474976-hkqhz\" (UID: \"c09a8f6b-1519-4cc8-a1e5-ef0261619f3e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hkqhz" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.855261 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.875632 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.894995 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.915867 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.935597 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.954965 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.974978 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 23 06:46:38 crc kubenswrapper[4681]: I1123 06:46:38.995624 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.015784 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.034867 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.042016 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1ed3437e-7360-4cc6-a4d5-b54d2f761945-images\") pod \"machine-config-operator-74547568cd-gk8jd\" (UID: \"1ed3437e-7360-4cc6-a4d5-b54d2f761945\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gk8jd" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.055638 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.075115 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.080190 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1ed3437e-7360-4cc6-a4d5-b54d2f761945-proxy-tls\") pod \"machine-config-operator-74547568cd-gk8jd\" (UID: \"1ed3437e-7360-4cc6-a4d5-b54d2f761945\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gk8jd" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.094982 4681 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.115667 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.134873 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.155704 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.175073 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.195821 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.215170 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.235705 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.250865 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.256002 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.275309 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.295780 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.315786 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.335000 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.355897 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.374890 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.394932 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.418694 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.435959 4681 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.454200 4681 request.go:700] Waited for 1.002343756s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-operator-dockercfg-2bh8d&limit=500&resourceVersion=0 Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.455016 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.475361 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.495222 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.514956 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.535096 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.555119 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.560441 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/74c2583d-61ac-4c6e-8cb5-11427314ecad-metrics-tls\") pod \"dns-operator-744455d44c-8mv9d\" (UID: \"74c2583d-61ac-4c6e-8cb5-11427314ecad\") " pod="openshift-dns-operator/dns-operator-744455d44c-8mv9d" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.575411 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.595592 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.615805 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.634866 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 23 06:46:39 crc kubenswrapper[4681]: E1123 06:46:39.636627 4681 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Nov 23 06:46:39 crc kubenswrapper[4681]: E1123 06:46:39.636682 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d6df87c-65e5-4899-ad0a-22e9818da7d6-cert podName:3d6df87c-65e5-4899-ad0a-22e9818da7d6 nodeName:}" failed. No retries permitted until 2025-11-23 06:46:40.136660635 +0000 UTC m=+137.206169872 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3d6df87c-65e5-4899-ad0a-22e9818da7d6-cert") pod "ingress-canary-hckp7" (UID: "3d6df87c-65e5-4899-ad0a-22e9818da7d6") : failed to sync secret cache: timed out waiting for the condition Nov 23 06:46:39 crc kubenswrapper[4681]: E1123 06:46:39.637722 4681 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Nov 23 06:46:39 crc kubenswrapper[4681]: E1123 06:46:39.637757 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/25f215c0-701b-4a75-9c19-6deeab862309-signing-cabundle podName:25f215c0-701b-4a75-9c19-6deeab862309 nodeName:}" failed. No retries permitted until 2025-11-23 06:46:40.137750044 +0000 UTC m=+137.207259282 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/25f215c0-701b-4a75-9c19-6deeab862309-signing-cabundle") pod "service-ca-9c57cc56f-lqxzb" (UID: "25f215c0-701b-4a75-9c19-6deeab862309") : failed to sync configmap cache: timed out waiting for the condition Nov 23 06:46:39 crc kubenswrapper[4681]: E1123 06:46:39.640027 4681 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Nov 23 06:46:39 crc kubenswrapper[4681]: E1123 06:46:39.640090 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dae5706a-d59e-40ba-9546-7bed3f4f77aa-marketplace-trusted-ca podName:dae5706a-d59e-40ba-9546-7bed3f4f77aa nodeName:}" failed. No retries permitted until 2025-11-23 06:46:40.140081733 +0000 UTC m=+137.209590971 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/dae5706a-d59e-40ba-9546-7bed3f4f77aa-marketplace-trusted-ca") pod "marketplace-operator-79b997595-g5zj2" (UID: "dae5706a-d59e-40ba-9546-7bed3f4f77aa") : failed to sync configmap cache: timed out waiting for the condition Nov 23 06:46:39 crc kubenswrapper[4681]: E1123 06:46:39.641447 4681 secret.go:188] Couldn't get secret openshift-ingress/router-metrics-certs-default: failed to sync secret cache: timed out waiting for the condition Nov 23 06:46:39 crc kubenswrapper[4681]: E1123 06:46:39.641494 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c6f4ba4-aae8-4308-be38-b74b07116955-metrics-certs podName:9c6f4ba4-aae8-4308-be38-b74b07116955 nodeName:}" failed. No retries permitted until 2025-11-23 06:46:40.141487351 +0000 UTC m=+137.210996588 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9c6f4ba4-aae8-4308-be38-b74b07116955-metrics-certs") pod "router-default-5444994796-b7ms9" (UID: "9c6f4ba4-aae8-4308-be38-b74b07116955") : failed to sync secret cache: timed out waiting for the condition Nov 23 06:46:39 crc kubenswrapper[4681]: E1123 06:46:39.643047 4681 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Nov 23 06:46:39 crc kubenswrapper[4681]: E1123 06:46:39.643083 4681 secret.go:188] Couldn't get secret openshift-ingress/router-stats-default: failed to sync secret cache: timed out waiting for the condition Nov 23 06:46:39 crc kubenswrapper[4681]: E1123 06:46:39.643091 4681 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Nov 23 06:46:39 crc kubenswrapper[4681]: E1123 06:46:39.643105 4681 secret.go:188] Couldn't get secret openshift-ingress/router-certs-default: failed to sync secret cache: timed out waiting for the condition Nov 23 06:46:39 crc kubenswrapper[4681]: E1123 06:46:39.643085 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dae5706a-d59e-40ba-9546-7bed3f4f77aa-marketplace-operator-metrics podName:dae5706a-d59e-40ba-9546-7bed3f4f77aa nodeName:}" failed. No retries permitted until 2025-11-23 06:46:40.143074654 +0000 UTC m=+137.212583891 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/dae5706a-d59e-40ba-9546-7bed3f4f77aa-marketplace-operator-metrics") pod "marketplace-operator-79b997595-g5zj2" (UID: "dae5706a-d59e-40ba-9546-7bed3f4f77aa") : failed to sync secret cache: timed out waiting for the condition Nov 23 06:46:39 crc kubenswrapper[4681]: E1123 06:46:39.643186 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/882fc762-16ff-41a8-917d-e6b327a4adb5-config-volume podName:882fc762-16ff-41a8-917d-e6b327a4adb5 nodeName:}" failed. No retries permitted until 2025-11-23 06:46:40.143148884 +0000 UTC m=+137.212658121 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/882fc762-16ff-41a8-917d-e6b327a4adb5-config-volume") pod "collect-profiles-29398005-5x47l" (UID: "882fc762-16ff-41a8-917d-e6b327a4adb5") : failed to sync configmap cache: timed out waiting for the condition Nov 23 06:46:39 crc kubenswrapper[4681]: E1123 06:46:39.643212 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c6f4ba4-aae8-4308-be38-b74b07116955-stats-auth podName:9c6f4ba4-aae8-4308-be38-b74b07116955 nodeName:}" failed. No retries permitted until 2025-11-23 06:46:40.143198608 +0000 UTC m=+137.212707846 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "stats-auth" (UniqueName: "kubernetes.io/secret/9c6f4ba4-aae8-4308-be38-b74b07116955-stats-auth") pod "router-default-5444994796-b7ms9" (UID: "9c6f4ba4-aae8-4308-be38-b74b07116955") : failed to sync secret cache: timed out waiting for the condition Nov 23 06:46:39 crc kubenswrapper[4681]: E1123 06:46:39.643253 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c6f4ba4-aae8-4308-be38-b74b07116955-default-certificate podName:9c6f4ba4-aae8-4308-be38-b74b07116955 nodeName:}" failed. 
No retries permitted until 2025-11-23 06:46:40.143246077 +0000 UTC m=+137.212755314 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-certificate" (UniqueName: "kubernetes.io/secret/9c6f4ba4-aae8-4308-be38-b74b07116955-default-certificate") pod "router-default-5444994796-b7ms9" (UID: "9c6f4ba4-aae8-4308-be38-b74b07116955") : failed to sync secret cache: timed out waiting for the condition Nov 23 06:46:39 crc kubenswrapper[4681]: E1123 06:46:39.645093 4681 configmap.go:193] Couldn't get configMap openshift-ingress/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Nov 23 06:46:39 crc kubenswrapper[4681]: E1123 06:46:39.645130 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c6f4ba4-aae8-4308-be38-b74b07116955-service-ca-bundle podName:9c6f4ba4-aae8-4308-be38-b74b07116955 nodeName:}" failed. No retries permitted until 2025-11-23 06:46:40.145122155 +0000 UTC m=+137.214631392 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/9c6f4ba4-aae8-4308-be38-b74b07116955-service-ca-bundle") pod "router-default-5444994796-b7ms9" (UID: "9c6f4ba4-aae8-4308-be38-b74b07116955") : failed to sync configmap cache: timed out waiting for the condition Nov 23 06:46:39 crc kubenswrapper[4681]: E1123 06:46:39.645521 4681 secret.go:188] Couldn't get secret openshift-image-registry/image-registry-operator-tls: failed to sync secret cache: timed out waiting for the condition Nov 23 06:46:39 crc kubenswrapper[4681]: E1123 06:46:39.645589 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/373b7163-d058-419c-b4c5-b76a80f78dfa-image-registry-operator-tls podName:373b7163-d058-419c-b4c5-b76a80f78dfa nodeName:}" failed. No retries permitted until 2025-11-23 06:46:40.145581384 +0000 UTC m=+137.215090621 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/373b7163-d058-419c-b4c5-b76a80f78dfa-image-registry-operator-tls") pod "cluster-image-registry-operator-dc59b4c8b-bhz6x" (UID: "373b7163-d058-419c-b4c5-b76a80f78dfa") : failed to sync secret cache: timed out waiting for the condition Nov 23 06:46:39 crc kubenswrapper[4681]: E1123 06:46:39.646834 4681 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Nov 23 06:46:39 crc kubenswrapper[4681]: E1123 06:46:39.646882 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/25f215c0-701b-4a75-9c19-6deeab862309-signing-key podName:25f215c0-701b-4a75-9c19-6deeab862309 nodeName:}" failed. No retries permitted until 2025-11-23 06:46:40.146873277 +0000 UTC m=+137.216382514 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/25f215c0-701b-4a75-9c19-6deeab862309-signing-key") pod "service-ca-9c57cc56f-lqxzb" (UID: "25f215c0-701b-4a75-9c19-6deeab862309") : failed to sync secret cache: timed out waiting for the condition Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.654949 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.675018 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.695870 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.714970 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.734742 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.774769 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.795820 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.814744 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.835198 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.855249 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.875721 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.899665 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.915645 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.935727 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.955521 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.975638 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 23 06:46:39 crc kubenswrapper[4681]: I1123 06:46:39.995216 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.015263 4681 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.035395 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.055436 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.075431 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.095510 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.115530 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.135694 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.151858 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25f215c0-701b-4a75-9c19-6deeab862309-signing-cabundle\") pod \"service-ca-9c57cc56f-lqxzb\" (UID: \"25f215c0-701b-4a75-9c19-6deeab862309\") " pod="openshift-service-ca/service-ca-9c57cc56f-lqxzb" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.151901 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/373b7163-d058-419c-b4c5-b76a80f78dfa-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-bhz6x\" (UID: \"373b7163-d058-419c-b4c5-b76a80f78dfa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhz6x" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.151920 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c6f4ba4-aae8-4308-be38-b74b07116955-service-ca-bundle\") pod \"router-default-5444994796-b7ms9\" (UID: \"9c6f4ba4-aae8-4308-be38-b74b07116955\") " pod="openshift-ingress/router-default-5444994796-b7ms9" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.151959 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dae5706a-d59e-40ba-9546-7bed3f4f77aa-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-g5zj2\" (UID: \"dae5706a-d59e-40ba-9546-7bed3f4f77aa\") " pod="openshift-marketplace/marketplace-operator-79b997595-g5zj2" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.151994 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9c6f4ba4-aae8-4308-be38-b74b07116955-default-certificate\") pod \"router-default-5444994796-b7ms9\" (UID: \"9c6f4ba4-aae8-4308-be38-b74b07116955\") " pod="openshift-ingress/router-default-5444994796-b7ms9" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.152008 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/882fc762-16ff-41a8-917d-e6b327a4adb5-config-volume\") pod \"collect-profiles-29398005-5x47l\" (UID: \"882fc762-16ff-41a8-917d-e6b327a4adb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-5x47l" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.152046 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9c6f4ba4-aae8-4308-be38-b74b07116955-stats-auth\") pod \"router-default-5444994796-b7ms9\" (UID: \"9c6f4ba4-aae8-4308-be38-b74b07116955\") " pod="openshift-ingress/router-default-5444994796-b7ms9" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.152072 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dae5706a-d59e-40ba-9546-7bed3f4f77aa-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-g5zj2\" (UID: \"dae5706a-d59e-40ba-9546-7bed3f4f77aa\") " pod="openshift-marketplace/marketplace-operator-79b997595-g5zj2" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.152130 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c6f4ba4-aae8-4308-be38-b74b07116955-metrics-certs\") pod \"router-default-5444994796-b7ms9\" (UID: \"9c6f4ba4-aae8-4308-be38-b74b07116955\") " pod="openshift-ingress/router-default-5444994796-b7ms9" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.152151 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25f215c0-701b-4a75-9c19-6deeab862309-signing-key\") pod \"service-ca-9c57cc56f-lqxzb\" (UID: \"25f215c0-701b-4a75-9c19-6deeab862309\") " pod="openshift-service-ca/service-ca-9c57cc56f-lqxzb" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.152185 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d6df87c-65e5-4899-ad0a-22e9818da7d6-cert\") pod \"ingress-canary-hckp7\" (UID: \"3d6df87c-65e5-4899-ad0a-22e9818da7d6\") " pod="openshift-ingress-canary/ingress-canary-hckp7" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.152655 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25f215c0-701b-4a75-9c19-6deeab862309-signing-cabundle\") pod \"service-ca-9c57cc56f-lqxzb\" (UID: \"25f215c0-701b-4a75-9c19-6deeab862309\") " pod="openshift-service-ca/service-ca-9c57cc56f-lqxzb" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.152752 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c6f4ba4-aae8-4308-be38-b74b07116955-service-ca-bundle\") pod \"router-default-5444994796-b7ms9\" (UID: \"9c6f4ba4-aae8-4308-be38-b74b07116955\") " pod="openshift-ingress/router-default-5444994796-b7ms9" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.152946 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dae5706a-d59e-40ba-9546-7bed3f4f77aa-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-g5zj2\" (UID: \"dae5706a-d59e-40ba-9546-7bed3f4f77aa\") " pod="openshift-marketplace/marketplace-operator-79b997595-g5zj2" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.153077 4681 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/882fc762-16ff-41a8-917d-e6b327a4adb5-config-volume\") pod \"collect-profiles-29398005-5x47l\" (UID: \"882fc762-16ff-41a8-917d-e6b327a4adb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-5x47l" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.154487 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/373b7163-d058-419c-b4c5-b76a80f78dfa-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-bhz6x\" (UID: \"373b7163-d058-419c-b4c5-b76a80f78dfa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhz6x" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.154832 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dae5706a-d59e-40ba-9546-7bed3f4f77aa-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-g5zj2\" (UID: \"dae5706a-d59e-40ba-9546-7bed3f4f77aa\") " pod="openshift-marketplace/marketplace-operator-79b997595-g5zj2" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.154983 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c6f4ba4-aae8-4308-be38-b74b07116955-metrics-certs\") pod \"router-default-5444994796-b7ms9\" (UID: \"9c6f4ba4-aae8-4308-be38-b74b07116955\") " pod="openshift-ingress/router-default-5444994796-b7ms9" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.155116 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.155397 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9c6f4ba4-aae8-4308-be38-b74b07116955-stats-auth\") pod \"router-default-5444994796-b7ms9\" (UID: \"9c6f4ba4-aae8-4308-be38-b74b07116955\") " pod="openshift-ingress/router-default-5444994796-b7ms9" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.155552 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25f215c0-701b-4a75-9c19-6deeab862309-signing-key\") pod \"service-ca-9c57cc56f-lqxzb\" (UID: \"25f215c0-701b-4a75-9c19-6deeab862309\") " pod="openshift-service-ca/service-ca-9c57cc56f-lqxzb" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.156656 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9c6f4ba4-aae8-4308-be38-b74b07116955-default-certificate\") pod \"router-default-5444994796-b7ms9\" (UID: \"9c6f4ba4-aae8-4308-be38-b74b07116955\") " pod="openshift-ingress/router-default-5444994796-b7ms9" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.176185 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.195871 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.204807 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d6df87c-65e5-4899-ad0a-22e9818da7d6-cert\") pod 
\"ingress-canary-hckp7\" (UID: \"3d6df87c-65e5-4899-ad0a-22e9818da7d6\") " pod="openshift-ingress-canary/ingress-canary-hckp7" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.215569 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.251527 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.251579 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.251527 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.254965 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.275237 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.295165 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.315166 4681 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.335131 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.355160 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.374720 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.396101 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.415191 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.446184 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vgjb\" (UniqueName: \"kubernetes.io/projected/28a78e7d-ae79-4791-aa1f-6398f611c561-kube-api-access-2vgjb\") pod \"openshift-config-operator-7777fb866f-42z7r\" (UID: \"28a78e7d-ae79-4791-aa1f-6398f611c561\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-42z7r" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.465299 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk2h8\" (UniqueName: \"kubernetes.io/projected/a8450c87-7b9b-47cf-86ce-145ef517f494-kube-api-access-dk2h8\") pod \"route-controller-manager-6576b87f9c-9qp5r\" (UID: \"a8450c87-7b9b-47cf-86ce-145ef517f494\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9qp5r" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.474031 4681 
request.go:700] Waited for 1.838539437s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/serviceaccounts/kube-controller-manager-operator/token Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.484825 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d2c3c50b-3800-4f8f-9b24-3063381cfd5e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gtltp\" (UID: \"d2c3c50b-3800-4f8f-9b24-3063381cfd5e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gtltp" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.493950 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9qp5r" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.505403 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st576\" (UniqueName: \"kubernetes.io/projected/cd4e2b49-bdc7-425a-877f-74938cd8a472-kube-api-access-st576\") pod \"openshift-apiserver-operator-796bbdcf4f-mj9j9\" (UID: \"cd4e2b49-bdc7-425a-877f-74938cd8a472\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mj9j9" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.526567 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgzc9\" (UniqueName: \"kubernetes.io/projected/e5135d02-57f8-48f3-96d3-af0fb70e8ac3-kube-api-access-zgzc9\") pod \"downloads-7954f5f757-qkccb\" (UID: \"e5135d02-57f8-48f3-96d3-af0fb70e8ac3\") " pod="openshift-console/downloads-7954f5f757-qkccb" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.547725 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2lqn\" (UniqueName: \"kubernetes.io/projected/a73787c8-407a-4e02-8c50-7205b96c76b8-kube-api-access-k2lqn\") pod \"machine-api-operator-5694c8668f-cxwjl\" (UID: \"a73787c8-407a-4e02-8c50-7205b96c76b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cxwjl" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.572652 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpj5j\" (UniqueName: \"kubernetes.io/projected/76f32e91-6759-4608-9f24-88ed1d5d769e-kube-api-access-zpj5j\") pod \"machine-approver-56656f9798-sqg25\" (UID: \"76f32e91-6759-4608-9f24-88ed1d5d769e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sqg25" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.574500 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sqg25" Nov 23 06:46:40 crc kubenswrapper[4681]: W1123 06:46:40.586275 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76f32e91_6759_4608_9f24_88ed1d5d769e.slice/crio-b3625b1b29fe638fdd20385b82765ae7d145c26be5f629a15f95f42f7ed64566 WatchSource:0}: Error finding container b3625b1b29fe638fdd20385b82765ae7d145c26be5f629a15f95f42f7ed64566: Status 404 returned error can't find the container with id b3625b1b29fe638fdd20385b82765ae7d145c26be5f629a15f95f42f7ed64566 Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.588437 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j4lm\" (UniqueName: \"kubernetes.io/projected/a57b9495-9a8d-4ec8-8a4d-92220d911386-kube-api-access-7j4lm\") pod \"controller-manager-879f6c89f-nk54m\" (UID: \"a57b9495-9a8d-4ec8-8a4d-92220d911386\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nk54m" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.604126 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9qp5r"] Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.607719 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbrfr\" (UniqueName: \"kubernetes.io/projected/25f215c0-701b-4a75-9c19-6deeab862309-kube-api-access-rbrfr\") pod \"service-ca-9c57cc56f-lqxzb\" (UID: \"25f215c0-701b-4a75-9c19-6deeab862309\") " pod="openshift-service-ca/service-ca-9c57cc56f-lqxzb" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.609271 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-qkccb" Nov 23 06:46:40 crc kubenswrapper[4681]: W1123 06:46:40.609527 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8450c87_7b9b_47cf_86ce_145ef517f494.slice/crio-378948b61b664181e0b15d3b5c5a9ccc73f160ec5764246278e6c054e58757e4 WatchSource:0}: Error finding container 378948b61b664181e0b15d3b5c5a9ccc73f160ec5764246278e6c054e58757e4: Status 404 returned error can't find the container with id 378948b61b664181e0b15d3b5c5a9ccc73f160ec5764246278e6c054e58757e4 Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.618562 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-42z7r" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.625975 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nk54m" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.626737 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7526x\" (UniqueName: \"kubernetes.io/projected/882fc762-16ff-41a8-917d-e6b327a4adb5-kube-api-access-7526x\") pod \"collect-profiles-29398005-5x47l\" (UID: \"882fc762-16ff-41a8-917d-e6b327a4adb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-5x47l" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.636126 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-cxwjl" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.648622 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z2q2\" (UniqueName: \"kubernetes.io/projected/1946d763-61f9-468c-84d1-15f635ae5aa8-kube-api-access-5z2q2\") pod \"openshift-controller-manager-operator-756b6f6bc6-72qnq\" (UID: \"1946d763-61f9-468c-84d1-15f635ae5aa8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72qnq" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.664246 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mj9j9" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.671296 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njgfc\" (UniqueName: \"kubernetes.io/projected/7d2b9e38-a7cf-43bb-aa89-861571046aee-kube-api-access-njgfc\") pod \"authentication-operator-69f744f599-pmxqk\" (UID: \"7d2b9e38-a7cf-43bb-aa89-861571046aee\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pmxqk" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.685777 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gtltp" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.690908 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfb8w\" (UniqueName: \"kubernetes.io/projected/72c9ca30-e13b-48dd-9c5d-05e6dd4a3368-kube-api-access-zfb8w\") pod \"apiserver-7bbb656c7d-rxxxv\" (UID: \"72c9ca30-e13b-48dd-9c5d-05e6dd4a3368\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rxxxv" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.716351 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltmt9\" (UniqueName: \"kubernetes.io/projected/74c2583d-61ac-4c6e-8cb5-11427314ecad-kube-api-access-ltmt9\") pod \"dns-operator-744455d44c-8mv9d\" (UID: \"74c2583d-61ac-4c6e-8cb5-11427314ecad\") " pod="openshift-dns-operator/dns-operator-744455d44c-8mv9d" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.732141 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znmpd\" (UniqueName: \"kubernetes.io/projected/9c6f4ba4-aae8-4308-be38-b74b07116955-kube-api-access-znmpd\") pod \"router-default-5444994796-b7ms9\" (UID: \"9c6f4ba4-aae8-4308-be38-b74b07116955\") " pod="openshift-ingress/router-default-5444994796-b7ms9" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.736663 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9qp5r" event={"ID":"a8450c87-7b9b-47cf-86ce-145ef517f494","Type":"ContainerStarted","Data":"378948b61b664181e0b15d3b5c5a9ccc73f160ec5764246278e6c054e58757e4"} Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.739016 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sqg25" event={"ID":"76f32e91-6759-4608-9f24-88ed1d5d769e","Type":"ContainerStarted","Data":"b3625b1b29fe638fdd20385b82765ae7d145c26be5f629a15f95f42f7ed64566"} Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.756434 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7dd2\" 
(UniqueName: \"kubernetes.io/projected/edddb554-81cd-4f1f-ad25-21dc5d5a2c35-kube-api-access-l7dd2\") pod \"migrator-59844c95c7-ffckq\" (UID: \"edddb554-81cd-4f1f-ad25-21dc5d5a2c35\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ffckq" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.768875 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wznlb\" (UniqueName: \"kubernetes.io/projected/1ed3437e-7360-4cc6-a4d5-b54d2f761945-kube-api-access-wznlb\") pod \"machine-config-operator-74547568cd-gk8jd\" (UID: \"1ed3437e-7360-4cc6-a4d5-b54d2f761945\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gk8jd" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.778776 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-qkccb"] Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.781375 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-8mv9d" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.789740 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ffckq" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.795058 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-lqxzb" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.798993 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx44d\" (UniqueName: \"kubernetes.io/projected/c09a8f6b-1519-4cc8-a1e5-ef0261619f3e-kube-api-access-rx44d\") pod \"catalog-operator-68c6474976-hkqhz\" (UID: \"c09a8f6b-1519-4cc8-a1e5-ef0261619f3e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hkqhz" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.813425 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmnt9\" (UniqueName: \"kubernetes.io/projected/c0e3f5d0-037c-48b9-888f-375c10e5f269-kube-api-access-hmnt9\") pod \"console-f9d7485db-59rqt\" (UID: \"c0e3f5d0-037c-48b9-888f-375c10e5f269\") " pod="openshift-console/console-f9d7485db-59rqt" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.822539 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-b7ms9" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.827931 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rxxxv" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.839232 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-5x47l" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.839934 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/373b7163-d058-419c-b4c5-b76a80f78dfa-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-bhz6x\" (UID: \"373b7163-d058-419c-b4c5-b76a80f78dfa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhz6x" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.840809 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72qnq" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.854373 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-42z7r"] Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.860376 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-59rqt" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.861429 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x67vk\" (UniqueName: \"kubernetes.io/projected/25865701-6601-400a-8cca-606a3cabcc5d-kube-api-access-x67vk\") pod \"machine-config-controller-84d6567774-bfkn6\" (UID: \"25865701-6601-400a-8cca-606a3cabcc5d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bfkn6" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.879870 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzxtt\" (UniqueName: \"kubernetes.io/projected/dae5706a-d59e-40ba-9546-7bed3f4f77aa-kube-api-access-tzxtt\") pod \"marketplace-operator-79b997595-g5zj2\" (UID: \"dae5706a-d59e-40ba-9546-7bed3f4f77aa\") " pod="openshift-marketplace/marketplace-operator-79b997595-g5zj2" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.891957 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2fx7\" (UniqueName: \"kubernetes.io/projected/f396efd2-0a8e-44bb-98c8-ad10c3383cef-kube-api-access-s2fx7\") pod \"apiserver-76f77b778f-d7f7c\" (UID: \"f396efd2-0a8e-44bb-98c8-ad10c3383cef\") " pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.909908 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvx9f\" (UniqueName: \"kubernetes.io/projected/373b7163-d058-419c-b4c5-b76a80f78dfa-kube-api-access-zvx9f\") pod \"cluster-image-registry-operator-dc59b4c8b-bhz6x\" (UID: \"373b7163-d058-419c-b4c5-b76a80f78dfa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhz6x" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.926305 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nk54m"] Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.931013 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrshg\" (UniqueName: \"kubernetes.io/projected/2fcb132e-fadc-4c84-a103-2e821e006bfa-kube-api-access-mrshg\") pod \"cluster-samples-operator-665b6dd947-gmtff\" (UID: \"2fcb132e-fadc-4c84-a103-2e821e006bfa\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gmtff" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.946094 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pmxqk" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.950175 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f249707f-34f7-4964-9cd9-9c83df2f3056-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-b2dpx\" (UID: \"f249707f-34f7-4964-9cd9-9c83df2f3056\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b2dpx" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.967405 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gtltp"] Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.970007 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bfkn6" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.978159 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrlxf\" (UniqueName: \"kubernetes.io/projected/3d6df87c-65e5-4899-ad0a-22e9818da7d6-kube-api-access-rrlxf\") pod \"ingress-canary-hckp7\" (UID: \"3d6df87c-65e5-4899-ad0a-22e9818da7d6\") " pod="openshift-ingress-canary/ingress-canary-hckp7" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.987906 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mj9j9"] Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.990415 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx7wz\" (UniqueName: \"kubernetes.io/projected/862e3345-8b2c-4009-b50c-0fd6025ac9dc-kube-api-access-xx7wz\") pod \"console-operator-58897d9998-nth4c\" (UID: \"862e3345-8b2c-4009-b50c-0fd6025ac9dc\") " pod="openshift-console-operator/console-operator-58897d9998-nth4c" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.994114 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hkqhz" Nov 23 06:46:40 crc kubenswrapper[4681]: I1123 06:46:40.995916 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.013375 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gk8jd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.017218 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 23 06:46:41 crc kubenswrapper[4681]: W1123 06:46:41.046732 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd4e2b49_bdc7_425a_877f_74938cd8a472.slice/crio-ca5be2a7815e45626a3dbf8e2b67e5bde64d5197c867d6ebe79ea698784a3bc5 WatchSource:0}: Error finding container ca5be2a7815e45626a3dbf8e2b67e5bde64d5197c867d6ebe79ea698784a3bc5: Status 404 returned error can't find the container with id ca5be2a7815e45626a3dbf8e2b67e5bde64d5197c867d6ebe79ea698784a3bc5 Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.063452 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/649f5b3b-9d0f-4c11-b4d3-5fcc9761f68a-config\") pod \"kube-apiserver-operator-766d6c64bb-dl2f8\" (UID: \"649f5b3b-9d0f-4c11-b4d3-5fcc9761f68a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dl2f8" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.063725 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:41 crc kubenswrapper[4681]: E1123 06:46:41.064086 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:41.564076379 +0000 UTC m=+138.633585606 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.064259 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/420fa719-fac4-4ed4-ab06-f72adbdcf568-etcd-client\") pod \"etcd-operator-b45778765-j7swg\" (UID: \"420fa719-fac4-4ed4-ab06-f72adbdcf568\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7swg" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.064284 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74g84\" (UniqueName: \"kubernetes.io/projected/77f5ceda-2966-443e-a939-dd7408e66bdc-kube-api-access-74g84\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.064303 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f4ea567-ba40-47b7-970f-fbcd8b9e44b6-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fdgfd\" (UID: \"6f4ea567-ba40-47b7-970f-fbcd8b9e44b6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fdgfd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.064355 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86d56\" (UniqueName: \"kubernetes.io/projected/4aaef837-ec38-4e22-a3e8-a2e1b4ee71c6-kube-api-access-86d56\") pod \"control-plane-machine-set-operator-78cbb6b69f-c26v4\" (UID: \"4aaef837-ec38-4e22-a3e8-a2e1b4ee71c6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c26v4" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.064376 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/77f5ceda-2966-443e-a939-dd7408e66bdc-bound-sa-token\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.064401 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/77f5ceda-2966-443e-a939-dd7408e66bdc-installation-pull-secrets\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.064423 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/649f5b3b-9d0f-4c11-b4d3-5fcc9761f68a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dl2f8\" (UID: \"649f5b3b-9d0f-4c11-b4d3-5fcc9761f68a\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dl2f8" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.064447 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4e4885ac-8d00-41be-9ccb-34386e8be5f9-metrics-tls\") pod \"ingress-operator-5b745b69d9-5kgmj\" (UID: \"4e4885ac-8d00-41be-9ccb-34386e8be5f9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5kgmj" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.064483 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6c8t\" (UniqueName: \"kubernetes.io/projected/ee2298af-3eaf-4b52-9783-e7887fe452f4-kube-api-access-l6c8t\") pod \"olm-operator-6b444d44fb-hsxts\" (UID: \"ee2298af-3eaf-4b52-9783-e7887fe452f4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsxts" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.064497 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4aaef837-ec38-4e22-a3e8-a2e1b4ee71c6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-c26v4\" (UID: \"4aaef837-ec38-4e22-a3e8-a2e1b4ee71c6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c26v4" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.064519 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn4hc\" (UniqueName: \"kubernetes.io/projected/4e4885ac-8d00-41be-9ccb-34386e8be5f9-kube-api-access-nn4hc\") pod \"ingress-operator-5b745b69d9-5kgmj\" (UID: \"4e4885ac-8d00-41be-9ccb-34386e8be5f9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5kgmj" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.064555 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4e4885ac-8d00-41be-9ccb-34386e8be5f9-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5kgmj\" (UID: \"4e4885ac-8d00-41be-9ccb-34386e8be5f9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5kgmj" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.064578 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/77f5ceda-2966-443e-a939-dd7408e66bdc-ca-trust-extracted\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.064590 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ee2298af-3eaf-4b52-9783-e7887fe452f4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hsxts\" (UID: \"ee2298af-3eaf-4b52-9783-e7887fe452f4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsxts" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.064603 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/420fa719-fac4-4ed4-ab06-f72adbdcf568-etcd-ca\") pod \"etcd-operator-b45778765-j7swg\" (UID: 
\"420fa719-fac4-4ed4-ab06-f72adbdcf568\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7swg" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.064625 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4e4885ac-8d00-41be-9ccb-34386e8be5f9-trusted-ca\") pod \"ingress-operator-5b745b69d9-5kgmj\" (UID: \"4e4885ac-8d00-41be-9ccb-34386e8be5f9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5kgmj" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.064640 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47nqt\" (UniqueName: \"kubernetes.io/projected/420fa719-fac4-4ed4-ab06-f72adbdcf568-kube-api-access-47nqt\") pod \"etcd-operator-b45778765-j7swg\" (UID: \"420fa719-fac4-4ed4-ab06-f72adbdcf568\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7swg" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.064680 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqj99\" (UniqueName: \"kubernetes.io/projected/6f4ea567-ba40-47b7-970f-fbcd8b9e44b6-kube-api-access-tqj99\") pod \"kube-storage-version-migrator-operator-b67b599dd-fdgfd\" (UID: \"6f4ea567-ba40-47b7-970f-fbcd8b9e44b6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fdgfd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.064701 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ee2298af-3eaf-4b52-9783-e7887fe452f4-srv-cert\") pod \"olm-operator-6b444d44fb-hsxts\" (UID: \"ee2298af-3eaf-4b52-9783-e7887fe452f4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsxts" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.064714 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/649f5b3b-9d0f-4c11-b4d3-5fcc9761f68a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-dl2f8\" (UID: \"649f5b3b-9d0f-4c11-b4d3-5fcc9761f68a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dl2f8" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.064729 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/420fa719-fac4-4ed4-ab06-f72adbdcf568-serving-cert\") pod \"etcd-operator-b45778765-j7swg\" (UID: \"420fa719-fac4-4ed4-ab06-f72adbdcf568\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7swg" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.064743 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/77f5ceda-2966-443e-a939-dd7408e66bdc-registry-tls\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.064765 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/77f5ceda-2966-443e-a939-dd7408e66bdc-registry-certificates\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: 
\"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.064802 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/420fa719-fac4-4ed4-ab06-f72adbdcf568-config\") pod \"etcd-operator-b45778765-j7swg\" (UID: \"420fa719-fac4-4ed4-ab06-f72adbdcf568\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7swg" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.064837 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f4ea567-ba40-47b7-970f-fbcd8b9e44b6-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fdgfd\" (UID: \"6f4ea567-ba40-47b7-970f-fbcd8b9e44b6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fdgfd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.064858 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/420fa719-fac4-4ed4-ab06-f72adbdcf568-etcd-service-ca\") pod \"etcd-operator-b45778765-j7swg\" (UID: \"420fa719-fac4-4ed4-ab06-f72adbdcf568\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7swg" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.064873 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/77f5ceda-2966-443e-a939-dd7408e66bdc-trusted-ca\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.077421 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.087894 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.095750 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.106360 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-g5zj2" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.106911 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gmtff" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.110543 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhz6x" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.115366 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-cxwjl"] Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.118308 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.133652 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lqxzb"] Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.135165 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.145164 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-hckp7" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.167635 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:41 crc kubenswrapper[4681]: E1123 06:46:41.167746 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:41.667730498 +0000 UTC m=+138.737239735 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.167901 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01287236-92c0-4946-918f-bd641d4d5435-audit-policies\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.167955 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4e4885ac-8d00-41be-9ccb-34386e8be5f9-metrics-tls\") pod \"ingress-operator-5b745b69d9-5kgmj\" (UID: \"4e4885ac-8d00-41be-9ccb-34386e8be5f9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5kgmj" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.167979 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bj7t\" (UniqueName: \"kubernetes.io/projected/47385347-ea0a-46ba-9c22-878470316668-kube-api-access-8bj7t\") pod \"service-ca-operator-777779d784-7jdfn\" (UID: \"47385347-ea0a-46ba-9c22-878470316668\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7jdfn" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.167993 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6c8t\" (UniqueName: \"kubernetes.io/projected/ee2298af-3eaf-4b52-9783-e7887fe452f4-kube-api-access-l6c8t\") pod \"olm-operator-6b444d44fb-hsxts\" (UID: \"ee2298af-3eaf-4b52-9783-e7887fe452f4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsxts" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168010 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4aaef837-ec38-4e22-a3e8-a2e1b4ee71c6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-c26v4\" (UID: \"4aaef837-ec38-4e22-a3e8-a2e1b4ee71c6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c26v4" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168040 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn4hc\" (UniqueName: \"kubernetes.io/projected/4e4885ac-8d00-41be-9ccb-34386e8be5f9-kube-api-access-nn4hc\") pod \"ingress-operator-5b745b69d9-5kgmj\" (UID: \"4e4885ac-8d00-41be-9ccb-34386e8be5f9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5kgmj" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168096 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168113 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168139 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4e4885ac-8d00-41be-9ccb-34386e8be5f9-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5kgmj\" (UID: \"4e4885ac-8d00-41be-9ccb-34386e8be5f9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5kgmj" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168155 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/77f5ceda-2966-443e-a939-dd7408e66bdc-ca-trust-extracted\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168168 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ee2298af-3eaf-4b52-9783-e7887fe452f4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hsxts\" (UID: \"ee2298af-3eaf-4b52-9783-e7887fe452f4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsxts" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168191 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/420fa719-fac4-4ed4-ab06-f72adbdcf568-etcd-ca\") pod \"etcd-operator-b45778765-j7swg\" (UID: \"420fa719-fac4-4ed4-ab06-f72adbdcf568\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7swg" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168204 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01287236-92c0-4946-918f-bd641d4d5435-audit-dir\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168232 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4e4885ac-8d00-41be-9ccb-34386e8be5f9-trusted-ca\") pod \"ingress-operator-5b745b69d9-5kgmj\" (UID: \"4e4885ac-8d00-41be-9ccb-34386e8be5f9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5kgmj" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168248 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e151a32d-c873-40de-8d35-0fa38739718e-tmpfs\") pod \"packageserver-d55dfcdfc-z5fk5\" (UID: \"e151a32d-c873-40de-8d35-0fa38739718e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z5fk5" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168278 4681 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-47nqt\" (UniqueName: \"kubernetes.io/projected/420fa719-fac4-4ed4-ab06-f72adbdcf568-kube-api-access-47nqt\") pod \"etcd-operator-b45778765-j7swg\" (UID: \"420fa719-fac4-4ed4-ab06-f72adbdcf568\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7swg" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168292 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168311 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e151a32d-c873-40de-8d35-0fa38739718e-apiservice-cert\") pod \"packageserver-d55dfcdfc-z5fk5\" (UID: \"e151a32d-c873-40de-8d35-0fa38739718e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z5fk5" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168326 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168343 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168356 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/00c8c8b9-3dab-4fde-8fa7-290140cfd81f-registration-dir\") pod \"csi-hostpathplugin-z76mp\" (UID: \"00c8c8b9-3dab-4fde-8fa7-290140cfd81f\") " pod="hostpath-provisioner/csi-hostpathplugin-z76mp" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168374 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168412 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqj99\" (UniqueName: \"kubernetes.io/projected/6f4ea567-ba40-47b7-970f-fbcd8b9e44b6-kube-api-access-tqj99\") pod \"kube-storage-version-migrator-operator-b67b599dd-fdgfd\" (UID: \"6f4ea567-ba40-47b7-970f-fbcd8b9e44b6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fdgfd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 
06:46:41.168433 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ee2298af-3eaf-4b52-9783-e7887fe452f4-srv-cert\") pod \"olm-operator-6b444d44fb-hsxts\" (UID: \"ee2298af-3eaf-4b52-9783-e7887fe452f4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsxts" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168447 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/420fa719-fac4-4ed4-ab06-f72adbdcf568-serving-cert\") pod \"etcd-operator-b45778765-j7swg\" (UID: \"420fa719-fac4-4ed4-ab06-f72adbdcf568\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7swg" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168479 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168496 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/649f5b3b-9d0f-4c11-b4d3-5fcc9761f68a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-dl2f8\" (UID: \"649f5b3b-9d0f-4c11-b4d3-5fcc9761f68a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dl2f8" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168516 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/77f5ceda-2966-443e-a939-dd7408e66bdc-registry-tls\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168530 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/00c8c8b9-3dab-4fde-8fa7-290140cfd81f-plugins-dir\") pod \"csi-hostpathplugin-z76mp\" (UID: \"00c8c8b9-3dab-4fde-8fa7-290140cfd81f\") " pod="hostpath-provisioner/csi-hostpathplugin-z76mp" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168581 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/77f5ceda-2966-443e-a939-dd7408e66bdc-registry-certificates\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168595 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qlqg\" (UniqueName: \"kubernetes.io/projected/b44f298a-45ed-4a54-b2f9-155e2fcf1f2a-kube-api-access-8qlqg\") pod \"machine-config-server-cdnsn\" (UID: \"b44f298a-45ed-4a54-b2f9-155e2fcf1f2a\") " pod="openshift-machine-config-operator/machine-config-server-cdnsn" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168638 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/420fa719-fac4-4ed4-ab06-f72adbdcf568-config\") pod \"etcd-operator-b45778765-j7swg\" (UID: \"420fa719-fac4-4ed4-ab06-f72adbdcf568\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7swg" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168651 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khwj7\" (UniqueName: \"kubernetes.io/projected/86289470-a077-471b-b98a-aa1f8eff9f84-kube-api-access-khwj7\") pod \"dns-default-qmhqk\" (UID: \"86289470-a077-471b-b98a-aa1f8eff9f84\") " pod="openshift-dns/dns-default-qmhqk" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168665 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168684 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f4ea567-ba40-47b7-970f-fbcd8b9e44b6-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fdgfd\" (UID: \"6f4ea567-ba40-47b7-970f-fbcd8b9e44b6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fdgfd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168713 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dfrq\" (UniqueName: \"kubernetes.io/projected/05ad0d6e-3a38-4afe-b144-2a3550c21799-kube-api-access-4dfrq\") pod \"multus-admission-controller-857f4d67dd-ljsqd\" (UID: \"05ad0d6e-3a38-4afe-b144-2a3550c21799\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ljsqd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168737 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/420fa719-fac4-4ed4-ab06-f72adbdcf568-etcd-service-ca\") pod \"etcd-operator-b45778765-j7swg\" (UID: \"420fa719-fac4-4ed4-ab06-f72adbdcf568\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7swg" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168751 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e151a32d-c873-40de-8d35-0fa38739718e-webhook-cert\") pod \"packageserver-d55dfcdfc-z5fk5\" (UID: \"e151a32d-c873-40de-8d35-0fa38739718e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z5fk5" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168766 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168781 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/47385347-ea0a-46ba-9c22-878470316668-config\") pod \"service-ca-operator-777779d784-7jdfn\" (UID: \"47385347-ea0a-46ba-9c22-878470316668\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7jdfn" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168794 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/00c8c8b9-3dab-4fde-8fa7-290140cfd81f-csi-data-dir\") pod \"csi-hostpathplugin-z76mp\" (UID: \"00c8c8b9-3dab-4fde-8fa7-290140cfd81f\") " pod="hostpath-provisioner/csi-hostpathplugin-z76mp" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168808 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/77f5ceda-2966-443e-a939-dd7408e66bdc-trusted-ca\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168821 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/86289470-a077-471b-b98a-aa1f8eff9f84-metrics-tls\") pod \"dns-default-qmhqk\" (UID: \"86289470-a077-471b-b98a-aa1f8eff9f84\") " pod="openshift-dns/dns-default-qmhqk" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168837 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/649f5b3b-9d0f-4c11-b4d3-5fcc9761f68a-config\") pod \"kube-apiserver-operator-766d6c64bb-dl2f8\" (UID: \"649f5b3b-9d0f-4c11-b4d3-5fcc9761f68a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dl2f8" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168855 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168870 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvg62\" (UniqueName: \"kubernetes.io/projected/01287236-92c0-4946-918f-bd641d4d5435-kube-api-access-kvg62\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168901 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/420fa719-fac4-4ed4-ab06-f72adbdcf568-etcd-client\") pod \"etcd-operator-b45778765-j7swg\" (UID: \"420fa719-fac4-4ed4-ab06-f72adbdcf568\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7swg" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168915 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47385347-ea0a-46ba-9c22-878470316668-serving-cert\") pod \"service-ca-operator-777779d784-7jdfn\" (UID: \"47385347-ea0a-46ba-9c22-878470316668\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-7jdfn" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168929 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74g84\" (UniqueName: \"kubernetes.io/projected/77f5ceda-2966-443e-a939-dd7408e66bdc-kube-api-access-74g84\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168944 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86289470-a077-471b-b98a-aa1f8eff9f84-config-volume\") pod \"dns-default-qmhqk\" (UID: \"86289470-a077-471b-b98a-aa1f8eff9f84\") " pod="openshift-dns/dns-default-qmhqk" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168960 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f4ea567-ba40-47b7-970f-fbcd8b9e44b6-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fdgfd\" (UID: \"6f4ea567-ba40-47b7-970f-fbcd8b9e44b6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fdgfd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168974 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk9nq\" (UniqueName: \"kubernetes.io/projected/4cf9844f-125e-40f1-a45c-784ea466a236-kube-api-access-gk9nq\") pod \"package-server-manager-789f6589d5-cn5t4\" (UID: \"4cf9844f-125e-40f1-a45c-784ea466a236\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cn5t4" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.168998 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/05ad0d6e-3a38-4afe-b144-2a3550c21799-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-ljsqd\" (UID: \"05ad0d6e-3a38-4afe-b144-2a3550c21799\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ljsqd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.169011 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/00c8c8b9-3dab-4fde-8fa7-290140cfd81f-socket-dir\") pod \"csi-hostpathplugin-z76mp\" (UID: \"00c8c8b9-3dab-4fde-8fa7-290140cfd81f\") " pod="hostpath-provisioner/csi-hostpathplugin-z76mp" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.169058 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv2cr\" (UniqueName: \"kubernetes.io/projected/00c8c8b9-3dab-4fde-8fa7-290140cfd81f-kube-api-access-nv2cr\") pod \"csi-hostpathplugin-z76mp\" (UID: \"00c8c8b9-3dab-4fde-8fa7-290140cfd81f\") " pod="hostpath-provisioner/csi-hostpathplugin-z76mp" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.169106 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9nq8\" (UniqueName: \"kubernetes.io/projected/e151a32d-c873-40de-8d35-0fa38739718e-kube-api-access-h9nq8\") pod \"packageserver-d55dfcdfc-z5fk5\" (UID: \"e151a32d-c873-40de-8d35-0fa38739718e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z5fk5" Nov 23 
06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.169123 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86d56\" (UniqueName: \"kubernetes.io/projected/4aaef837-ec38-4e22-a3e8-a2e1b4ee71c6-kube-api-access-86d56\") pod \"control-plane-machine-set-operator-78cbb6b69f-c26v4\" (UID: \"4aaef837-ec38-4e22-a3e8-a2e1b4ee71c6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c26v4" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.169147 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/77f5ceda-2966-443e-a939-dd7408e66bdc-bound-sa-token\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.169176 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b44f298a-45ed-4a54-b2f9-155e2fcf1f2a-node-bootstrap-token\") pod \"machine-config-server-cdnsn\" (UID: \"b44f298a-45ed-4a54-b2f9-155e2fcf1f2a\") " pod="openshift-machine-config-operator/machine-config-server-cdnsn" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.169189 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b44f298a-45ed-4a54-b2f9-155e2fcf1f2a-certs\") pod \"machine-config-server-cdnsn\" (UID: \"b44f298a-45ed-4a54-b2f9-155e2fcf1f2a\") " pod="openshift-machine-config-operator/machine-config-server-cdnsn" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.169221 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.169253 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/77f5ceda-2966-443e-a939-dd7408e66bdc-installation-pull-secrets\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.169268 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4cf9844f-125e-40f1-a45c-784ea466a236-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cn5t4\" (UID: \"4cf9844f-125e-40f1-a45c-784ea466a236\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cn5t4" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.169282 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/00c8c8b9-3dab-4fde-8fa7-290140cfd81f-mountpoint-dir\") pod \"csi-hostpathplugin-z76mp\" (UID: \"00c8c8b9-3dab-4fde-8fa7-290140cfd81f\") " pod="hostpath-provisioner/csi-hostpathplugin-z76mp" Nov 23 06:46:41 crc 
kubenswrapper[4681]: I1123 06:46:41.169304 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.169352 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/649f5b3b-9d0f-4c11-b4d3-5fcc9761f68a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dl2f8\" (UID: \"649f5b3b-9d0f-4c11-b4d3-5fcc9761f68a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dl2f8" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.172489 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4e4885ac-8d00-41be-9ccb-34386e8be5f9-trusted-ca\") pod \"ingress-operator-5b745b69d9-5kgmj\" (UID: \"4e4885ac-8d00-41be-9ccb-34386e8be5f9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5kgmj" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.176098 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f4ea567-ba40-47b7-970f-fbcd8b9e44b6-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fdgfd\" (UID: \"6f4ea567-ba40-47b7-970f-fbcd8b9e44b6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fdgfd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.179587 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/77f5ceda-2966-443e-a939-dd7408e66bdc-ca-trust-extracted\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.180927 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/420fa719-fac4-4ed4-ab06-f72adbdcf568-etcd-client\") pod \"etcd-operator-b45778765-j7swg\" (UID: \"420fa719-fac4-4ed4-ab06-f72adbdcf568\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7swg" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.183550 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/649f5b3b-9d0f-4c11-b4d3-5fcc9761f68a-config\") pod \"kube-apiserver-operator-766d6c64bb-dl2f8\" (UID: \"649f5b3b-9d0f-4c11-b4d3-5fcc9761f68a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dl2f8" Nov 23 06:46:41 crc kubenswrapper[4681]: E1123 06:46:41.183672 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:41.683661444 +0000 UTC m=+138.753170681 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.183736 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/77f5ceda-2966-443e-a939-dd7408e66bdc-trusted-ca\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.184775 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/77f5ceda-2966-443e-a939-dd7408e66bdc-registry-certificates\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.185911 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/420fa719-fac4-4ed4-ab06-f72adbdcf568-etcd-service-ca\") pod \"etcd-operator-b45778765-j7swg\" (UID: \"420fa719-fac4-4ed4-ab06-f72adbdcf568\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7swg" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.186026 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4aaef837-ec38-4e22-a3e8-a2e1b4ee71c6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-c26v4\" (UID: \"4aaef837-ec38-4e22-a3e8-a2e1b4ee71c6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c26v4" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.186287 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/420fa719-fac4-4ed4-ab06-f72adbdcf568-config\") pod \"etcd-operator-b45778765-j7swg\" (UID: \"420fa719-fac4-4ed4-ab06-f72adbdcf568\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7swg" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.186804 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/420fa719-fac4-4ed4-ab06-f72adbdcf568-etcd-ca\") pod \"etcd-operator-b45778765-j7swg\" (UID: \"420fa719-fac4-4ed4-ab06-f72adbdcf568\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7swg" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.187163 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ee2298af-3eaf-4b52-9783-e7887fe452f4-srv-cert\") pod \"olm-operator-6b444d44fb-hsxts\" (UID: \"ee2298af-3eaf-4b52-9783-e7887fe452f4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsxts" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.188176 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/77f5ceda-2966-443e-a939-dd7408e66bdc-installation-pull-secrets\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.190319 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/649f5b3b-9d0f-4c11-b4d3-5fcc9761f68a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dl2f8\" (UID: \"649f5b3b-9d0f-4c11-b4d3-5fcc9761f68a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dl2f8" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.199325 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/77f5ceda-2966-443e-a939-dd7408e66bdc-registry-tls\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.199522 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-nth4c" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.201076 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f4ea567-ba40-47b7-970f-fbcd8b9e44b6-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fdgfd\" (UID: \"6f4ea567-ba40-47b7-970f-fbcd8b9e44b6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fdgfd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.203422 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4e4885ac-8d00-41be-9ccb-34386e8be5f9-metrics-tls\") pod \"ingress-operator-5b745b69d9-5kgmj\" (UID: \"4e4885ac-8d00-41be-9ccb-34386e8be5f9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5kgmj" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.203597 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/420fa719-fac4-4ed4-ab06-f72adbdcf568-serving-cert\") pod \"etcd-operator-b45778765-j7swg\" (UID: \"420fa719-fac4-4ed4-ab06-f72adbdcf568\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7swg" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.206688 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ee2298af-3eaf-4b52-9783-e7887fe452f4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hsxts\" (UID: \"ee2298af-3eaf-4b52-9783-e7887fe452f4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsxts" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.211230 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqj99\" (UniqueName: \"kubernetes.io/projected/6f4ea567-ba40-47b7-970f-fbcd8b9e44b6-kube-api-access-tqj99\") pod \"kube-storage-version-migrator-operator-b67b599dd-fdgfd\" (UID: \"6f4ea567-ba40-47b7-970f-fbcd8b9e44b6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fdgfd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.233437 4681 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-l6c8t\" (UniqueName: \"kubernetes.io/projected/ee2298af-3eaf-4b52-9783-e7887fe452f4-kube-api-access-l6c8t\") pod \"olm-operator-6b444d44fb-hsxts\" (UID: \"ee2298af-3eaf-4b52-9783-e7887fe452f4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsxts" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.257487 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b2dpx" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.265092 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn4hc\" (UniqueName: \"kubernetes.io/projected/4e4885ac-8d00-41be-9ccb-34386e8be5f9-kube-api-access-nn4hc\") pod \"ingress-operator-5b745b69d9-5kgmj\" (UID: \"4e4885ac-8d00-41be-9ccb-34386e8be5f9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5kgmj" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.269988 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:41 crc kubenswrapper[4681]: E1123 06:46:41.270123 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:41.770104026 +0000 UTC m=+138.839613263 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.270204 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e151a32d-c873-40de-8d35-0fa38739718e-webhook-cert\") pod \"packageserver-d55dfcdfc-z5fk5\" (UID: \"e151a32d-c873-40de-8d35-0fa38739718e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z5fk5" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.270227 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.270245 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47385347-ea0a-46ba-9c22-878470316668-config\") pod \"service-ca-operator-777779d784-7jdfn\" (UID: \"47385347-ea0a-46ba-9c22-878470316668\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7jdfn" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.270278 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/00c8c8b9-3dab-4fde-8fa7-290140cfd81f-csi-data-dir\") pod \"csi-hostpathplugin-z76mp\" (UID: \"00c8c8b9-3dab-4fde-8fa7-290140cfd81f\") " pod="hostpath-provisioner/csi-hostpathplugin-z76mp" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.270293 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/86289470-a077-471b-b98a-aa1f8eff9f84-metrics-tls\") pod \"dns-default-qmhqk\" (UID: \"86289470-a077-471b-b98a-aa1f8eff9f84\") " pod="openshift-dns/dns-default-qmhqk" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.270615 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.270663 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvg62\" (UniqueName: \"kubernetes.io/projected/01287236-92c0-4946-918f-bd641d4d5435-kube-api-access-kvg62\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.270681 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47385347-ea0a-46ba-9c22-878470316668-serving-cert\") pod \"service-ca-operator-777779d784-7jdfn\" (UID: \"47385347-ea0a-46ba-9c22-878470316668\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7jdfn" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.270703 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk9nq\" (UniqueName: \"kubernetes.io/projected/4cf9844f-125e-40f1-a45c-784ea466a236-kube-api-access-gk9nq\") pod \"package-server-manager-789f6589d5-cn5t4\" (UID: \"4cf9844f-125e-40f1-a45c-784ea466a236\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cn5t4" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.271277 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86289470-a077-471b-b98a-aa1f8eff9f84-config-volume\") pod \"dns-default-qmhqk\" (UID: \"86289470-a077-471b-b98a-aa1f8eff9f84\") " pod="openshift-dns/dns-default-qmhqk" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.271314 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/05ad0d6e-3a38-4afe-b144-2a3550c21799-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-ljsqd\" (UID: \"05ad0d6e-3a38-4afe-b144-2a3550c21799\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ljsqd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.271357 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/00c8c8b9-3dab-4fde-8fa7-290140cfd81f-socket-dir\") pod \"csi-hostpathplugin-z76mp\" (UID: \"00c8c8b9-3dab-4fde-8fa7-290140cfd81f\") " 
pod="hostpath-provisioner/csi-hostpathplugin-z76mp" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.271378 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv2cr\" (UniqueName: \"kubernetes.io/projected/00c8c8b9-3dab-4fde-8fa7-290140cfd81f-kube-api-access-nv2cr\") pod \"csi-hostpathplugin-z76mp\" (UID: \"00c8c8b9-3dab-4fde-8fa7-290140cfd81f\") " pod="hostpath-provisioner/csi-hostpathplugin-z76mp" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.271422 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9nq8\" (UniqueName: \"kubernetes.io/projected/e151a32d-c873-40de-8d35-0fa38739718e-kube-api-access-h9nq8\") pod \"packageserver-d55dfcdfc-z5fk5\" (UID: \"e151a32d-c873-40de-8d35-0fa38739718e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z5fk5" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.271451 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b44f298a-45ed-4a54-b2f9-155e2fcf1f2a-node-bootstrap-token\") pod \"machine-config-server-cdnsn\" (UID: \"b44f298a-45ed-4a54-b2f9-155e2fcf1f2a\") " pod="openshift-machine-config-operator/machine-config-server-cdnsn" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.271531 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b44f298a-45ed-4a54-b2f9-155e2fcf1f2a-certs\") pod \"machine-config-server-cdnsn\" (UID: \"b44f298a-45ed-4a54-b2f9-155e2fcf1f2a\") " pod="openshift-machine-config-operator/machine-config-server-cdnsn" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.271570 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.271606 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4cf9844f-125e-40f1-a45c-784ea466a236-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cn5t4\" (UID: \"4cf9844f-125e-40f1-a45c-784ea466a236\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cn5t4" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.271620 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/00c8c8b9-3dab-4fde-8fa7-290140cfd81f-mountpoint-dir\") pod \"csi-hostpathplugin-z76mp\" (UID: \"00c8c8b9-3dab-4fde-8fa7-290140cfd81f\") " pod="hostpath-provisioner/csi-hostpathplugin-z76mp" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.271689 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.271709 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01287236-92c0-4946-918f-bd641d4d5435-audit-policies\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.271751 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bj7t\" (UniqueName: \"kubernetes.io/projected/47385347-ea0a-46ba-9c22-878470316668-kube-api-access-8bj7t\") pod \"service-ca-operator-777779d784-7jdfn\" (UID: \"47385347-ea0a-46ba-9c22-878470316668\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7jdfn" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.271781 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.271820 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.271844 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01287236-92c0-4946-918f-bd641d4d5435-audit-dir\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.271860 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e151a32d-c873-40de-8d35-0fa38739718e-tmpfs\") pod \"packageserver-d55dfcdfc-z5fk5\" (UID: \"e151a32d-c873-40de-8d35-0fa38739718e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z5fk5" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.271911 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.271927 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.271940 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/e151a32d-c873-40de-8d35-0fa38739718e-apiservice-cert\") pod \"packageserver-d55dfcdfc-z5fk5\" (UID: \"e151a32d-c873-40de-8d35-0fa38739718e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z5fk5" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.271975 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/00c8c8b9-3dab-4fde-8fa7-290140cfd81f-registration-dir\") pod \"csi-hostpathplugin-z76mp\" (UID: \"00c8c8b9-3dab-4fde-8fa7-290140cfd81f\") " pod="hostpath-provisioner/csi-hostpathplugin-z76mp" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.271992 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.272008 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.272063 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.272080 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/00c8c8b9-3dab-4fde-8fa7-290140cfd81f-plugins-dir\") pod \"csi-hostpathplugin-z76mp\" (UID: \"00c8c8b9-3dab-4fde-8fa7-290140cfd81f\") " pod="hostpath-provisioner/csi-hostpathplugin-z76mp" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.272098 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qlqg\" (UniqueName: \"kubernetes.io/projected/b44f298a-45ed-4a54-b2f9-155e2fcf1f2a-kube-api-access-8qlqg\") pod \"machine-config-server-cdnsn\" (UID: \"b44f298a-45ed-4a54-b2f9-155e2fcf1f2a\") " pod="openshift-machine-config-operator/machine-config-server-cdnsn" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.272115 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khwj7\" (UniqueName: \"kubernetes.io/projected/86289470-a077-471b-b98a-aa1f8eff9f84-kube-api-access-khwj7\") pod \"dns-default-qmhqk\" (UID: \"86289470-a077-471b-b98a-aa1f8eff9f84\") " pod="openshift-dns/dns-default-qmhqk" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.272149 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.272165 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dfrq\" (UniqueName: \"kubernetes.io/projected/05ad0d6e-3a38-4afe-b144-2a3550c21799-kube-api-access-4dfrq\") pod \"multus-admission-controller-857f4d67dd-ljsqd\" (UID: \"05ad0d6e-3a38-4afe-b144-2a3550c21799\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ljsqd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.274449 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.279702 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01287236-92c0-4946-918f-bd641d4d5435-audit-dir\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.281013 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e151a32d-c873-40de-8d35-0fa38739718e-tmpfs\") pod \"packageserver-d55dfcdfc-z5fk5\" (UID: \"e151a32d-c873-40de-8d35-0fa38739718e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z5fk5" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.281088 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/00c8c8b9-3dab-4fde-8fa7-290140cfd81f-mountpoint-dir\") pod \"csi-hostpathplugin-z76mp\" (UID: \"00c8c8b9-3dab-4fde-8fa7-290140cfd81f\") " pod="hostpath-provisioner/csi-hostpathplugin-z76mp" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.281489 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01287236-92c0-4946-918f-bd641d4d5435-audit-policies\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.281935 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e151a32d-c873-40de-8d35-0fa38739718e-webhook-cert\") pod \"packageserver-d55dfcdfc-z5fk5\" (UID: \"e151a32d-c873-40de-8d35-0fa38739718e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z5fk5" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.282152 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.282710 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/00c8c8b9-3dab-4fde-8fa7-290140cfd81f-plugins-dir\") pod \"csi-hostpathplugin-z76mp\" (UID: \"00c8c8b9-3dab-4fde-8fa7-290140cfd81f\") " pod="hostpath-provisioner/csi-hostpathplugin-z76mp" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.283287 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86289470-a077-471b-b98a-aa1f8eff9f84-config-volume\") pod \"dns-default-qmhqk\" (UID: \"86289470-a077-471b-b98a-aa1f8eff9f84\") " pod="openshift-dns/dns-default-qmhqk" Nov 23 06:46:41 crc kubenswrapper[4681]: E1123 06:46:41.283431 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:41.783312693 +0000 UTC m=+138.852821930 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.283627 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/00c8c8b9-3dab-4fde-8fa7-290140cfd81f-csi-data-dir\") pod \"csi-hostpathplugin-z76mp\" (UID: \"00c8c8b9-3dab-4fde-8fa7-290140cfd81f\") " pod="hostpath-provisioner/csi-hostpathplugin-z76mp" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.285488 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/00c8c8b9-3dab-4fde-8fa7-290140cfd81f-registration-dir\") pod \"csi-hostpathplugin-z76mp\" (UID: \"00c8c8b9-3dab-4fde-8fa7-290140cfd81f\") " pod="hostpath-provisioner/csi-hostpathplugin-z76mp" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.286024 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.290634 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47385347-ea0a-46ba-9c22-878470316668-config\") pod \"service-ca-operator-777779d784-7jdfn\" (UID: \"47385347-ea0a-46ba-9c22-878470316668\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7jdfn" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.294899 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/00c8c8b9-3dab-4fde-8fa7-290140cfd81f-socket-dir\") pod \"csi-hostpathplugin-z76mp\" (UID: \"00c8c8b9-3dab-4fde-8fa7-290140cfd81f\") " pod="hostpath-provisioner/csi-hostpathplugin-z76mp" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.295779 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/47385347-ea0a-46ba-9c22-878470316668-serving-cert\") pod \"service-ca-operator-777779d784-7jdfn\" (UID: \"47385347-ea0a-46ba-9c22-878470316668\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7jdfn" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.296941 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ffckq"] Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.296968 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8mv9d"] Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.297359 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/05ad0d6e-3a38-4afe-b144-2a3550c21799-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-ljsqd\" (UID: \"05ad0d6e-3a38-4afe-b144-2a3550c21799\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ljsqd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.306238 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b44f298a-45ed-4a54-b2f9-155e2fcf1f2a-node-bootstrap-token\") pod \"machine-config-server-cdnsn\" (UID: \"b44f298a-45ed-4a54-b2f9-155e2fcf1f2a\") " pod="openshift-machine-config-operator/machine-config-server-cdnsn" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.310553 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.310884 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b44f298a-45ed-4a54-b2f9-155e2fcf1f2a-certs\") pod \"machine-config-server-cdnsn\" (UID: \"b44f298a-45ed-4a54-b2f9-155e2fcf1f2a\") " pod="openshift-machine-config-operator/machine-config-server-cdnsn" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.314649 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47nqt\" (UniqueName: \"kubernetes.io/projected/420fa719-fac4-4ed4-ab06-f72adbdcf568-kube-api-access-47nqt\") pod \"etcd-operator-b45778765-j7swg\" (UID: \"420fa719-fac4-4ed4-ab06-f72adbdcf568\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7swg" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.343643 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e151a32d-c873-40de-8d35-0fa38739718e-apiservice-cert\") pod \"packageserver-d55dfcdfc-z5fk5\" (UID: \"e151a32d-c873-40de-8d35-0fa38739718e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z5fk5" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.344447 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74g84\" (UniqueName: \"kubernetes.io/projected/77f5ceda-2966-443e-a939-dd7408e66bdc-kube-api-access-74g84\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.344755 
4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/86289470-a077-471b-b98a-aa1f8eff9f84-metrics-tls\") pod \"dns-default-qmhqk\" (UID: \"86289470-a077-471b-b98a-aa1f8eff9f84\") " pod="openshift-dns/dns-default-qmhqk" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.348413 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.348452 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.348664 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.348809 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.348852 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-59rqt"] Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.349288 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/77f5ceda-2966-443e-a939-dd7408e66bdc-bound-sa-token\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.349378 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.349819 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 
06:46:41.351405 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4cf9844f-125e-40f1-a45c-784ea466a236-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cn5t4\" (UID: \"4cf9844f-125e-40f1-a45c-784ea466a236\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cn5t4" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.365159 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-rxxxv"] Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.371290 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86d56\" (UniqueName: \"kubernetes.io/projected/4aaef837-ec38-4e22-a3e8-a2e1b4ee71c6-kube-api-access-86d56\") pod \"control-plane-machine-set-operator-78cbb6b69f-c26v4\" (UID: \"4aaef837-ec38-4e22-a3e8-a2e1b4ee71c6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c26v4" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.371613 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.373274 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fdgfd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.373834 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pmxqk"] Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.374099 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:41 crc kubenswrapper[4681]: E1123 06:46:41.374548 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:41.874529785 +0000 UTC m=+138.944039022 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.378109 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:41 crc kubenswrapper[4681]: E1123 06:46:41.378556 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:41.878449005 +0000 UTC m=+138.947958243 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.379430 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/649f5b3b-9d0f-4c11-b4d3-5fcc9761f68a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-dl2f8\" (UID: \"649f5b3b-9d0f-4c11-b4d3-5fcc9761f68a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dl2f8" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.383014 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4e4885ac-8d00-41be-9ccb-34386e8be5f9-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5kgmj\" (UID: \"4e4885ac-8d00-41be-9ccb-34386e8be5f9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5kgmj" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.384029 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c26v4" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.410399 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dfrq\" (UniqueName: \"kubernetes.io/projected/05ad0d6e-3a38-4afe-b144-2a3550c21799-kube-api-access-4dfrq\") pod \"multus-admission-controller-857f4d67dd-ljsqd\" (UID: \"05ad0d6e-3a38-4afe-b144-2a3550c21799\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ljsqd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.414908 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bfkn6"] Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.416476 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsxts" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.421364 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72qnq"] Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.425127 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398005-5x47l"] Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.429748 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bj7t\" (UniqueName: \"kubernetes.io/projected/47385347-ea0a-46ba-9c22-878470316668-kube-api-access-8bj7t\") pod \"service-ca-operator-777779d784-7jdfn\" (UID: \"47385347-ea0a-46ba-9c22-878470316668\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7jdfn" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.451550 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk9nq\" (UniqueName: \"kubernetes.io/projected/4cf9844f-125e-40f1-a45c-784ea466a236-kube-api-access-gk9nq\") pod \"package-server-manager-789f6589d5-cn5t4\" (UID: \"4cf9844f-125e-40f1-a45c-784ea466a236\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cn5t4" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.467667 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hkqhz"] Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.474207 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qlqg\" (UniqueName: \"kubernetes.io/projected/b44f298a-45ed-4a54-b2f9-155e2fcf1f2a-kube-api-access-8qlqg\") pod \"machine-config-server-cdnsn\" (UID: \"b44f298a-45ed-4a54-b2f9-155e2fcf1f2a\") " pod="openshift-machine-config-operator/machine-config-server-cdnsn" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.480673 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:41 crc kubenswrapper[4681]: E1123 06:46:41.480983 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:41.980968259 +0000 UTC m=+139.050477496 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.481056 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:41 crc kubenswrapper[4681]: E1123 06:46:41.481604 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:41.981596256 +0000 UTC m=+139.051105493 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.491249 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khwj7\" (UniqueName: \"kubernetes.io/projected/86289470-a077-471b-b98a-aa1f8eff9f84-kube-api-access-khwj7\") pod \"dns-default-qmhqk\" (UID: \"86289470-a077-471b-b98a-aa1f8eff9f84\") " pod="openshift-dns/dns-default-qmhqk" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.507536 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv2cr\" (UniqueName: \"kubernetes.io/projected/00c8c8b9-3dab-4fde-8fa7-290140cfd81f-kube-api-access-nv2cr\") pod \"csi-hostpathplugin-z76mp\" (UID: \"00c8c8b9-3dab-4fde-8fa7-290140cfd81f\") " pod="hostpath-provisioner/csi-hostpathplugin-z76mp" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.536868 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9nq8\" (UniqueName: \"kubernetes.io/projected/e151a32d-c873-40de-8d35-0fa38739718e-kube-api-access-h9nq8\") pod \"packageserver-d55dfcdfc-z5fk5\" (UID: \"e151a32d-c873-40de-8d35-0fa38739718e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z5fk5" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.565200 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvg62\" (UniqueName: \"kubernetes.io/projected/01287236-92c0-4946-918f-bd641d4d5435-kube-api-access-kvg62\") pod \"oauth-openshift-558db77b4-cq2gd\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.586837 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-gk8jd"] Nov 23 06:46:41 crc 
kubenswrapper[4681]: I1123 06:46:41.587382 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:41 crc kubenswrapper[4681]: E1123 06:46:41.588041 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:42.088027237 +0000 UTC m=+139.157536474 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.599226 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-ljsqd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.604642 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-j7swg" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.615109 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dl2f8" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.620942 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z5fk5" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.632516 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-7jdfn" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.632600 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cn5t4" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.653039 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5kgmj" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.703858 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:41 crc kubenswrapper[4681]: E1123 06:46:41.704076 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:42.204066627 +0000 UTC m=+139.273575864 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.750585 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-cdnsn" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.755362 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-cxwjl" event={"ID":"a73787c8-407a-4e02-8c50-7205b96c76b8","Type":"ContainerStarted","Data":"b02d80e47540177a732ebfb02e86c0e7cc0590a5838fd6550e7737f509c27187"} Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.761398 4681 generic.go:334] "Generic (PLEG): container finished" podID="28a78e7d-ae79-4791-aa1f-6398f611c561" containerID="60f9e6dcdc8518ad1afeaf2ccdeaa114209c69fae8259c244ddfc76ce4f2520b" exitCode=0 Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.761442 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-42z7r" event={"ID":"28a78e7d-ae79-4791-aa1f-6398f611c561","Type":"ContainerDied","Data":"60f9e6dcdc8518ad1afeaf2ccdeaa114209c69fae8259c244ddfc76ce4f2520b"} Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.761478 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-42z7r" event={"ID":"28a78e7d-ae79-4791-aa1f-6398f611c561","Type":"ContainerStarted","Data":"2886d92c03ee28f76c49bf06be029ba843bc1d638226fddd679728afc8a3e495"} Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.763774 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gtltp" event={"ID":"d2c3c50b-3800-4f8f-9b24-3063381cfd5e","Type":"ContainerStarted","Data":"ef1038d4147024fa0335ae1b60e4e58c064e473dbc30a776427107309b0bad2b"} Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.763805 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gtltp" event={"ID":"d2c3c50b-3800-4f8f-9b24-3063381cfd5e","Type":"ContainerStarted","Data":"290066bcb612092f7d454f2c29c3708410fd2928af977c6129afee3d3507fbd8"} Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.767154 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-z76mp" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.768994 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nk54m" event={"ID":"a57b9495-9a8d-4ec8-8a4d-92220d911386","Type":"ContainerStarted","Data":"0859f21391197b0805c900d393f230deaeacbf02ebfd56a83b27fc9e3323f8ed"} Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.769030 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nk54m" event={"ID":"a57b9495-9a8d-4ec8-8a4d-92220d911386","Type":"ContainerStarted","Data":"4252dcd9d2157e65c6dd9a018da5f36eaff0bf3be2f6724b3bb865e8eebe787e"} Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.769620 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-nk54m" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.772197 4681 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-nk54m container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.772448 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-nk54m" podUID="a57b9495-9a8d-4ec8-8a4d-92220d911386" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.777360 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-b7ms9" event={"ID":"9c6f4ba4-aae8-4308-be38-b74b07116955","Type":"ContainerStarted","Data":"00518a324a11b0f5952c9aee4c0fe74c53ae767f1caa4155714728a59147163d"} Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.777388 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-b7ms9" event={"ID":"9c6f4ba4-aae8-4308-be38-b74b07116955","Type":"ContainerStarted","Data":"5ac9220ac7c5505c9b1df255ac1a5ca1f306b39598785302ccb38d5c65760d19"} Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.779163 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-qmhqk" Nov 23 06:46:41 crc kubenswrapper[4681]: W1123 06:46:41.783578 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ed3437e_7360_4cc6_a4d5_b54d2f761945.slice/crio-fc5ee57a86981fa5d5158f8b79f28a4bd93b98fd7a34051f73e8deaf393d3c16 WatchSource:0}: Error finding container fc5ee57a86981fa5d5158f8b79f28a4bd93b98fd7a34051f73e8deaf393d3c16: Status 404 returned error can't find the container with id fc5ee57a86981fa5d5158f8b79f28a4bd93b98fd7a34051f73e8deaf393d3c16 Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.783923 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mj9j9" event={"ID":"cd4e2b49-bdc7-425a-877f-74938cd8a472","Type":"ContainerStarted","Data":"ed18e95b8aad3bc5e913f7c1360885c09ce11153ecfd8083cfcc9906ad105866"} Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.783946 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mj9j9" event={"ID":"cd4e2b49-bdc7-425a-877f-74938cd8a472","Type":"ContainerStarted","Data":"ca5be2a7815e45626a3dbf8e2b67e5bde64d5197c867d6ebe79ea698784a3bc5"} Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.789252 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-qkccb" event={"ID":"e5135d02-57f8-48f3-96d3-af0fb70e8ac3","Type":"ContainerStarted","Data":"fa67c58b818845b23d359d8f8dab42e7f088cab22466b9d7bb23a6a3aa62b306"} Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.789290 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-qkccb" event={"ID":"e5135d02-57f8-48f3-96d3-af0fb70e8ac3","Type":"ContainerStarted","Data":"8027b3440bd7500c2312285ffbd2e1f470eeca3e88c342d83af3159fd5d40fce"} Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.789857 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-qkccb" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.790436 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pmxqk" event={"ID":"7d2b9e38-a7cf-43bb-aa89-861571046aee","Type":"ContainerStarted","Data":"629a9edb1dd175ab7d1078634d827d45541a4f18070deef592d70b45c5e31db5"} Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.791916 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9qp5r" event={"ID":"a8450c87-7b9b-47cf-86ce-145ef517f494","Type":"ContainerStarted","Data":"8846e11b7789beecb5ddf1e21b9ee3e33b0b36285b490f65145047885a0e98b9"} Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.792504 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9qp5r" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.798672 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8mv9d" event={"ID":"74c2583d-61ac-4c6e-8cb5-11427314ecad","Type":"ContainerStarted","Data":"66f64f9db4227e6d982ad61ab0eb2a028613380772cbec9a415511ad74749cd4"} Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.800500 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ffckq" event={"ID":"edddb554-81cd-4f1f-ad25-21dc5d5a2c35","Type":"ContainerStarted","Data":"e58d8aa8defd9e2e872c2fae050f64a4a379e4fff8b50124963a8ebcc8013db3"} Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.802729 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-5x47l" event={"ID":"882fc762-16ff-41a8-917d-e6b327a4adb5","Type":"ContainerStarted","Data":"b8fd65b704074169da9a883c74f03bdd7bff197321f0b0ca9c2dbc60aef26ea9"} Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.803696 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sqg25" event={"ID":"76f32e91-6759-4608-9f24-88ed1d5d769e","Type":"ContainerStarted","Data":"f3c307116f15682660e8c5a27c6406586c724045126cac6c05727822728ea2d8"} Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.803734 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sqg25" event={"ID":"76f32e91-6759-4608-9f24-88ed1d5d769e","Type":"ContainerStarted","Data":"fb9e832bbaeb3c719ce7d235b0560738cdf269a3abb2364bff5460152e4dd8d8"} Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.804347 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:41 crc kubenswrapper[4681]: E1123 06:46:41.805417 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:42.305404606 +0000 UTC m=+139.374913843 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.807836 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-lqxzb" event={"ID":"25f215c0-701b-4a75-9c19-6deeab862309","Type":"ContainerStarted","Data":"5ced4ccb37af57185740cf7ac1a5a07b16786b1dde18d066e7c34036fc42061e"} Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.808942 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hkqhz" event={"ID":"c09a8f6b-1519-4cc8-a1e5-ef0261619f3e","Type":"ContainerStarted","Data":"d83c3d92f0ce211efea5019049c985250ea2a1a30ebf9dd9debb18a89b0c2963"} Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.812107 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-59rqt" event={"ID":"c0e3f5d0-037c-48b9-888f-375c10e5f269","Type":"ContainerStarted","Data":"c0f6b797d3bc8af3b8450d5bedb0e98b8f33289dd08b3450a8eb4e293c5117c7"} Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.816135 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bfkn6" event={"ID":"25865701-6601-400a-8cca-606a3cabcc5d","Type":"ContainerStarted","Data":"38924f63878472df1c4369819b00082310f413615a064c7606879641cfa4662c"} Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.816563 4681 patch_prober.go:28] interesting pod/downloads-7954f5f757-qkccb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.816608 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qkccb" podUID="e5135d02-57f8-48f3-96d3-af0fb70e8ac3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.817443 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72qnq" event={"ID":"1946d763-61f9-468c-84d1-15f635ae5aa8","Type":"ContainerStarted","Data":"074a7e947054725cf28a7e625708bd39917b089acb44700ed9a9e0492ff6a4c0"} Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.818068 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rxxxv" event={"ID":"72c9ca30-e13b-48dd-9c5d-05e6dd4a3368","Type":"ContainerStarted","Data":"6c7a78ca2b1776d188baad8813cf23b6689b801f700da937b84d2c0f68259d60"} Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.824745 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-b7ms9" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.824880 4681 patch_prober.go:28] interesting pod/router-default-5444994796-b7ms9 container/router 
namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.824901 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7ms9" podUID="9c6f4ba4-aae8-4308-be38-b74b07116955" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.859569 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:41 crc kubenswrapper[4681]: I1123 06:46:41.908333 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:41 crc kubenswrapper[4681]: E1123 06:46:41.908928 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:42.408903742 +0000 UTC m=+139.478412979 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:42 crc kubenswrapper[4681]: I1123 06:46:42.001766 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhz6x"] Nov 23 06:46:42 crc kubenswrapper[4681]: I1123 06:46:42.009198 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:42 crc kubenswrapper[4681]: E1123 06:46:42.009294 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:42.509280304 +0000 UTC m=+139.578789541 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:42 crc kubenswrapper[4681]: I1123 06:46:42.009594 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:42 crc kubenswrapper[4681]: E1123 06:46:42.009836 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:42.509829192 +0000 UTC m=+139.579338429 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:42 crc kubenswrapper[4681]: I1123 06:46:42.047831 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-d7f7c"] Nov 23 06:46:42 crc kubenswrapper[4681]: I1123 06:46:42.049067 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b2dpx"] Nov 23 06:46:42 crc kubenswrapper[4681]: I1123 06:46:42.068016 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gmtff"] Nov 23 06:46:42 crc kubenswrapper[4681]: I1123 06:46:42.070615 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-hckp7"] Nov 23 06:46:42 crc kubenswrapper[4681]: I1123 06:46:42.118702 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:42 crc kubenswrapper[4681]: E1123 06:46:42.119085 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:42.619072069 +0000 UTC m=+139.688581306 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:42 crc kubenswrapper[4681]: I1123 06:46:42.177728 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g5zj2"] Nov 23 06:46:42 crc kubenswrapper[4681]: I1123 06:46:42.220000 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:42 crc kubenswrapper[4681]: E1123 06:46:42.220313 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:42.720298738 +0000 UTC m=+139.789807976 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:42 crc kubenswrapper[4681]: I1123 06:46:42.325358 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:42 crc kubenswrapper[4681]: E1123 06:46:42.326050 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:42.826033262 +0000 UTC m=+139.895542498 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:42 crc kubenswrapper[4681]: I1123 06:46:42.327579 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c26v4"] Nov 23 06:46:42 crc kubenswrapper[4681]: I1123 06:46:42.356096 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsxts"] Nov 23 06:46:42 crc kubenswrapper[4681]: I1123 06:46:42.356335 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nth4c"] Nov 23 06:46:42 crc kubenswrapper[4681]: I1123 06:46:42.415413 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9qp5r" Nov 23 06:46:42 crc kubenswrapper[4681]: I1123 06:46:42.432207 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:42 crc kubenswrapper[4681]: E1123 06:46:42.432564 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:42.932553751 +0000 UTC m=+140.002062988 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:42 crc kubenswrapper[4681]: I1123 06:46:42.532629 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:42 crc kubenswrapper[4681]: E1123 06:46:42.532842 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:43.032826055 +0000 UTC m=+140.102335293 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:42 crc kubenswrapper[4681]: I1123 06:46:42.533000 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:42 crc kubenswrapper[4681]: E1123 06:46:42.533284 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:43.033275956 +0000 UTC m=+140.102785193 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:42 crc kubenswrapper[4681]: I1123 06:46:42.639671 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:42 crc kubenswrapper[4681]: E1123 06:46:42.640154 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:43.140142 +0000 UTC m=+140.209651236 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:42 crc kubenswrapper[4681]: I1123 06:46:42.739577 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fdgfd"] Nov 23 06:46:42 crc kubenswrapper[4681]: I1123 06:46:42.745368 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:42 crc kubenswrapper[4681]: E1123 06:46:42.745623 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:43.245612243 +0000 UTC m=+140.315121480 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:42 crc kubenswrapper[4681]: I1123 06:46:42.769130 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cn5t4"] Nov 23 06:46:42 crc kubenswrapper[4681]: I1123 06:46:42.797145 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-7jdfn"] Nov 23 06:46:42 crc kubenswrapper[4681]: I1123 06:46:42.844329 4681 patch_prober.go:28] interesting pod/router-default-5444994796-b7ms9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:46:42 crc kubenswrapper[4681]: [-]has-synced failed: reason withheld Nov 23 06:46:42 crc kubenswrapper[4681]: [+]process-running ok Nov 23 06:46:42 crc kubenswrapper[4681]: healthz check failed Nov 23 06:46:42 crc kubenswrapper[4681]: I1123 06:46:42.844364 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7ms9" podUID="9c6f4ba4-aae8-4308-be38-b74b07116955" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:46:42 crc kubenswrapper[4681]: I1123 06:46:42.868355 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-nk54m" podStartSLOduration=114.86833991 podStartE2EDuration="1m54.86833991s" podCreationTimestamp="2025-11-23 06:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:42.803391393 +0000 UTC m=+139.872900630" watchObservedRunningTime="2025-11-23 06:46:42.86833991 +0000 UTC m=+139.937849197" Nov 23 06:46:42 crc kubenswrapper[4681]: I1123 06:46:42.873037 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gtltp" podStartSLOduration=114.87302516 podStartE2EDuration="1m54.87302516s" podCreationTimestamp="2025-11-23 06:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:42.865630838 +0000 UTC m=+139.935140074" watchObservedRunningTime="2025-11-23 06:46:42.87302516 +0000 UTC m=+139.942534398" Nov 23 06:46:42 crc kubenswrapper[4681]: I1123 06:46:42.873952 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dl2f8"] Nov 23 06:46:42 crc kubenswrapper[4681]: I1123 06:46:42.876190 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c26v4" event={"ID":"4aaef837-ec38-4e22-a3e8-a2e1b4ee71c6","Type":"ContainerStarted","Data":"3605efa35af353ce4643e214c3160d1b38bf1433aad68458ce55394305002e5e"} Nov 23 06:46:42 crc kubenswrapper[4681]: I1123 06:46:42.878442 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:42 crc kubenswrapper[4681]: E1123 06:46:42.878982 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:43.378968328 +0000 UTC m=+140.448477565 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:42 crc kubenswrapper[4681]: I1123 06:46:42.905116 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b2dpx" event={"ID":"f249707f-34f7-4964-9cd9-9c83df2f3056","Type":"ContainerStarted","Data":"f85069b6967c8f57b75fc407ace104dd5fefdc448dc68e0dd01e47789aff3f4c"} Nov 23 06:46:42 crc kubenswrapper[4681]: I1123 06:46:42.919689 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bfkn6" event={"ID":"25865701-6601-400a-8cca-606a3cabcc5d","Type":"ContainerStarted","Data":"a84ad9e87f5e8c8335a1da196b8d0f519bb642bdd069850e67fcc03ce3ecee46"} Nov 23 06:46:42 crc kubenswrapper[4681]: W1123 06:46:42.932093 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cf9844f_125e_40f1_a45c_784ea466a236.slice/crio-6a4ff2b994181035c2f89ae01f0dfe1a053ad13285be81f9460df39eb1bce2bd WatchSource:0}: Error finding container 6a4ff2b994181035c2f89ae01f0dfe1a053ad13285be81f9460df39eb1bce2bd: Status 404 returned error can't find the container with id 6a4ff2b994181035c2f89ae01f0dfe1a053ad13285be81f9460df39eb1bce2bd Nov 23 06:46:42 crc kubenswrapper[4681]: I1123 06:46:42.981866 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:42 crc kubenswrapper[4681]: E1123 06:46:42.982335 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:43.482324114 +0000 UTC m=+140.551833351 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.013758 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-5x47l" event={"ID":"882fc762-16ff-41a8-917d-e6b327a4adb5","Type":"ContainerStarted","Data":"2c7079e9d2755aa8d092108a943e2f2d6759a6862746e953824159a3f4a15531"} Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.016594 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cq2gd"] Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.017402 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gmtff" event={"ID":"2fcb132e-fadc-4c84-a103-2e821e006bfa","Type":"ContainerStarted","Data":"37a3438e7cacb9bfaf18537508870fd118a3465039bbe3e2f202ee355a817f03"} Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.018368 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ffckq" event={"ID":"edddb554-81cd-4f1f-ad25-21dc5d5a2c35","Type":"ContainerStarted","Data":"de41a5bd8c130b39a943dacf519c9dd4395b1938fd8294849205dcee7ac909f3"} Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.019131 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hkqhz" event={"ID":"c09a8f6b-1519-4cc8-a1e5-ef0261619f3e","Type":"ContainerStarted","Data":"95d35dcdb2a4b0021610c1758b27186921ab6ede021e04138fb6ce6a3124a467"} Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.019652 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hkqhz" Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.020202 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-hckp7" event={"ID":"3d6df87c-65e5-4899-ad0a-22e9818da7d6","Type":"ContainerStarted","Data":"3354e5d9bac77f22361d203c064641ca77da9290391ce2312f831023fc01e433"} Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.026340 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-nth4c" event={"ID":"862e3345-8b2c-4009-b50c-0fd6025ac9dc","Type":"ContainerStarted","Data":"8247ea0f3bfbf9430497e9c3bf55a80ac7fc1661969a99ebd8588e1ffc8dbd28"} Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.033289 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-lqxzb" event={"ID":"25f215c0-701b-4a75-9c19-6deeab862309","Type":"ContainerStarted","Data":"f06533ef8523fb0dd3a9505937147b705c019d1134322f275ed3b95c7a01cce9"} Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.053926 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5kgmj"] Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.056167 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" event={"ID":"f396efd2-0a8e-44bb-98c8-ad10c3383cef","Type":"ContainerStarted","Data":"fdb8bc0b46c9215235744e13d496c927e5511f1614b92f1f9dd812fd55e53bdf"} Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.063545 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hkqhz" Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.082532 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:43 crc kubenswrapper[4681]: E1123 06:46:43.082981 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:43.58296271 +0000 UTC m=+140.652471948 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.083327 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:43 crc kubenswrapper[4681]: E1123 06:46:43.084374 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:43.58436425 +0000 UTC m=+140.653873487 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.091127 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-cdnsn" event={"ID":"b44f298a-45ed-4a54-b2f9-155e2fcf1f2a","Type":"ContainerStarted","Data":"e937d9ca90848144e36c57bfbe1e316d9cc5cb7ab2388fb6d0cbd5bc12653f43"} Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.105400 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sqg25" podStartSLOduration=116.105389269 podStartE2EDuration="1m56.105389269s" podCreationTimestamp="2025-11-23 06:44:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:43.105019259 +0000 UTC m=+140.174528496" watchObservedRunningTime="2025-11-23 06:46:43.105389269 +0000 UTC m=+140.174898506" Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.109901 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gk8jd" event={"ID":"1ed3437e-7360-4cc6-a4d5-b54d2f761945","Type":"ContainerStarted","Data":"4927789ba347a0dfeb6308df6a88c126b98e040f3c4a92a0473e9d7b028d6c85"} Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.109937 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gk8jd" event={"ID":"1ed3437e-7360-4cc6-a4d5-b54d2f761945","Type":"ContainerStarted","Data":"fc5ee57a86981fa5d5158f8b79f28a4bd93b98fd7a34051f73e8deaf393d3c16"} Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.146323 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-b7ms9" podStartSLOduration=115.146312054 podStartE2EDuration="1m55.146312054s" podCreationTimestamp="2025-11-23 06:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:43.145432742 +0000 UTC m=+140.214941978" watchObservedRunningTime="2025-11-23 06:46:43.146312054 +0000 UTC m=+140.215821291" Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.185008 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:43 crc kubenswrapper[4681]: E1123 06:46:43.186773 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:43.686751575 +0000 UTC m=+140.756260812 (durationBeforeRetry 500ms). 
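
Each failed attempt above is immediately followed by a nestedpendingoperations entry that schedules the next attempt 500ms out, which is why the same UnmountVolume/MountVolume pair repeats roughly twice per second through this stretch of the log. A simplified sketch of that gate (names invented; the kubelet's real implementation also grows the delay exponentially for an operation that keeps failing):

    package main

    import (
        "fmt"
        "time"
    )

    // pendingOp records when a failed volume operation may run again.
    type pendingOp struct {
        lastError time.Time
        delay     time.Duration // 500ms in the entries above
    }

    func (op *pendingOp) check(now time.Time) error {
        next := op.lastError.Add(op.delay)
        if now.Before(next) {
            return fmt.Errorf("no retries permitted until %s (durationBeforeRetry %s)",
                next.Format(time.RFC3339Nano), op.delay)
        }
        return nil
    }

    func main() {
        op := &pendingOp{lastError: time.Now(), delay: 500 * time.Millisecond}
        fmt.Println(op.check(time.Now()))                  // too soon: error
        fmt.Println(op.check(time.Now().Add(time.Second))) // allowed: <nil>
    }
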
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.189255 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-42z7r" event={"ID":"28a78e7d-ae79-4791-aa1f-6398f611c561","Type":"ContainerStarted","Data":"b6da52f55cbfe006d0dd218cd6cb4817e781c54d8782f2779ce3e476489ff8cf"} Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.189373 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-42z7r" Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.223218 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-59rqt" event={"ID":"c0e3f5d0-037c-48b9-888f-375c10e5f269","Type":"ContainerStarted","Data":"c222f1c74fdfb1547033c2fa0f48043d2402aaac915faeb14cdfe4281f2ea38f"} Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.233717 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-qkccb" podStartSLOduration=115.233688893 podStartE2EDuration="1m55.233688893s" podCreationTimestamp="2025-11-23 06:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:43.198322686 +0000 UTC m=+140.267831923" watchObservedRunningTime="2025-11-23 06:46:43.233688893 +0000 UTC m=+140.303198120" Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.234381 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mj9j9" podStartSLOduration=116.234376313 podStartE2EDuration="1m56.234376313s" podCreationTimestamp="2025-11-23 06:44:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:43.232684464 +0000 UTC m=+140.302193701" watchObservedRunningTime="2025-11-23 06:46:43.234376313 +0000 UTC m=+140.303885550" Nov 23 06:46:43 crc kubenswrapper[4681]: W1123 06:46:43.237834 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e4885ac_8d00_41be_9ccb_34386e8be5f9.slice/crio-d83bd46ffd89b3c817c9a4000038eb4096ed42af00b5913f43dd6c1c8e11257a WatchSource:0}: Error finding container d83bd46ffd89b3c817c9a4000038eb4096ed42af00b5913f43dd6c1c8e11257a: Status 404 returned error can't find the container with id d83bd46ffd89b3c817c9a4000038eb4096ed42af00b5913f43dd6c1c8e11257a Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.240600 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhz6x" event={"ID":"373b7163-d058-419c-b4c5-b76a80f78dfa","Type":"ContainerStarted","Data":"30bb7ebec39b83aa991a4787ae2a64fe60e276470cd3e2852d21ba99a9c5aaa4"} Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.280699 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-69f744f599-pmxqk" event={"ID":"7d2b9e38-a7cf-43bb-aa89-861571046aee","Type":"ContainerStarted","Data":"7cf80d36be4760f447f03afe8d705c66a7ffa3196f27f429fa8699b56ef5e740"} Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.288202 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:43 crc kubenswrapper[4681]: E1123 06:46:43.290976 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:43.790965082 +0000 UTC m=+140.860474319 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.300310 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-ljsqd"] Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.318351 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72qnq" event={"ID":"1946d763-61f9-468c-84d1-15f635ae5aa8","Type":"ContainerStarted","Data":"3e4e70452661820a3fcc36779f02d286ef9f990c9c1022901e1a8219a1c2544f"} Nov 23 06:46:43 crc kubenswrapper[4681]: W1123 06:46:43.379523 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05ad0d6e_3a38_4afe_b144_2a3550c21799.slice/crio-833439f6c8bdc393ba4c82878eea7bc64bd006dcda259a7ad2e7243272200467 WatchSource:0}: Error finding container 833439f6c8bdc393ba4c82878eea7bc64bd006dcda259a7ad2e7243272200467: Status 404 returned error can't find the container with id 833439f6c8bdc393ba4c82878eea7bc64bd006dcda259a7ad2e7243272200467 Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.379657 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsxts" event={"ID":"ee2298af-3eaf-4b52-9783-e7887fe452f4","Type":"ContainerStarted","Data":"33af71c9a1e7ea1b56b44dd0ce641fb63f9c55cdb6ba0ecf99117e4305b9f764"} Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.382815 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9qp5r" podStartSLOduration=115.382795582 podStartE2EDuration="1m55.382795582s" podCreationTimestamp="2025-11-23 06:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:43.343562553 +0000 UTC m=+140.413071790" watchObservedRunningTime="2025-11-23 06:46:43.382795582 +0000 UTC m=+140.452304820" Nov 
23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.385822 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-cxwjl" event={"ID":"a73787c8-407a-4e02-8c50-7205b96c76b8","Type":"ContainerStarted","Data":"df22f85f2bf6e6780a1147b86ec0a58987eeafca350a27c63a3d666047788c42"} Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.388031 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-j7swg"] Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.388934 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:43 crc kubenswrapper[4681]: E1123 06:46:43.389286 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:43.88927253 +0000 UTC m=+140.958781768 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.389431 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:43 crc kubenswrapper[4681]: E1123 06:46:43.390934 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:43.890921107 +0000 UTC m=+140.960430345 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.458057 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-g5zj2" event={"ID":"dae5706a-d59e-40ba-9546-7bed3f4f77aa","Type":"ContainerStarted","Data":"35b27457d5b4e697d57a5dc872b6fc07d1b2840769712a3fe44bef9d86db17a2"} Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.476262 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-qmhqk"] Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.504197 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:43 crc kubenswrapper[4681]: E1123 06:46:43.504315 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:44.004301309 +0000 UTC m=+141.073810547 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.504869 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:43 crc kubenswrapper[4681]: E1123 06:46:43.505294 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:44.005263178 +0000 UTC m=+141.074772415 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.520890 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z5fk5"] Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.526947 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8mv9d" event={"ID":"74c2583d-61ac-4c6e-8cb5-11427314ecad","Type":"ContainerStarted","Data":"514b317ee017d34159e00ed216b1be10d4f82591f9edd33ca9c4eead9e4c191a"} Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.548672 4681 patch_prober.go:28] interesting pod/downloads-7954f5f757-qkccb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.548716 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qkccb" podUID="e5135d02-57f8-48f3-96d3-af0fb70e8ac3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.564721 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-nk54m" Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.578642 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-lqxzb" podStartSLOduration=115.578630315 podStartE2EDuration="1m55.578630315s" podCreationTimestamp="2025-11-23 06:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:43.574413992 +0000 UTC m=+140.643923229" watchObservedRunningTime="2025-11-23 06:46:43.578630315 +0000 UTC m=+140.648139552" Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.603535 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-z76mp"] Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.605016 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-42z7r" podStartSLOduration=116.605003827 podStartE2EDuration="1m56.605003827s" podCreationTimestamp="2025-11-23 06:44:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:43.601442793 +0000 UTC m=+140.670952030" watchObservedRunningTime="2025-11-23 06:46:43.605003827 +0000 UTC m=+140.674513064" Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.619625 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:43 crc kubenswrapper[4681]: E1123 06:46:43.619824 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:44.119809575 +0000 UTC m=+141.189318812 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.619908 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:43 crc kubenswrapper[4681]: E1123 06:46:43.621605 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:44.121589831 +0000 UTC m=+141.191099068 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.701604 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hkqhz" podStartSLOduration=115.701591194 podStartE2EDuration="1m55.701591194s" podCreationTimestamp="2025-11-23 06:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:43.699157681 +0000 UTC m=+140.768666919" watchObservedRunningTime="2025-11-23 06:46:43.701591194 +0000 UTC m=+140.771100430" Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.725192 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:43 crc kubenswrapper[4681]: E1123 06:46:43.725780 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-23 06:46:44.225763283 +0000 UTC m=+141.295272520 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.776215 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72qnq" podStartSLOduration=115.776201411 podStartE2EDuration="1m55.776201411s" podCreationTimestamp="2025-11-23 06:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:43.774522095 +0000 UTC m=+140.844031332" watchObservedRunningTime="2025-11-23 06:46:43.776201411 +0000 UTC m=+140.845710648" Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.827278 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.827562 4681 patch_prober.go:28] interesting pod/router-default-5444994796-b7ms9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:46:43 crc kubenswrapper[4681]: [-]has-synced failed: reason withheld Nov 23 06:46:43 crc kubenswrapper[4681]: [+]process-running ok Nov 23 06:46:43 crc kubenswrapper[4681]: healthz check failed Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.836735 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7ms9" podUID="9c6f4ba4-aae8-4308-be38-b74b07116955" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:46:43 crc kubenswrapper[4681]: E1123 06:46:43.827977 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:44.327967109 +0000 UTC m=+141.397476346 (durationBeforeRetry 500ms). 
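
The router-default startup probe output above uses the aggregated healthz convention: one [+]/[-] line per registered check, reasons withheld from unauthenticated callers, and HTTP 500 overall if any check fails, hence "HTTP probe failed with statuscode: 500". A small sketch that reproduces the format (check names taken from the log; the handler shape is illustrative, not the router's actual code):

    package main

    import (
        "fmt"
        "strings"
    )

    type check struct {
        name string
        err  error
    }

    // render builds the healthz body seen in the probe output and the
    // matching HTTP status: 200 only if every check passes.
    func render(checks []check) (int, string) {
        status := 200
        var b strings.Builder
        for _, c := range checks {
            if c.err != nil {
                status = 500
                fmt.Fprintf(&b, "[-]%s failed: reason withheld\n", c.name)
            } else {
                fmt.Fprintf(&b, "[+]%s ok\n", c.name)
            }
        }
        if status != 200 {
            b.WriteString("healthz check failed\n")
        }
        return status, b.String()
    }

    func main() {
        _, body := render([]check{
            {"backend-http", fmt.Errorf("not ready")},
            {"has-synced", fmt.Errorf("not synced")},
            {"process-running", nil},
        })
        fmt.Print(body)
    }
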
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.859202 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-59rqt" podStartSLOduration=115.85918923 podStartE2EDuration="1m55.85918923s" podCreationTimestamp="2025-11-23 06:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:43.857920351 +0000 UTC m=+140.927429588" watchObservedRunningTime="2025-11-23 06:46:43.85918923 +0000 UTC m=+140.928698467" Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.904181 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-5x47l" podStartSLOduration=103.904166141 podStartE2EDuration="1m43.904166141s" podCreationTimestamp="2025-11-23 06:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:43.903051633 +0000 UTC m=+140.972560870" watchObservedRunningTime="2025-11-23 06:46:43.904166141 +0000 UTC m=+140.973675378" Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.942883 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:43 crc kubenswrapper[4681]: E1123 06:46:43.943355 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:44.443342283 +0000 UTC m=+141.512851520 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:43 crc kubenswrapper[4681]: I1123 06:46:43.959383 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-pmxqk" podStartSLOduration=116.959370613 podStartE2EDuration="1m56.959370613s" podCreationTimestamp="2025-11-23 06:44:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:43.957687569 +0000 UTC m=+141.027196807" watchObservedRunningTime="2025-11-23 06:46:43.959370613 +0000 UTC m=+141.028879849" Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.043986 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:44 crc kubenswrapper[4681]: E1123 06:46:44.044339 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:44.544329058 +0000 UTC m=+141.613838295 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.153080 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:44 crc kubenswrapper[4681]: E1123 06:46:44.153343 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:44.65333085 +0000 UTC m=+141.722840087 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.268879 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:44 crc kubenswrapper[4681]: E1123 06:46:44.278400 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:44.778386179 +0000 UTC m=+141.847895416 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.371998 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:44 crc kubenswrapper[4681]: E1123 06:46:44.372718 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:44.872704304 +0000 UTC m=+141.942213540 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.473351 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:44 crc kubenswrapper[4681]: E1123 06:46:44.473701 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:44.973690057 +0000 UTC m=+142.043199294 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.557151 4681 generic.go:334] "Generic (PLEG): container finished" podID="f396efd2-0a8e-44bb-98c8-ad10c3383cef" containerID="ba65bf3d18c9fbee3d5706d02f02712883f5cd25fe38722e19d63399200fcbec" exitCode=0 Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.557212 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" event={"ID":"f396efd2-0a8e-44bb-98c8-ad10c3383cef","Type":"ContainerDied","Data":"ba65bf3d18c9fbee3d5706d02f02712883f5cd25fe38722e19d63399200fcbec"} Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.574823 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:44 crc kubenswrapper[4681]: E1123 06:46:44.575328 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:45.075317484 +0000 UTC m=+142.144826721 (durationBeforeRetry 500ms). 
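
The "Generic (PLEG): container finished" line above, with exitCode=0 followed by a ContainerDied event for apiserver-76f77b778f-d7f7c, is the pod lifecycle event generator observing that one of the pod's containers (evidently an init container, given the clean exit while the pod keeps progressing) ran to completion. The event shape the kubelet logs is roughly the following; local stand-in types are used so the snippet is self-contained:

    package main

    import "fmt"

    // Stand-ins modeled on the kubelet's PLEG event; field names match
    // what the log prints, types are simplified for illustration.
    type PodLifeCycleEventType string

    const (
        ContainerStarted PodLifeCycleEventType = "ContainerStarted"
        ContainerDied    PodLifeCycleEventType = "ContainerDied"
    )

    type PodLifecycleEvent struct {
        ID   string                // pod UID
        Type PodLifeCycleEventType // what changed between runtime relists
        Data interface{}           // container ID for the two types above
    }

    func main() {
        // The relisting PLEG diffs consecutive runtime snapshots and emits
        // one event per change; the sync loop logs each one as seen above.
        ev := PodLifecycleEvent{
            ID:   "f396efd2-0a8e-44bb-98c8-ad10c3383cef",
            Type: ContainerDied,
            Data: "ba65bf3d18c9fbee3d5706d02f02712883f5cd25fe38722e19d63399200fcbec",
        }
        fmt.Printf("event=%+v\n", ev)
    }
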
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.583527 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b2dpx" event={"ID":"f249707f-34f7-4964-9cd9-9c83df2f3056","Type":"ContainerStarted","Data":"1f8fe7eca422b1fe1a6bfaac740a1ddcd25bd462c6a682a56e9f8538c32f590a"} Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.617281 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gk8jd" event={"ID":"1ed3437e-7360-4cc6-a4d5-b54d2f761945","Type":"ContainerStarted","Data":"a203917cb8a69065517eb4f8c97f199b1a0ff77e262e1efee4ae08bcd2238985"} Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.624026 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-cxwjl" podStartSLOduration=116.624015762 podStartE2EDuration="1m56.624015762s" podCreationTimestamp="2025-11-23 06:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:44.570967558 +0000 UTC m=+141.640476796" watchObservedRunningTime="2025-11-23 06:46:44.624015762 +0000 UTC m=+141.693525000" Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.630569 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fdgfd" event={"ID":"6f4ea567-ba40-47b7-970f-fbcd8b9e44b6","Type":"ContainerStarted","Data":"394cbb47006b6729ec1c385f81c5e8297413d8f7790916a89492aeed03a2c222"} Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.630608 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fdgfd" event={"ID":"6f4ea567-ba40-47b7-970f-fbcd8b9e44b6","Type":"ContainerStarted","Data":"2459f9e6d0859d2ae7a53ca45d1c9235a2dbfbe4c03b25886b30dcdd220a2d0f"} Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.632749 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhz6x" event={"ID":"373b7163-d058-419c-b4c5-b76a80f78dfa","Type":"ContainerStarted","Data":"60659b9df91cb1dbf245b4ee5b971c3af09f8771dbb8ec9d9dc48d8ec65cdd8a"} Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.634520 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-nth4c" event={"ID":"862e3345-8b2c-4009-b50c-0fd6025ac9dc","Type":"ContainerStarted","Data":"cc8f51b47b314ef2cc938559cd3866f48d1f9c9d55d7b0b824d3bde4b5bfc82f"} Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.634946 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-nth4c" Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.636489 4681 patch_prober.go:28] interesting pod/console-operator-58897d9998-nth4c 
container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.636515 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-nth4c" podUID="862e3345-8b2c-4009-b50c-0fd6025ac9dc" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.666535 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-j7swg" event={"ID":"420fa719-fac4-4ed4-ab06-f72adbdcf568","Type":"ContainerStarted","Data":"8d05dcc9c0fbef463ff2c5ed3e984a521e40d4fea109e39b08c720212e4e8b9b"} Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.667290 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" event={"ID":"01287236-92c0-4946-918f-bd641d4d5435","Type":"ContainerStarted","Data":"c2ac7456350fa68a846a796529a4c0fce002a58b1b3ac2565390e78cb891ae5f"} Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.676963 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:44 crc kubenswrapper[4681]: E1123 06:46:44.679221 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:45.179210946 +0000 UTC m=+142.248720182 (durationBeforeRetry 500ms). 
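
The console-operator readiness failure above is the normal signature of a container whose process is up but not yet listening: the kubelet dials 10.217.0.11:8443, the connection is refused, and the pod stays not-ready until a later probe succeeds (as happened moments earlier for catalog-operator, whose probe status flipped from "" to "ready"). A sketch of one probe round against that endpoint; illustrative, though the kubelet really does skip certificate verification for HTTPS probes:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: time.Second,
            // HTTPS probes tolerate self-signed serving certs.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://10.217.0.11:8443/readyz")
        if err != nil {
            // While nothing is listening this prints a
            // "connect: connection refused" error like the log above.
            fmt.Println("probe failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("probe status:", resp.StatusCode)
    }
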
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.717320 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gmtff" event={"ID":"2fcb132e-fadc-4c84-a103-2e821e006bfa","Type":"ContainerStarted","Data":"cc2d6ae026a1f735e563c40beca0dd9cff9c285b19f54e7b840dd3120b50e748"} Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.721585 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dl2f8" event={"ID":"649f5b3b-9d0f-4c11-b4d3-5fcc9761f68a","Type":"ContainerStarted","Data":"e6925ebdede781be55d733c89e72a5a3b68983c63ce3f743ed26c916b2c82e38"} Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.722335 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-z76mp" event={"ID":"00c8c8b9-3dab-4fde-8fa7-290140cfd81f","Type":"ContainerStarted","Data":"69841d650324c32d42765c37aab1ab3d5cea9d0501f75db0904916ced3ceaad1"} Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.724521 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-ljsqd" event={"ID":"05ad0d6e-3a38-4afe-b144-2a3550c21799","Type":"ContainerStarted","Data":"833439f6c8bdc393ba4c82878eea7bc64bd006dcda259a7ad2e7243272200467"} Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.732041 4681 generic.go:334] "Generic (PLEG): container finished" podID="72c9ca30-e13b-48dd-9c5d-05e6dd4a3368" containerID="721d614561f109a3f723dd57c149bf269f8469548b4bdfbfcc8ddb6b7d3cf3df" exitCode=0 Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.732094 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rxxxv" event={"ID":"72c9ca30-e13b-48dd-9c5d-05e6dd4a3368","Type":"ContainerDied","Data":"721d614561f109a3f723dd57c149bf269f8469548b4bdfbfcc8ddb6b7d3cf3df"} Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.737963 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b2dpx" podStartSLOduration=116.737947166 podStartE2EDuration="1m56.737947166s" podCreationTimestamp="2025-11-23 06:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:44.684450594 +0000 UTC m=+141.753959832" watchObservedRunningTime="2025-11-23 06:46:44.737947166 +0000 UTC m=+141.807456403" Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.745107 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bfkn6" event={"ID":"25865701-6601-400a-8cca-606a3cabcc5d","Type":"ContainerStarted","Data":"5cd57caa08e3405a9b299f332f7d57f07bd08445bae8d30d02e5d9d8f351a8a8"} Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.777518 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:44 crc kubenswrapper[4681]: E1123 06:46:44.778531 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:45.278519238 +0000 UTC m=+142.348028475 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.792235 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gk8jd" podStartSLOduration=116.792221989 podStartE2EDuration="1m56.792221989s" podCreationTimestamp="2025-11-23 06:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:44.743410077 +0000 UTC m=+141.812919314" watchObservedRunningTime="2025-11-23 06:46:44.792221989 +0000 UTC m=+141.861731217" Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.793649 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-cxwjl" event={"ID":"a73787c8-407a-4e02-8c50-7205b96c76b8","Type":"ContainerStarted","Data":"8ad57ff12b51c27d825460720ce73affe0b8104db673a9f5f19fb2489b91865d"} Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.793806 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhz6x" podStartSLOduration=116.793799973 podStartE2EDuration="1m56.793799973s" podCreationTimestamp="2025-11-23 06:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:44.791878049 +0000 UTC m=+141.861387286" watchObservedRunningTime="2025-11-23 06:46:44.793799973 +0000 UTC m=+141.863309211" Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.807804 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c26v4" event={"ID":"4aaef837-ec38-4e22-a3e8-a2e1b4ee71c6","Type":"ContainerStarted","Data":"ccd952395ecc8a21ae491b12afbc0550e4efd2346f1d37fe5c5cbd4f4c49f8c5"} Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.817707 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-48jrc"] Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.818559 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-48jrc" Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.819555 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qmhqk" event={"ID":"86289470-a077-471b-b98a-aa1f8eff9f84","Type":"ContainerStarted","Data":"b34d9218bd71c96036611c625686290ff3cb2df5f6ab09647e0244ae6c828a5d"} Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.820938 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-g5zj2" event={"ID":"dae5706a-d59e-40ba-9546-7bed3f4f77aa","Type":"ContainerStarted","Data":"b8c2fc4954ced80193ea9f97a670ae5a663f6f95d6ef9170e53f12e58a44dcdf"} Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.821506 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-g5zj2" Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.822183 4681 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-g5zj2 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.822214 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-g5zj2" podUID="dae5706a-d59e-40ba-9546-7bed3f4f77aa" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Nov 23 06:46:44 crc kubenswrapper[4681]: W1123 06:46:44.827799 4681 reflector.go:561] object-"openshift-marketplace"/"community-operators-dockercfg-dmngl": failed to list *v1.Secret: secrets "community-operators-dockercfg-dmngl" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Nov 23 06:46:44 crc kubenswrapper[4681]: E1123 06:46:44.827833 4681 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"community-operators-dockercfg-dmngl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"community-operators-dockercfg-dmngl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.830324 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-hckp7" event={"ID":"3d6df87c-65e5-4899-ad0a-22e9818da7d6","Type":"ContainerStarted","Data":"03464de5549820d1bdd0accfe2373a21c1b5f4b524c7fdbb64b6e2a2784bc56c"} Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.832308 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cn5t4" event={"ID":"4cf9844f-125e-40f1-a45c-784ea466a236","Type":"ContainerStarted","Data":"94ab512b85c64d844f6b283ca9bedf94ffb56ceeffc4725ad45ec65d8fdbaa69"} Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.832329 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cn5t4" 
event={"ID":"4cf9844f-125e-40f1-a45c-784ea466a236","Type":"ContainerStarted","Data":"6a4ff2b994181035c2f89ae01f0dfe1a053ad13285be81f9460df39eb1bce2bd"} Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.833157 4681 patch_prober.go:28] interesting pod/router-default-5444994796-b7ms9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:46:44 crc kubenswrapper[4681]: [-]has-synced failed: reason withheld Nov 23 06:46:44 crc kubenswrapper[4681]: [+]process-running ok Nov 23 06:46:44 crc kubenswrapper[4681]: healthz check failed Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.833186 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7ms9" podUID="9c6f4ba4-aae8-4308-be38-b74b07116955" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.836498 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-48jrc"] Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.868533 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ffckq" event={"ID":"edddb554-81cd-4f1f-ad25-21dc5d5a2c35","Type":"ContainerStarted","Data":"93a20e3d4e1b0aa325d74f2da3d7c34307694f52ac0f1b98954c9ceaff93f852"} Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.881099 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:44 crc kubenswrapper[4681]: E1123 06:46:44.882159 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:45.382139122 +0000 UTC m=+142.451648358 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.904600 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsxts" event={"ID":"ee2298af-3eaf-4b52-9783-e7887fe452f4","Type":"ContainerStarted","Data":"9d3e0288e6d7f4904525569d221abee2c6a7be7666854fae457b1acac63eaef8"} Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.905661 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsxts" Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.912603 4681 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-hsxts container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.912633 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsxts" podUID="ee2298af-3eaf-4b52-9783-e7887fe452f4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.913201 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z5fk5" event={"ID":"e151a32d-c873-40de-8d35-0fa38739718e","Type":"ContainerStarted","Data":"62f8f87af951c14bc4e13bc3e07104dc8fe8dedd5d7471d9b4141ac5ff676f7d"} Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.914555 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-7jdfn" event={"ID":"47385347-ea0a-46ba-9c22-878470316668","Type":"ContainerStarted","Data":"e74689c2e9b09ff724d2354d3ec552f441dc7f77c0edeadf9e1a29e29700f155"} Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.914581 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-7jdfn" event={"ID":"47385347-ea0a-46ba-9c22-878470316668","Type":"ContainerStarted","Data":"beafc4b6796f4a094b48b40c8b8dd3d6b31f3e2089f34b6f74866905672fc01c"} Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.916044 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-cdnsn" event={"ID":"b44f298a-45ed-4a54-b2f9-155e2fcf1f2a","Type":"ContainerStarted","Data":"41ac57120a8863e05dbe0135d15866a49586c54afb2c7531b6a26c4041b8d87a"} Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.917241 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5kgmj" event={"ID":"4e4885ac-8d00-41be-9ccb-34386e8be5f9","Type":"ContainerStarted","Data":"d83bd46ffd89b3c817c9a4000038eb4096ed42af00b5913f43dd6c1c8e11257a"} Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.919882 4681 patch_prober.go:28] 
interesting pod/downloads-7954f5f757-qkccb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.919909 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qkccb" podUID="e5135d02-57f8-48f3-96d3-af0fb70e8ac3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.931153 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-nth4c" podStartSLOduration=116.931143259 podStartE2EDuration="1m56.931143259s" podCreationTimestamp="2025-11-23 06:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:44.85926102 +0000 UTC m=+141.928770256" watchObservedRunningTime="2025-11-23 06:46:44.931143259 +0000 UTC m=+142.000652496" Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.931400 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fdgfd" podStartSLOduration=116.931395646 podStartE2EDuration="1m56.931395646s" podCreationTimestamp="2025-11-23 06:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:44.928990898 +0000 UTC m=+141.998500135" watchObservedRunningTime="2025-11-23 06:46:44.931395646 +0000 UTC m=+142.000904874" Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.982572 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.982747 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b61682f3-e3c0-4fda-9c80-52f67f9ee9c9-catalog-content\") pod \"community-operators-48jrc\" (UID: \"b61682f3-e3c0-4fda-9c80-52f67f9ee9c9\") " pod="openshift-marketplace/community-operators-48jrc" Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.983058 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b61682f3-e3c0-4fda-9c80-52f67f9ee9c9-utilities\") pod \"community-operators-48jrc\" (UID: \"b61682f3-e3c0-4fda-9c80-52f67f9ee9c9\") " pod="openshift-marketplace/community-operators-48jrc" Nov 23 06:46:44 crc kubenswrapper[4681]: I1123 06:46:44.983157 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4svcl\" (UniqueName: \"kubernetes.io/projected/b61682f3-e3c0-4fda-9c80-52f67f9ee9c9-kube-api-access-4svcl\") pod \"community-operators-48jrc\" (UID: \"b61682f3-e3c0-4fda-9c80-52f67f9ee9c9\") " pod="openshift-marketplace/community-operators-48jrc" Nov 23 06:46:44 crc kubenswrapper[4681]: E1123 06:46:44.983683 4681 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:45.483669556 +0000 UTC m=+142.553178792 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.030593 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m56bk"] Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.031751 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-cdnsn" podStartSLOduration=7.031737662 podStartE2EDuration="7.031737662s" podCreationTimestamp="2025-11-23 06:46:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:45.031280117 +0000 UTC m=+142.100789354" watchObservedRunningTime="2025-11-23 06:46:45.031737662 +0000 UTC m=+142.101246899" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.031902 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m56bk" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.058172 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.063598 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m56bk"] Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.084788 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.084820 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b61682f3-e3c0-4fda-9c80-52f67f9ee9c9-utilities\") pod \"community-operators-48jrc\" (UID: \"b61682f3-e3c0-4fda-9c80-52f67f9ee9c9\") " pod="openshift-marketplace/community-operators-48jrc" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.084875 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4svcl\" (UniqueName: \"kubernetes.io/projected/b61682f3-e3c0-4fda-9c80-52f67f9ee9c9-kube-api-access-4svcl\") pod \"community-operators-48jrc\" (UID: \"b61682f3-e3c0-4fda-9c80-52f67f9ee9c9\") " pod="openshift-marketplace/community-operators-48jrc" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.084905 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b61682f3-e3c0-4fda-9c80-52f67f9ee9c9-catalog-content\") pod \"community-operators-48jrc\" (UID: \"b61682f3-e3c0-4fda-9c80-52f67f9ee9c9\") " pod="openshift-marketplace/community-operators-48jrc" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.085260 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b61682f3-e3c0-4fda-9c80-52f67f9ee9c9-catalog-content\") pod \"community-operators-48jrc\" (UID: \"b61682f3-e3c0-4fda-9c80-52f67f9ee9c9\") " pod="openshift-marketplace/community-operators-48jrc" Nov 23 06:46:45 crc kubenswrapper[4681]: E1123 06:46:45.085502 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:45.585492081 +0000 UTC m=+142.655001318 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.085716 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b61682f3-e3c0-4fda-9c80-52f67f9ee9c9-utilities\") pod \"community-operators-48jrc\" (UID: \"b61682f3-e3c0-4fda-9c80-52f67f9ee9c9\") " pod="openshift-marketplace/community-operators-48jrc" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.111981 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-7jdfn" podStartSLOduration=117.111970853 podStartE2EDuration="1m57.111970853s" podCreationTimestamp="2025-11-23 06:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:45.110320171 +0000 UTC m=+142.179829407" watchObservedRunningTime="2025-11-23 06:46:45.111970853 +0000 UTC m=+142.181480089" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.124824 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4svcl\" (UniqueName: \"kubernetes.io/projected/b61682f3-e3c0-4fda-9c80-52f67f9ee9c9-kube-api-access-4svcl\") pod \"community-operators-48jrc\" (UID: \"b61682f3-e3c0-4fda-9c80-52f67f9ee9c9\") " pod="openshift-marketplace/community-operators-48jrc" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.146779 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c26v4" podStartSLOduration=117.146766401 podStartE2EDuration="1m57.146766401s" podCreationTimestamp="2025-11-23 06:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:45.144838445 +0000 UTC m=+142.214347682" watchObservedRunningTime="2025-11-23 06:46:45.146766401 +0000 UTC m=+142.216275638" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.187151 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.187378 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d43c43f7-de50-40d4-8910-b502d1def095-catalog-content\") pod \"certified-operators-m56bk\" (UID: \"d43c43f7-de50-40d4-8910-b502d1def095\") " pod="openshift-marketplace/certified-operators-m56bk" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.187408 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkpb7\" (UniqueName: \"kubernetes.io/projected/d43c43f7-de50-40d4-8910-b502d1def095-kube-api-access-mkpb7\") pod \"certified-operators-m56bk\" (UID: \"d43c43f7-de50-40d4-8910-b502d1def095\") " pod="openshift-marketplace/certified-operators-m56bk" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.187474 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d43c43f7-de50-40d4-8910-b502d1def095-utilities\") pod \"certified-operators-m56bk\" (UID: \"d43c43f7-de50-40d4-8910-b502d1def095\") " pod="openshift-marketplace/certified-operators-m56bk" Nov 23 06:46:45 crc kubenswrapper[4681]: E1123 06:46:45.187577 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:45.687563898 +0000 UTC m=+142.757073135 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.222289 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsxts" podStartSLOduration=117.222274356 podStartE2EDuration="1m57.222274356s" podCreationTimestamp="2025-11-23 06:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:45.183981504 +0000 UTC m=+142.253490741" watchObservedRunningTime="2025-11-23 06:46:45.222274356 +0000 UTC m=+142.291783592" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.223977 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xzkhc"] Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.224771 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xzkhc" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.239011 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-g5zj2" podStartSLOduration=117.238998811 podStartE2EDuration="1m57.238998811s" podCreationTimestamp="2025-11-23 06:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:45.228845793 +0000 UTC m=+142.298355030" watchObservedRunningTime="2025-11-23 06:46:45.238998811 +0000 UTC m=+142.308508048" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.250849 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xzkhc"] Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.293918 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d43c43f7-de50-40d4-8910-b502d1def095-catalog-content\") pod \"certified-operators-m56bk\" (UID: \"d43c43f7-de50-40d4-8910-b502d1def095\") " pod="openshift-marketplace/certified-operators-m56bk" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.294188 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkpb7\" (UniqueName: \"kubernetes.io/projected/d43c43f7-de50-40d4-8910-b502d1def095-kube-api-access-mkpb7\") pod \"certified-operators-m56bk\" (UID: \"d43c43f7-de50-40d4-8910-b502d1def095\") " pod="openshift-marketplace/certified-operators-m56bk" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.294250 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.294273 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd9b9442-5d36-4b7c-bc39-d403156b0c66-utilities\") pod \"community-operators-xzkhc\" (UID: \"bd9b9442-5d36-4b7c-bc39-d403156b0c66\") " pod="openshift-marketplace/community-operators-xzkhc" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.294288 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d43c43f7-de50-40d4-8910-b502d1def095-utilities\") pod \"certified-operators-m56bk\" (UID: \"d43c43f7-de50-40d4-8910-b502d1def095\") " pod="openshift-marketplace/certified-operators-m56bk" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.294302 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdcfs\" (UniqueName: \"kubernetes.io/projected/bd9b9442-5d36-4b7c-bc39-d403156b0c66-kube-api-access-hdcfs\") pod \"community-operators-xzkhc\" (UID: \"bd9b9442-5d36-4b7c-bc39-d403156b0c66\") " pod="openshift-marketplace/community-operators-xzkhc" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.294345 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d43c43f7-de50-40d4-8910-b502d1def095-catalog-content\") pod \"certified-operators-m56bk\" (UID: \"d43c43f7-de50-40d4-8910-b502d1def095\") " pod="openshift-marketplace/certified-operators-m56bk" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.294350 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd9b9442-5d36-4b7c-bc39-d403156b0c66-catalog-content\") pod \"community-operators-xzkhc\" (UID: \"bd9b9442-5d36-4b7c-bc39-d403156b0c66\") " pod="openshift-marketplace/community-operators-xzkhc" Nov 23 06:46:45 crc kubenswrapper[4681]: E1123 06:46:45.294614 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:45.794604963 +0000 UTC m=+142.864114200 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.294773 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d43c43f7-de50-40d4-8910-b502d1def095-utilities\") pod \"certified-operators-m56bk\" (UID: \"d43c43f7-de50-40d4-8910-b502d1def095\") " pod="openshift-marketplace/certified-operators-m56bk" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.317315 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bfkn6" podStartSLOduration=117.317304447 podStartE2EDuration="1m57.317304447s" podCreationTimestamp="2025-11-23 06:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:45.316325595 +0000 UTC m=+142.385834833" watchObservedRunningTime="2025-11-23 06:46:45.317304447 +0000 UTC m=+142.386813674" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.337520 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkpb7\" (UniqueName: \"kubernetes.io/projected/d43c43f7-de50-40d4-8910-b502d1def095-kube-api-access-mkpb7\") pod \"certified-operators-m56bk\" (UID: \"d43c43f7-de50-40d4-8910-b502d1def095\") " pod="openshift-marketplace/certified-operators-m56bk" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.362882 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ffckq" podStartSLOduration=117.362871424 podStartE2EDuration="1m57.362871424s" podCreationTimestamp="2025-11-23 06:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:45.360762125 +0000 UTC m=+142.430271363" watchObservedRunningTime="2025-11-23 06:46:45.362871424 +0000 UTC m=+142.432380660" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.394871 4681 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.395094 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd9b9442-5d36-4b7c-bc39-d403156b0c66-utilities\") pod \"community-operators-xzkhc\" (UID: \"bd9b9442-5d36-4b7c-bc39-d403156b0c66\") " pod="openshift-marketplace/community-operators-xzkhc" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.395115 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdcfs\" (UniqueName: \"kubernetes.io/projected/bd9b9442-5d36-4b7c-bc39-d403156b0c66-kube-api-access-hdcfs\") pod \"community-operators-xzkhc\" (UID: \"bd9b9442-5d36-4b7c-bc39-d403156b0c66\") " pod="openshift-marketplace/community-operators-xzkhc" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.395155 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd9b9442-5d36-4b7c-bc39-d403156b0c66-catalog-content\") pod \"community-operators-xzkhc\" (UID: \"bd9b9442-5d36-4b7c-bc39-d403156b0c66\") " pod="openshift-marketplace/community-operators-xzkhc" Nov 23 06:46:45 crc kubenswrapper[4681]: E1123 06:46:45.395204 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:45.895193324 +0000 UTC m=+142.964702561 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.395511 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd9b9442-5d36-4b7c-bc39-d403156b0c66-utilities\") pod \"community-operators-xzkhc\" (UID: \"bd9b9442-5d36-4b7c-bc39-d403156b0c66\") " pod="openshift-marketplace/community-operators-xzkhc" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.395625 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd9b9442-5d36-4b7c-bc39-d403156b0c66-catalog-content\") pod \"community-operators-xzkhc\" (UID: \"bd9b9442-5d36-4b7c-bc39-d403156b0c66\") " pod="openshift-marketplace/community-operators-xzkhc" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.396098 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-hckp7" podStartSLOduration=7.396085571 podStartE2EDuration="7.396085571s" podCreationTimestamp="2025-11-23 06:46:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:45.394887516 +0000 UTC m=+142.464396753" watchObservedRunningTime="2025-11-23 06:46:45.396085571 +0000 UTC m=+142.465594808" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.411832 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jszjx"] Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.412654 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jszjx" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.412894 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m56bk" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.436241 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jszjx"] Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.466983 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdcfs\" (UniqueName: \"kubernetes.io/projected/bd9b9442-5d36-4b7c-bc39-d403156b0c66-kube-api-access-hdcfs\") pod \"community-operators-xzkhc\" (UID: \"bd9b9442-5d36-4b7c-bc39-d403156b0c66\") " pod="openshift-marketplace/community-operators-xzkhc" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.499067 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.499140 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cc72\" (UniqueName: \"kubernetes.io/projected/bcb481cb-7b55-4540-9e64-44a893c3d3f7-kube-api-access-6cc72\") pod \"certified-operators-jszjx\" (UID: \"bcb481cb-7b55-4540-9e64-44a893c3d3f7\") " pod="openshift-marketplace/certified-operators-jszjx" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.499192 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb481cb-7b55-4540-9e64-44a893c3d3f7-utilities\") pod \"certified-operators-jszjx\" (UID: \"bcb481cb-7b55-4540-9e64-44a893c3d3f7\") " pod="openshift-marketplace/certified-operators-jszjx" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.499277 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb481cb-7b55-4540-9e64-44a893c3d3f7-catalog-content\") pod \"certified-operators-jszjx\" (UID: \"bcb481cb-7b55-4540-9e64-44a893c3d3f7\") " pod="openshift-marketplace/certified-operators-jszjx" Nov 23 06:46:45 crc kubenswrapper[4681]: E1123 06:46:45.500426 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:46.000414496 +0000 UTC m=+143.069923734 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.600406 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.600820 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cc72\" (UniqueName: \"kubernetes.io/projected/bcb481cb-7b55-4540-9e64-44a893c3d3f7-kube-api-access-6cc72\") pod \"certified-operators-jszjx\" (UID: \"bcb481cb-7b55-4540-9e64-44a893c3d3f7\") " pod="openshift-marketplace/certified-operators-jszjx" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.600862 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb481cb-7b55-4540-9e64-44a893c3d3f7-utilities\") pod \"certified-operators-jszjx\" (UID: \"bcb481cb-7b55-4540-9e64-44a893c3d3f7\") " pod="openshift-marketplace/certified-operators-jszjx" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.600927 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb481cb-7b55-4540-9e64-44a893c3d3f7-catalog-content\") pod \"certified-operators-jszjx\" (UID: \"bcb481cb-7b55-4540-9e64-44a893c3d3f7\") " pod="openshift-marketplace/certified-operators-jszjx" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.601265 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb481cb-7b55-4540-9e64-44a893c3d3f7-catalog-content\") pod \"certified-operators-jszjx\" (UID: \"bcb481cb-7b55-4540-9e64-44a893c3d3f7\") " pod="openshift-marketplace/certified-operators-jszjx" Nov 23 06:46:45 crc kubenswrapper[4681]: E1123 06:46:45.601325 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:46.101313286 +0000 UTC m=+143.170822513 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.601754 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb481cb-7b55-4540-9e64-44a893c3d3f7-utilities\") pod \"certified-operators-jszjx\" (UID: \"bcb481cb-7b55-4540-9e64-44a893c3d3f7\") " pod="openshift-marketplace/certified-operators-jszjx" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.664591 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cc72\" (UniqueName: \"kubernetes.io/projected/bcb481cb-7b55-4540-9e64-44a893c3d3f7-kube-api-access-6cc72\") pod \"certified-operators-jszjx\" (UID: \"bcb481cb-7b55-4540-9e64-44a893c3d3f7\") " pod="openshift-marketplace/certified-operators-jszjx" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.702869 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:45 crc kubenswrapper[4681]: E1123 06:46:45.703130 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:46.203120353 +0000 UTC m=+143.272629589 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.752708 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jszjx" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.804247 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:45 crc kubenswrapper[4681]: E1123 06:46:45.804514 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:46.304491825 +0000 UTC m=+143.374001063 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.826620 4681 patch_prober.go:28] interesting pod/router-default-5444994796-b7ms9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:46:45 crc kubenswrapper[4681]: [-]has-synced failed: reason withheld Nov 23 06:46:45 crc kubenswrapper[4681]: [+]process-running ok Nov 23 06:46:45 crc kubenswrapper[4681]: healthz check failed Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.826666 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7ms9" podUID="9c6f4ba4-aae8-4308-be38-b74b07116955" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.869378 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.873617 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-48jrc" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.884672 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xzkhc" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.907995 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:45 crc kubenswrapper[4681]: E1123 06:46:45.908282 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:46.408263597 +0000 UTC m=+143.477772834 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.954939 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8mv9d" event={"ID":"74c2583d-61ac-4c6e-8cb5-11427314ecad","Type":"ContainerStarted","Data":"2f5f09eac890fecd8af888e69c47db9d610ec7745fcd44f14c3765f4b425409b"} Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.980420 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dl2f8" event={"ID":"649f5b3b-9d0f-4c11-b4d3-5fcc9761f68a","Type":"ContainerStarted","Data":"f509b08769b72ace75b052b14d0bbf5b9785e071a1053186e66b89850f523f34"} Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.991263 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z5fk5" event={"ID":"e151a32d-c873-40de-8d35-0fa38739718e","Type":"ContainerStarted","Data":"0d87c8ffa87bc320463fe356e4069204be26f38021f43d5321d01cd3705bfcc2"} Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.991880 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z5fk5" Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.997867 4681 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-z5fk5 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" start-of-body= Nov 23 06:46:45 crc kubenswrapper[4681]: I1123 06:46:45.997894 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z5fk5" podUID="e151a32d-c873-40de-8d35-0fa38739718e" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.009507 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:46 crc kubenswrapper[4681]: E1123 06:46:46.009943 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:46.509931591 +0000 UTC m=+143.579440828 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.013828 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" event={"ID":"01287236-92c0-4946-918f-bd641d4d5435","Type":"ContainerStarted","Data":"02359d2b646f99870bbc17f25464f290575aacacc0b4ee3c5f21b1e99192a79c"} Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.017681 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.046020 4681 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-cq2gd container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.19:6443/healthz\": dial tcp 10.217.0.19:6443: connect: connection refused" start-of-body= Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.048650 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" podUID="01287236-92c0-4946-918f-bd641d4d5435" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.19:6443/healthz\": dial tcp 10.217.0.19:6443: connect: connection refused" Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.063739 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-j7swg" event={"ID":"420fa719-fac4-4ed4-ab06-f72adbdcf568","Type":"ContainerStarted","Data":"d9acf40ccc57e4a6c50799d766a807ff518f6f5bdae073bad716c6573dc993de"} Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.083455 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rxxxv" event={"ID":"72c9ca30-e13b-48dd-9c5d-05e6dd4a3368","Type":"ContainerStarted","Data":"cd6239ec25d820e79c7acd3b12f2def6499d183375e9eea39f464217d66588ad"} Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.104962 4681 generic.go:334] "Generic (PLEG): container finished" podID="882fc762-16ff-41a8-917d-e6b327a4adb5" containerID="2c7079e9d2755aa8d092108a943e2f2d6759a6862746e953824159a3f4a15531" exitCode=0 Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.105019 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-5x47l" event={"ID":"882fc762-16ff-41a8-917d-e6b327a4adb5","Type":"ContainerDied","Data":"2c7079e9d2755aa8d092108a943e2f2d6759a6862746e953824159a3f4a15531"} Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.114712 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:46 crc kubenswrapper[4681]: E1123 06:46:46.117307 4681 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:46.617296056 +0000 UTC m=+143.686805293 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.127144 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cn5t4" event={"ID":"4cf9844f-125e-40f1-a45c-784ea466a236","Type":"ContainerStarted","Data":"fe0f4bf5136d8c4028b5dcb2ea2ef7978273d13b07782a0ab906de0140cb198f"} Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.127500 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cn5t4" Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.142968 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" event={"ID":"f396efd2-0a8e-44bb-98c8-ad10c3383cef","Type":"ContainerStarted","Data":"3aed00a28204fd64df4316a513b6ea15b7d9b53a817ca142f4707f1174dff9fb"} Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.151668 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5kgmj" event={"ID":"4e4885ac-8d00-41be-9ccb-34386e8be5f9","Type":"ContainerStarted","Data":"9e7bb01cf77e1012328ec96ff645bba4d28dd1b6c155435c6323a8817cab08a2"} Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.151690 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5kgmj" event={"ID":"4e4885ac-8d00-41be-9ccb-34386e8be5f9","Type":"ContainerStarted","Data":"c67711ebb348c00583fde03d70ef795805157c97695de65d7b09a252636bf4d5"} Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.160876 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dl2f8" podStartSLOduration=118.160863342 podStartE2EDuration="1m58.160863342s" podCreationTimestamp="2025-11-23 06:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:46.143117945 +0000 UTC m=+143.212627181" watchObservedRunningTime="2025-11-23 06:46:46.160863342 +0000 UTC m=+143.230372578" Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.161691 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qmhqk" event={"ID":"86289470-a077-471b-b98a-aa1f8eff9f84","Type":"ContainerStarted","Data":"42a3627932235d75955660c989d1564daaf2d26324f11d01d3f924812ac890cf"} Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.161710 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-qmhqk" Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.161708 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-8mv9d" 
podStartSLOduration=118.161703229 podStartE2EDuration="1m58.161703229s" podCreationTimestamp="2025-11-23 06:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:46.048322507 +0000 UTC m=+143.117831745" watchObservedRunningTime="2025-11-23 06:46:46.161703229 +0000 UTC m=+143.231212467" Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.171970 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-ljsqd" event={"ID":"05ad0d6e-3a38-4afe-b144-2a3550c21799","Type":"ContainerStarted","Data":"5b1498d75d54fc49876b525e7c078748bbe1ae95f1d2250889ddfe0e11226289"} Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.171999 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-ljsqd" event={"ID":"05ad0d6e-3a38-4afe-b144-2a3550c21799","Type":"ContainerStarted","Data":"bd31b2b7e6781825a64027c57e2117147bbdabf8cebef073e041d61d4b7767e9"} Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.193714 4681 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-g5zj2 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.193748 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-g5zj2" podUID="dae5706a-d59e-40ba-9546-7bed3f4f77aa" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.193901 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gmtff" event={"ID":"2fcb132e-fadc-4c84-a103-2e821e006bfa","Type":"ContainerStarted","Data":"8b8a65a04713d3d1932fbde8005849490f1ac2797cabca87e30d77451c693ce7"} Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.207759 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsxts" Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.211170 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-42z7r" Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.215886 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:46 crc kubenswrapper[4681]: E1123 06:46:46.216048 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:46.716031995 +0000 UTC m=+143.785541232 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.216328 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:46 crc kubenswrapper[4681]: E1123 06:46:46.217957 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:46.717949671 +0000 UTC m=+143.787458909 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.308728 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" podStartSLOduration=119.308714097 podStartE2EDuration="1m59.308714097s" podCreationTimestamp="2025-11-23 06:44:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:46.308011217 +0000 UTC m=+143.377520455" watchObservedRunningTime="2025-11-23 06:46:46.308714097 +0000 UTC m=+143.378223334" Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.317332 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:46 crc kubenswrapper[4681]: E1123 06:46:46.317584 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:46.817567549 +0000 UTC m=+143.887076785 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.321664 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:46 crc kubenswrapper[4681]: E1123 06:46:46.321843 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:46.821831091 +0000 UTC m=+143.891340327 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.386870 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z5fk5" podStartSLOduration=118.386855601 podStartE2EDuration="1m58.386855601s" podCreationTimestamp="2025-11-23 06:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:46.356529655 +0000 UTC m=+143.426038892" watchObservedRunningTime="2025-11-23 06:46:46.386855601 +0000 UTC m=+143.456364838" Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.387279 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-qmhqk" podStartSLOduration=8.387275626 podStartE2EDuration="8.387275626s" podCreationTimestamp="2025-11-23 06:46:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:46.386418925 +0000 UTC m=+143.455928163" watchObservedRunningTime="2025-11-23 06:46:46.387275626 +0000 UTC m=+143.456784863" Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.422916 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:46 crc kubenswrapper[4681]: E1123 06:46:46.423304 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2025-11-23 06:46:46.923286041 +0000 UTC m=+143.992795278 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.522203 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5kgmj" podStartSLOduration=118.522188014 podStartE2EDuration="1m58.522188014s" podCreationTimestamp="2025-11-23 06:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:46.521408862 +0000 UTC m=+143.590918099" watchObservedRunningTime="2025-11-23 06:46:46.522188014 +0000 UTC m=+143.591697252" Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.523732 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cn5t4" podStartSLOduration=118.523723208 podStartE2EDuration="1m58.523723208s" podCreationTimestamp="2025-11-23 06:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:46.485488406 +0000 UTC m=+143.554997643" watchObservedRunningTime="2025-11-23 06:46:46.523723208 +0000 UTC m=+143.593232445" Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.524259 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:46 crc kubenswrapper[4681]: E1123 06:46:46.524593 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:47.024584486 +0000 UTC m=+144.094093723 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.626415 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:46 crc kubenswrapper[4681]: E1123 06:46:46.627901 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:47.127869418 +0000 UTC m=+144.197378655 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.627960 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:46 crc kubenswrapper[4681]: E1123 06:46:46.628326 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:47.128318908 +0000 UTC m=+144.197828145 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.629491 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gmtff" podStartSLOduration=119.629478389 podStartE2EDuration="1m59.629478389s" podCreationTimestamp="2025-11-23 06:44:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:46.628224399 +0000 UTC m=+143.697733635" watchObservedRunningTime="2025-11-23 06:46:46.629478389 +0000 UTC m=+143.698987627" Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.707152 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rxxxv" podStartSLOduration=118.707138895 podStartE2EDuration="1m58.707138895s" podCreationTimestamp="2025-11-23 06:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:46.680329839 +0000 UTC m=+143.749839077" watchObservedRunningTime="2025-11-23 06:46:46.707138895 +0000 UTC m=+143.776648133" Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.729678 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:46 crc kubenswrapper[4681]: E1123 06:46:46.730131 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:47.230119442 +0000 UTC m=+144.299628679 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.737398 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-ljsqd" podStartSLOduration=118.737383317 podStartE2EDuration="1m58.737383317s" podCreationTimestamp="2025-11-23 06:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:46.710677817 +0000 UTC m=+143.780187054" watchObservedRunningTime="2025-11-23 06:46:46.737383317 +0000 UTC m=+143.806892554" Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.826677 4681 patch_prober.go:28] interesting pod/router-default-5444994796-b7ms9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:46:46 crc kubenswrapper[4681]: [-]has-synced failed: reason withheld Nov 23 06:46:46 crc kubenswrapper[4681]: [+]process-running ok Nov 23 06:46:46 crc kubenswrapper[4681]: healthz check failed Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.826726 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7ms9" podUID="9c6f4ba4-aae8-4308-be38-b74b07116955" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.832351 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:46 crc kubenswrapper[4681]: E1123 06:46:46.832757 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:47.332746439 +0000 UTC m=+144.402255676 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.925255 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-j7swg" podStartSLOduration=118.925241055 podStartE2EDuration="1m58.925241055s" podCreationTimestamp="2025-11-23 06:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:46.767736706 +0000 UTC m=+143.837245943" watchObservedRunningTime="2025-11-23 06:46:46.925241055 +0000 UTC m=+143.994750292" Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.926107 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m56bk"] Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.933224 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:46 crc kubenswrapper[4681]: E1123 06:46:46.933397 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:47.433373934 +0000 UTC m=+144.502883172 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:46 crc kubenswrapper[4681]: I1123 06:46:46.933498 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:46 crc kubenswrapper[4681]: E1123 06:46:46.933783 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:47.433774982 +0000 UTC m=+144.503284220 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:46 crc kubenswrapper[4681]: W1123 06:46:46.934415 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd43c43f7_de50_40d4_8910_b502d1def095.slice/crio-61d7e560a60cbdae37710cd9ca9fdc30d2980ace0a168e65de5cf07340fb1d90 WatchSource:0}: Error finding container 61d7e560a60cbdae37710cd9ca9fdc30d2980ace0a168e65de5cf07340fb1d90: Status 404 returned error can't find the container with id 61d7e560a60cbdae37710cd9ca9fdc30d2980ace0a168e65de5cf07340fb1d90 Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.021025 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fmqjr"] Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.021861 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fmqjr" Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.032486 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.034108 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:47 crc kubenswrapper[4681]: E1123 06:46:47.034310 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:47.534292511 +0000 UTC m=+144.603801748 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.052049 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fmqjr"] Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.135219 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.135337 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d106e4dc-f7ce-4270-9229-573ec5586711-utilities\") pod \"redhat-marketplace-fmqjr\" (UID: \"d106e4dc-f7ce-4270-9229-573ec5586711\") " pod="openshift-marketplace/redhat-marketplace-fmqjr" Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.135370 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d106e4dc-f7ce-4270-9229-573ec5586711-catalog-content\") pod \"redhat-marketplace-fmqjr\" (UID: \"d106e4dc-f7ce-4270-9229-573ec5586711\") " pod="openshift-marketplace/redhat-marketplace-fmqjr" Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.135390 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q28s\" (UniqueName: \"kubernetes.io/projected/d106e4dc-f7ce-4270-9229-573ec5586711-kube-api-access-6q28s\") pod \"redhat-marketplace-fmqjr\" (UID: \"d106e4dc-f7ce-4270-9229-573ec5586711\") " pod="openshift-marketplace/redhat-marketplace-fmqjr" Nov 23 06:46:47 crc kubenswrapper[4681]: E1123 06:46:47.135661 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:47.63565086 +0000 UTC m=+144.705160096 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.193398 4681 patch_prober.go:28] interesting pod/console-operator-58897d9998-nth4c container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.193503 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-nth4c" podUID="862e3345-8b2c-4009-b50c-0fd6025ac9dc" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.199209 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" event={"ID":"f396efd2-0a8e-44bb-98c8-ad10c3383cef","Type":"ContainerStarted","Data":"06473d5a01f4aa0ee44fd0c1f11c891e3ce0701991b4fea9fce51394c0278cd4"} Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.201106 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m56bk" event={"ID":"d43c43f7-de50-40d4-8910-b502d1def095","Type":"ContainerStarted","Data":"bb00de579f3abfda3e67c0eb12f81117dc1aac204ac568a514c7c1e3176ff8c7"} Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.201133 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m56bk" event={"ID":"d43c43f7-de50-40d4-8910-b502d1def095","Type":"ContainerStarted","Data":"61d7e560a60cbdae37710cd9ca9fdc30d2980ace0a168e65de5cf07340fb1d90"} Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.202773 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qmhqk" event={"ID":"86289470-a077-471b-b98a-aa1f8eff9f84","Type":"ContainerStarted","Data":"c67085f8b0ad1139a664401bf199a6c86365c2508d76b5e46c09d541731adf84"} Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.204816 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-z76mp" event={"ID":"00c8c8b9-3dab-4fde-8fa7-290140cfd81f","Type":"ContainerStarted","Data":"f4cb31e3beb810e7798d437397815ea2c2963f57e2442759d1b35803aae920c1"} Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.206139 4681 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-g5zj2 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.206179 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-g5zj2" podUID="dae5706a-d59e-40ba-9546-7bed3f4f77aa" containerName="marketplace-operator" probeResult="failure" 
output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.235706 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.235918 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d106e4dc-f7ce-4270-9229-573ec5586711-utilities\") pod \"redhat-marketplace-fmqjr\" (UID: \"d106e4dc-f7ce-4270-9229-573ec5586711\") " pod="openshift-marketplace/redhat-marketplace-fmqjr" Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.235947 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d106e4dc-f7ce-4270-9229-573ec5586711-catalog-content\") pod \"redhat-marketplace-fmqjr\" (UID: \"d106e4dc-f7ce-4270-9229-573ec5586711\") " pod="openshift-marketplace/redhat-marketplace-fmqjr" Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.235967 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q28s\" (UniqueName: \"kubernetes.io/projected/d106e4dc-f7ce-4270-9229-573ec5586711-kube-api-access-6q28s\") pod \"redhat-marketplace-fmqjr\" (UID: \"d106e4dc-f7ce-4270-9229-573ec5586711\") " pod="openshift-marketplace/redhat-marketplace-fmqjr" Nov 23 06:46:47 crc kubenswrapper[4681]: E1123 06:46:47.236339 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:47.736327417 +0000 UTC m=+144.805836654 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.236647 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d106e4dc-f7ce-4270-9229-573ec5586711-utilities\") pod \"redhat-marketplace-fmqjr\" (UID: \"d106e4dc-f7ce-4270-9229-573ec5586711\") " pod="openshift-marketplace/redhat-marketplace-fmqjr" Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.236833 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d106e4dc-f7ce-4270-9229-573ec5586711-catalog-content\") pod \"redhat-marketplace-fmqjr\" (UID: \"d106e4dc-f7ce-4270-9229-573ec5586711\") " pod="openshift-marketplace/redhat-marketplace-fmqjr" Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.268982 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-nth4c" Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.276413 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q28s\" (UniqueName: \"kubernetes.io/projected/d106e4dc-f7ce-4270-9229-573ec5586711-kube-api-access-6q28s\") pod \"redhat-marketplace-fmqjr\" (UID: \"d106e4dc-f7ce-4270-9229-573ec5586711\") " pod="openshift-marketplace/redhat-marketplace-fmqjr" Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.282125 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" podStartSLOduration=120.282114752 podStartE2EDuration="2m0.282114752s" podCreationTimestamp="2025-11-23 06:44:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:47.279439783 +0000 UTC m=+144.348949020" watchObservedRunningTime="2025-11-23 06:46:47.282114752 +0000 UTC m=+144.351623989" Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.284314 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jszjx"] Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.339637 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.346195 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fmqjr" Nov 23 06:46:47 crc kubenswrapper[4681]: E1123 06:46:47.350951 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-23 06:46:47.850917577 +0000 UTC m=+144.920426814 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.403342 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-48jrc"] Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.434562 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bbdhw"] Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.438995 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bbdhw" Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.439118 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bbdhw"] Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.440529 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:47 crc kubenswrapper[4681]: E1123 06:46:47.440977 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:47.940955448 +0000 UTC m=+145.010464686 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.510595 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xzkhc"] Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.549151 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.549207 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/233c1d06-f0dd-46c4-8b90-e213255bf126-catalog-content\") pod \"redhat-marketplace-bbdhw\" (UID: \"233c1d06-f0dd-46c4-8b90-e213255bf126\") " pod="openshift-marketplace/redhat-marketplace-bbdhw" Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.549300 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrs9n\" (UniqueName: \"kubernetes.io/projected/233c1d06-f0dd-46c4-8b90-e213255bf126-kube-api-access-vrs9n\") pod \"redhat-marketplace-bbdhw\" (UID: \"233c1d06-f0dd-46c4-8b90-e213255bf126\") " pod="openshift-marketplace/redhat-marketplace-bbdhw" Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.549326 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/233c1d06-f0dd-46c4-8b90-e213255bf126-utilities\") pod \"redhat-marketplace-bbdhw\" (UID: \"233c1d06-f0dd-46c4-8b90-e213255bf126\") " pod="openshift-marketplace/redhat-marketplace-bbdhw" Nov 23 06:46:47 crc kubenswrapper[4681]: E1123 06:46:47.549697 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:48.049681669 +0000 UTC m=+145.119190906 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.650550 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.650852 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrs9n\" (UniqueName: \"kubernetes.io/projected/233c1d06-f0dd-46c4-8b90-e213255bf126-kube-api-access-vrs9n\") pod \"redhat-marketplace-bbdhw\" (UID: \"233c1d06-f0dd-46c4-8b90-e213255bf126\") " pod="openshift-marketplace/redhat-marketplace-bbdhw" Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.650882 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/233c1d06-f0dd-46c4-8b90-e213255bf126-utilities\") pod \"redhat-marketplace-bbdhw\" (UID: \"233c1d06-f0dd-46c4-8b90-e213255bf126\") " pod="openshift-marketplace/redhat-marketplace-bbdhw" Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.650931 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/233c1d06-f0dd-46c4-8b90-e213255bf126-catalog-content\") pod \"redhat-marketplace-bbdhw\" (UID: \"233c1d06-f0dd-46c4-8b90-e213255bf126\") " pod="openshift-marketplace/redhat-marketplace-bbdhw" Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.651248 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/233c1d06-f0dd-46c4-8b90-e213255bf126-catalog-content\") pod \"redhat-marketplace-bbdhw\" (UID: \"233c1d06-f0dd-46c4-8b90-e213255bf126\") " pod="openshift-marketplace/redhat-marketplace-bbdhw" Nov 23 06:46:47 crc kubenswrapper[4681]: E1123 06:46:47.651308 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:48.151294709 +0000 UTC m=+145.220803935 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.651694 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/233c1d06-f0dd-46c4-8b90-e213255bf126-utilities\") pod \"redhat-marketplace-bbdhw\" (UID: \"233c1d06-f0dd-46c4-8b90-e213255bf126\") " pod="openshift-marketplace/redhat-marketplace-bbdhw" Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.716505 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrs9n\" (UniqueName: \"kubernetes.io/projected/233c1d06-f0dd-46c4-8b90-e213255bf126-kube-api-access-vrs9n\") pod \"redhat-marketplace-bbdhw\" (UID: \"233c1d06-f0dd-46c4-8b90-e213255bf126\") " pod="openshift-marketplace/redhat-marketplace-bbdhw" Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.752426 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:47 crc kubenswrapper[4681]: E1123 06:46:47.752731 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:48.252715834 +0000 UTC m=+145.322225072 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.798931 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bbdhw" Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.826676 4681 patch_prober.go:28] interesting pod/router-default-5444994796-b7ms9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:46:47 crc kubenswrapper[4681]: [-]has-synced failed: reason withheld Nov 23 06:46:47 crc kubenswrapper[4681]: [+]process-running ok Nov 23 06:46:47 crc kubenswrapper[4681]: healthz check failed Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.826889 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7ms9" podUID="9c6f4ba4-aae8-4308-be38-b74b07116955" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.829537 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.830029 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.834990 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.843326 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.856361 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:47 crc kubenswrapper[4681]: E1123 06:46:47.856670 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:48.356655635 +0000 UTC m=+145.426164871 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.885013 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.958701 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0ea4db8e-ece7-4de1-aff2-1023fc6763df-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0ea4db8e-ece7-4de1-aff2-1023fc6763df\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.958923 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ea4db8e-ece7-4de1-aff2-1023fc6763df-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0ea4db8e-ece7-4de1-aff2-1023fc6763df\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 06:46:47 crc kubenswrapper[4681]: I1123 06:46:47.958949 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:47 crc kubenswrapper[4681]: E1123 06:46:47.959184 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:48.459174548 +0000 UTC m=+145.528683785 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.009591 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nqkpz"] Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.010409 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nqkpz" Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.012864 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.043085 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nqkpz"] Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.063505 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:48 crc kubenswrapper[4681]: E1123 06:46:48.063863 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:48.563849386 +0000 UTC m=+145.633358623 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.063996 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdj9b\" (UniqueName: \"kubernetes.io/projected/fdfd882e-f012-452f-8709-32ddb2ddb019-kube-api-access-tdj9b\") pod \"redhat-operators-nqkpz\" (UID: \"fdfd882e-f012-452f-8709-32ddb2ddb019\") " pod="openshift-marketplace/redhat-operators-nqkpz" Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.064038 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ea4db8e-ece7-4de1-aff2-1023fc6763df-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0ea4db8e-ece7-4de1-aff2-1023fc6763df\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.064059 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.064117 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdfd882e-f012-452f-8709-32ddb2ddb019-utilities\") pod \"redhat-operators-nqkpz\" (UID: \"fdfd882e-f012-452f-8709-32ddb2ddb019\") " pod="openshift-marketplace/redhat-operators-nqkpz" Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.064162 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/0ea4db8e-ece7-4de1-aff2-1023fc6763df-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0ea4db8e-ece7-4de1-aff2-1023fc6763df\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.064179 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdfd882e-f012-452f-8709-32ddb2ddb019-catalog-content\") pod \"redhat-operators-nqkpz\" (UID: \"fdfd882e-f012-452f-8709-32ddb2ddb019\") " pod="openshift-marketplace/redhat-operators-nqkpz" Nov 23 06:46:48 crc kubenswrapper[4681]: E1123 06:46:48.064626 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:48.564618531 +0000 UTC m=+145.634127767 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.064663 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0ea4db8e-ece7-4de1-aff2-1023fc6763df-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0ea4db8e-ece7-4de1-aff2-1023fc6763df\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.066151 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-5x47l" Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.091370 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ea4db8e-ece7-4de1-aff2-1023fc6763df-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0ea4db8e-ece7-4de1-aff2-1023fc6763df\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.137157 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.164737 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/882fc762-16ff-41a8-917d-e6b327a4adb5-config-volume\") pod \"882fc762-16ff-41a8-917d-e6b327a4adb5\" (UID: \"882fc762-16ff-41a8-917d-e6b327a4adb5\") " Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.165150 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.165184 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7526x\" (UniqueName: \"kubernetes.io/projected/882fc762-16ff-41a8-917d-e6b327a4adb5-kube-api-access-7526x\") pod \"882fc762-16ff-41a8-917d-e6b327a4adb5\" (UID: \"882fc762-16ff-41a8-917d-e6b327a4adb5\") " Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.165233 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/882fc762-16ff-41a8-917d-e6b327a4adb5-secret-volume\") pod \"882fc762-16ff-41a8-917d-e6b327a4adb5\" (UID: \"882fc762-16ff-41a8-917d-e6b327a4adb5\") " Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.165365 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdfd882e-f012-452f-8709-32ddb2ddb019-catalog-content\") pod \"redhat-operators-nqkpz\" (UID: \"fdfd882e-f012-452f-8709-32ddb2ddb019\") " pod="openshift-marketplace/redhat-operators-nqkpz" Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.165419 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdj9b\" (UniqueName: \"kubernetes.io/projected/fdfd882e-f012-452f-8709-32ddb2ddb019-kube-api-access-tdj9b\") pod \"redhat-operators-nqkpz\" (UID: \"fdfd882e-f012-452f-8709-32ddb2ddb019\") " pod="openshift-marketplace/redhat-operators-nqkpz" Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.169020 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdfd882e-f012-452f-8709-32ddb2ddb019-utilities\") pod \"redhat-operators-nqkpz\" (UID: \"fdfd882e-f012-452f-8709-32ddb2ddb019\") " pod="openshift-marketplace/redhat-operators-nqkpz" Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.169419 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/fdfd882e-f012-452f-8709-32ddb2ddb019-utilities\") pod \"redhat-operators-nqkpz\" (UID: \"fdfd882e-f012-452f-8709-32ddb2ddb019\") " pod="openshift-marketplace/redhat-operators-nqkpz" Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.170471 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/882fc762-16ff-41a8-917d-e6b327a4adb5-config-volume" (OuterVolumeSpecName: "config-volume") pod "882fc762-16ff-41a8-917d-e6b327a4adb5" (UID: "882fc762-16ff-41a8-917d-e6b327a4adb5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:46:48 crc kubenswrapper[4681]: E1123 06:46:48.170597 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:48.670575304 +0000 UTC m=+145.740084542 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.170751 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdfd882e-f012-452f-8709-32ddb2ddb019-catalog-content\") pod \"redhat-operators-nqkpz\" (UID: \"fdfd882e-f012-452f-8709-32ddb2ddb019\") " pod="openshift-marketplace/redhat-operators-nqkpz" Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.189275 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/882fc762-16ff-41a8-917d-e6b327a4adb5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "882fc762-16ff-41a8-917d-e6b327a4adb5" (UID: "882fc762-16ff-41a8-917d-e6b327a4adb5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.193288 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/882fc762-16ff-41a8-917d-e6b327a4adb5-kube-api-access-7526x" (OuterVolumeSpecName: "kube-api-access-7526x") pod "882fc762-16ff-41a8-917d-e6b327a4adb5" (UID: "882fc762-16ff-41a8-917d-e6b327a4adb5"). InnerVolumeSpecName "kube-api-access-7526x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.195638 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fmqjr"] Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.213491 4681 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-z5fk5 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.213525 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z5fk5" podUID="e151a32d-c873-40de-8d35-0fa38739718e" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.25:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.215128 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdj9b\" (UniqueName: \"kubernetes.io/projected/fdfd882e-f012-452f-8709-32ddb2ddb019-kube-api-access-tdj9b\") pod \"redhat-operators-nqkpz\" (UID: \"fdfd882e-f012-452f-8709-32ddb2ddb019\") " pod="openshift-marketplace/redhat-operators-nqkpz" Nov 23 06:46:48 crc kubenswrapper[4681]: W1123 06:46:48.234539 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd106e4dc_f7ce_4270_9229_573ec5586711.slice/crio-fe2b9bfce3a14abd90525ebf325704a99bcc0161ec9b23b98819863b0bd93dba WatchSource:0}: Error finding container fe2b9bfce3a14abd90525ebf325704a99bcc0161ec9b23b98819863b0bd93dba: Status 404 returned error can't find the container with id fe2b9bfce3a14abd90525ebf325704a99bcc0161ec9b23b98819863b0bd93dba Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.245085 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.273493 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.273617 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7526x\" (UniqueName: \"kubernetes.io/projected/882fc762-16ff-41a8-917d-e6b327a4adb5-kube-api-access-7526x\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.273635 4681 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/882fc762-16ff-41a8-917d-e6b327a4adb5-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.273644 4681 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/882fc762-16ff-41a8-917d-e6b327a4adb5-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:48 crc kubenswrapper[4681]: E1123 06:46:48.273872 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:48.773857721 +0000 UTC m=+145.843366958 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.274503 4681 generic.go:334] "Generic (PLEG): container finished" podID="d43c43f7-de50-40d4-8910-b502d1def095" containerID="bb00de579f3abfda3e67c0eb12f81117dc1aac204ac568a514c7c1e3176ff8c7" exitCode=0 Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.274563 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m56bk" event={"ID":"d43c43f7-de50-40d4-8910-b502d1def095","Type":"ContainerDied","Data":"bb00de579f3abfda3e67c0eb12f81117dc1aac204ac568a514c7c1e3176ff8c7"} Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.276678 4681 generic.go:334] "Generic (PLEG): container finished" podID="bcb481cb-7b55-4540-9e64-44a893c3d3f7" containerID="83bad7d0938857c5d44af3ec208da5d0d1f7351b296af17410eced27c3de10f0" exitCode=0 Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.276723 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jszjx" event={"ID":"bcb481cb-7b55-4540-9e64-44a893c3d3f7","Type":"ContainerDied","Data":"83bad7d0938857c5d44af3ec208da5d0d1f7351b296af17410eced27c3de10f0"} Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.276739 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jszjx" 
event={"ID":"bcb481cb-7b55-4540-9e64-44a893c3d3f7","Type":"ContainerStarted","Data":"fc76537d0a3407048faae938465f5c9e1be3ba78acfe754004026f598a8f715e"} Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.301629 4681 generic.go:334] "Generic (PLEG): container finished" podID="b61682f3-e3c0-4fda-9c80-52f67f9ee9c9" containerID="f3986b9c081b0f21c8aba4b2abdc7abf5c4d45687b0be526ebf77304cb429cb9" exitCode=0 Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.301936 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-48jrc" event={"ID":"b61682f3-e3c0-4fda-9c80-52f67f9ee9c9","Type":"ContainerDied","Data":"f3986b9c081b0f21c8aba4b2abdc7abf5c4d45687b0be526ebf77304cb429cb9"} Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.301976 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-48jrc" event={"ID":"b61682f3-e3c0-4fda-9c80-52f67f9ee9c9","Type":"ContainerStarted","Data":"0e588b0fb0ea80685d1d5adff29acbef301ffe87a3bd1c50a0a3973dcbdcb875"} Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.308823 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.320379 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-z76mp" event={"ID":"00c8c8b9-3dab-4fde-8fa7-290140cfd81f","Type":"ContainerStarted","Data":"d151020eaab3855fd3595b9d463aa768c0fdc52fb795e8744db049b9b0ff6235"} Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.320430 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-z76mp" event={"ID":"00c8c8b9-3dab-4fde-8fa7-290140cfd81f","Type":"ContainerStarted","Data":"056bbbd3389672ef68170589e5169b00e891c964dd508dceb25ca7daa67df615"} Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.323662 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nqkpz" Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.374869 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:48 crc kubenswrapper[4681]: E1123 06:46:48.375804 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:48.875791928 +0000 UTC m=+145.945301165 (durationBeforeRetry 500ms). 
Nov 23 06:46:48 crc kubenswrapper[4681]: E1123 06:46:48.375804 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:48.875791928 +0000 UTC m=+145.945301165 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.399572 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-5x47l" event={"ID":"882fc762-16ff-41a8-917d-e6b327a4adb5","Type":"ContainerDied","Data":"b8fd65b704074169da9a883c74f03bdd7bff197321f0b0ca9c2dbc60aef26ea9"}
Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.399600 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8fd65b704074169da9a883c74f03bdd7bff197321f0b0ca9c2dbc60aef26ea9"
Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.399659 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-5x47l"
Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.427190 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j2m4k"]
Nov 23 06:46:48 crc kubenswrapper[4681]: E1123 06:46:48.427370 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="882fc762-16ff-41a8-917d-e6b327a4adb5" containerName="collect-profiles"
Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.427385 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="882fc762-16ff-41a8-917d-e6b327a4adb5" containerName="collect-profiles"
Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.435018 4681 generic.go:334] "Generic (PLEG): container finished" podID="bd9b9442-5d36-4b7c-bc39-d403156b0c66" containerID="4327667e4ddb2f2f38b8dc29550195981fc5decdc672540a57c1b752b035daa5" exitCode=0
Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.436514 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="882fc762-16ff-41a8-917d-e6b327a4adb5" containerName="collect-profiles"
Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.437111 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xzkhc" event={"ID":"bd9b9442-5d36-4b7c-bc39-d403156b0c66","Type":"ContainerDied","Data":"4327667e4ddb2f2f38b8dc29550195981fc5decdc672540a57c1b752b035daa5"}
Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.437136 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xzkhc" event={"ID":"bd9b9442-5d36-4b7c-bc39-d403156b0c66","Type":"ContainerStarted","Data":"a69ca95cbea9e4131c524bc1c29e2399137d76b60c43078fa4d187d127a55ca7"}
Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.437205 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j2m4k"
Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.477967 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5"
Nov 23 06:46:48 crc kubenswrapper[4681]: E1123 06:46:48.478533 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:48.978522351 +0000 UTC m=+146.048031588 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.506392 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j2m4k"]
Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.506545 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z5fk5"
Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.579976 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.580387 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8057fec0-6964-4c2a-9c64-79373dd7eb06-catalog-content\") pod \"redhat-operators-j2m4k\" (UID: \"8057fec0-6964-4c2a-9c64-79373dd7eb06\") " pod="openshift-marketplace/redhat-operators-j2m4k"
Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.580611 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdvx6\" (UniqueName: \"kubernetes.io/projected/8057fec0-6964-4c2a-9c64-79373dd7eb06-kube-api-access-jdvx6\") pod \"redhat-operators-j2m4k\" (UID: \"8057fec0-6964-4c2a-9c64-79373dd7eb06\") " pod="openshift-marketplace/redhat-operators-j2m4k"
Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.580651 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8057fec0-6964-4c2a-9c64-79373dd7eb06-utilities\") pod \"redhat-operators-j2m4k\" (UID: \"8057fec0-6964-4c2a-9c64-79373dd7eb06\") " pod="openshift-marketplace/redhat-operators-j2m4k"
Nov 23 06:46:48 crc kubenswrapper[4681]: E1123 06:46:48.581227 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:49.08121377 +0000 UTC m=+146.150723007 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.589967 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bbdhw"]
Nov 23 06:46:48 crc kubenswrapper[4681]: W1123 06:46:48.613210 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod233c1d06_f0dd_46c4_8b90_e213255bf126.slice/crio-e04a41d8262b73c3fe1f989af6def92f58dbc93d66044270a213da9c5532f7c8 WatchSource:0}: Error finding container e04a41d8262b73c3fe1f989af6def92f58dbc93d66044270a213da9c5532f7c8: Status 404 returned error can't find the container with id e04a41d8262b73c3fe1f989af6def92f58dbc93d66044270a213da9c5532f7c8
Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.691922 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdvx6\" (UniqueName: \"kubernetes.io/projected/8057fec0-6964-4c2a-9c64-79373dd7eb06-kube-api-access-jdvx6\") pod \"redhat-operators-j2m4k\" (UID: \"8057fec0-6964-4c2a-9c64-79373dd7eb06\") " pod="openshift-marketplace/redhat-operators-j2m4k"
Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.692193 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8057fec0-6964-4c2a-9c64-79373dd7eb06-utilities\") pod \"redhat-operators-j2m4k\" (UID: \"8057fec0-6964-4c2a-9c64-79373dd7eb06\") " pod="openshift-marketplace/redhat-operators-j2m4k"
Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.692329 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8057fec0-6964-4c2a-9c64-79373dd7eb06-catalog-content\") pod \"redhat-operators-j2m4k\" (UID: \"8057fec0-6964-4c2a-9c64-79373dd7eb06\") " pod="openshift-marketplace/redhat-operators-j2m4k"
Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.692365 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5"
Nov 23 06:46:48 crc kubenswrapper[4681]: E1123 06:46:48.692707 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:49.192695982 +0000 UTC m=+146.262205219 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.692715 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8057fec0-6964-4c2a-9c64-79373dd7eb06-utilities\") pod \"redhat-operators-j2m4k\" (UID: \"8057fec0-6964-4c2a-9c64-79373dd7eb06\") " pod="openshift-marketplace/redhat-operators-j2m4k"
Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.692926 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8057fec0-6964-4c2a-9c64-79373dd7eb06-catalog-content\") pod \"redhat-operators-j2m4k\" (UID: \"8057fec0-6964-4c2a-9c64-79373dd7eb06\") " pod="openshift-marketplace/redhat-operators-j2m4k"
Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.704778 4681 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.723287 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdvx6\" (UniqueName: \"kubernetes.io/projected/8057fec0-6964-4c2a-9c64-79373dd7eb06-kube-api-access-jdvx6\") pod \"redhat-operators-j2m4k\" (UID: \"8057fec0-6964-4c2a-9c64-79373dd7eb06\") " pod="openshift-marketplace/redhat-operators-j2m4k"
Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.757072 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j2m4k"
Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.794937 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
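
Each failed CSI operation in this window is parked by nestedpendingoperations with "No retries permitted until ..." and a durationBeforeRetry of 500ms, after which the volume reconciler re-queues it on its next pass; that is why the same PVC fails roughly twice a second until the driver registers. A sketch of that retry-until-precondition-met pattern, with illustrative timings (the simulated registration delay stands in for the real one):

```go
// Sketch of the retry pattern visible above: an operation fails while a
// precondition (CSI driver registration) is unmet, is blocked for a fixed
// durationBeforeRetry, and is retried on the next reconciler pass.
package main

import (
	"errors"
	"fmt"
	"time"
)

func main() {
	registered := make(chan struct{})
	// Simulate the driver registering ~1.5s in, roughly as it does at 06:46:49.397612.
	go func() { time.Sleep(1500 * time.Millisecond); close(registered) }()

	const durationBeforeRetry = 500 * time.Millisecond
	for attempt := 1; ; attempt++ {
		select {
		case <-registered:
			fmt.Printf("attempt %d: UnmountVolume.TearDown succeeded\n", attempt)
			return
		default:
			err := errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
			fmt.Printf("attempt %d: %v; no retries permitted for %v\n", attempt, err, durationBeforeRetry)
			time.Sleep(durationBeforeRetry)
		}
	}
}
```
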
Nov 23 06:46:48 crc kubenswrapper[4681]: E1123 06:46:48.795174 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:49.295150302 +0000 UTC m=+146.364659539 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.795430 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5"
Nov 23 06:46:48 crc kubenswrapper[4681]: E1123 06:46:48.795813 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:49.295801864 +0000 UTC m=+146.365311101 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.796219 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.829782 4681 patch_prober.go:28] interesting pod/router-default-5444994796-b7ms9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 23 06:46:48 crc kubenswrapper[4681]: [-]has-synced failed: reason withheld
Nov 23 06:46:48 crc kubenswrapper[4681]: [+]process-running ok
Nov 23 06:46:48 crc kubenswrapper[4681]: healthz check failed
Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.829828 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7ms9" podUID="9c6f4ba4-aae8-4308-be38-b74b07116955" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.899053 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
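
The router-default startup probe output above is the standard aggregated healthz format: one "[+]name ok" or "[-]name failed: reason withheld" line per registered check, a trailing "healthz check failed", and an HTTP 500 when any check fails. A sketch of a handler producing output of that shape; the check names are taken from the log, the handler itself is illustrative:

```go
// Sketch of an aggregated healthz endpoint shaped like the router probe
// failure above ([-]backend-http failed, [+]process-running ok, ...).
package main

import (
	"fmt"
	"log"
	"net/http"
)

type check struct {
	name string
	run  func() error
}

func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		failed := false
		body := ""
		for _, c := range checks {
			if err := c.run(); err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError) // the prober then logs "statuscode: 500"
			fmt.Fprint(w, body+"healthz check failed\n")
			return
		}
		fmt.Fprint(w, body+"ok\n")
	}
}

func main() {
	checks := []check{
		{"backend-http", func() error { return fmt.Errorf("not ready") }},
		{"has-synced", func() error { return fmt.Errorf("not ready") }},
		{"process-running", func() error { return nil }},
	}
	http.HandleFunc("/healthz", healthz(checks))
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
}
```
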
Nov 23 06:46:48 crc kubenswrapper[4681]: E1123 06:46:48.899295 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:49.399273759 +0000 UTC m=+146.468782996 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 06:46:48 crc kubenswrapper[4681]: I1123 06:46:48.899647 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5"
Nov 23 06:46:48 crc kubenswrapper[4681]: E1123 06:46:48.900043 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:49.400029188 +0000 UTC m=+146.469538425 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 06:46:49 crc kubenswrapper[4681]: I1123 06:46:49.000265 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 06:46:49 crc kubenswrapper[4681]: E1123 06:46:49.000972 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:49.500958454 +0000 UTC m=+146.570467691 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 06:46:49 crc kubenswrapper[4681]: I1123 06:46:49.065676 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nqkpz"]
Nov 23 06:46:49 crc kubenswrapper[4681]: I1123 06:46:49.103169 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5"
Nov 23 06:46:49 crc kubenswrapper[4681]: E1123 06:46:49.103599 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:49.603587986 +0000 UTC m=+146.673097223 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 06:46:49 crc kubenswrapper[4681]: I1123 06:46:49.106780 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j2m4k"]
Nov 23 06:46:49 crc kubenswrapper[4681]: W1123 06:46:49.125281 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8057fec0_6964_4c2a_9c64_79373dd7eb06.slice/crio-9d50829e55e6485e71edb59c693bd5fae63d32181e56f8e72e552fd32c912530 WatchSource:0}: Error finding container 9d50829e55e6485e71edb59c693bd5fae63d32181e56f8e72e552fd32c912530: Status 404 returned error can't find the container with id 9d50829e55e6485e71edb59c693bd5fae63d32181e56f8e72e552fd32c912530
Nov 23 06:46:49 crc kubenswrapper[4681]: I1123 06:46:49.204404 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 06:46:49 crc kubenswrapper[4681]: E1123 06:46:49.204794 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:49.70478047 +0000 UTC m=+146.774289707 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 06:46:49 crc kubenswrapper[4681]: I1123 06:46:49.305818 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5"
Nov 23 06:46:49 crc kubenswrapper[4681]: E1123 06:46:49.306158 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:46:49.806143547 +0000 UTC m=+146.875652785 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c2pf5" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 06:46:49 crc kubenswrapper[4681]: I1123 06:46:49.386572 4681 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-23T06:46:48.704797465Z","Handler":null,"Name":""}
Nov 23 06:46:49 crc kubenswrapper[4681]: I1123 06:46:49.397573 4681 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Nov 23 06:46:49 crc kubenswrapper[4681]: I1123 06:46:49.397612 4681 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
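
The sequence above, plugin_watcher "Adding socket path ... to desired state cache" (06:46:48.704778), then "OperationExecutor.RegisterPlugin started", then csi_plugin "Trying to validate a new CSI Driver" and "Register new plugin", is the kubelet's plugin-registration handshake: a watcher notices the driver's registration socket under /var/lib/kubelet/plugins_registry, the reconciler validates it over the socket, and only then does the driver enter the registry that the failing mounts above were querying. A simplified sketch of that watch-then-register shape, using polling in place of the kubelet's filesystem watcher and skipping the real over-the-socket validation:

```go
// Sketch of the registration flow logged above: scan a plugins_registry
// directory for *.sock files and register each newly seen plugin.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"time"
)

func main() {
	dir := "/var/lib/kubelet/plugins_registry" // path as in the log
	seen := map[string]bool{}
	for i := 0; i < 5; i++ { // bounded poll loop for the sketch
		entries, err := os.ReadDir(dir)
		if err != nil {
			fmt.Println("watch:", err)
		}
		for _, e := range entries {
			name := e.Name()
			if !strings.HasSuffix(name, ".sock") || seen[name] {
				continue
			}
			seen[name] = true
			sock := filepath.Join(dir, name)
			fmt.Printf("Adding socket path or updating timestamp to desired state cache path=%q\n", sock)
			// The real kubelet dials the socket, validates the advertised
			// name/endpoint/versions, and only then registers the driver.
			fmt.Printf("Register new plugin discovered via %s\n", sock)
		}
		time.Sleep(200 * time.Millisecond)
	}
}
```

Once this handshake completes, the very next reconciler pass succeeds: the TearDown at 06:46:49.415367 below is the first CSI operation on this volume that finds the driver.
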
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 23 06:46:49 crc kubenswrapper[4681]: I1123 06:46:49.463329 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-z76mp" event={"ID":"00c8c8b9-3dab-4fde-8fa7-290140cfd81f","Type":"ContainerStarted","Data":"a9f774ec4b2a01f5feaa35740f765b56e96851a81aa01d5526ede1df7b902587"} Nov 23 06:46:49 crc kubenswrapper[4681]: I1123 06:46:49.474265 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0ea4db8e-ece7-4de1-aff2-1023fc6763df","Type":"ContainerStarted","Data":"0c731acec1e4f2ad095882f965eaae20ebe8ebea680844327bcf3306b3dd5fb0"} Nov 23 06:46:49 crc kubenswrapper[4681]: I1123 06:46:49.474322 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0ea4db8e-ece7-4de1-aff2-1023fc6763df","Type":"ContainerStarted","Data":"bdb77f48dcd5294f6d21fca940a2dcd1041062f9deefd367c9128aec1c15f956"} Nov 23 06:46:49 crc kubenswrapper[4681]: I1123 06:46:49.484411 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-z76mp" podStartSLOduration=11.484395861 podStartE2EDuration="11.484395861s" podCreationTimestamp="2025-11-23 06:46:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:49.481890504 +0000 UTC m=+146.551399740" watchObservedRunningTime="2025-11-23 06:46:49.484395861 +0000 UTC m=+146.553905098" Nov 23 06:46:49 crc kubenswrapper[4681]: I1123 06:46:49.486910 4681 generic.go:334] "Generic (PLEG): container finished" podID="d106e4dc-f7ce-4270-9229-573ec5586711" containerID="1cb79ac6334ea823ea9514e5ece6bc0c68a3af5e3559c264c467a4abe21cf6d2" exitCode=0 Nov 23 06:46:49 crc kubenswrapper[4681]: I1123 06:46:49.486991 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fmqjr" event={"ID":"d106e4dc-f7ce-4270-9229-573ec5586711","Type":"ContainerDied","Data":"1cb79ac6334ea823ea9514e5ece6bc0c68a3af5e3559c264c467a4abe21cf6d2"} Nov 23 06:46:49 crc kubenswrapper[4681]: I1123 06:46:49.487021 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fmqjr" event={"ID":"d106e4dc-f7ce-4270-9229-573ec5586711","Type":"ContainerStarted","Data":"fe2b9bfce3a14abd90525ebf325704a99bcc0161ec9b23b98819863b0bd93dba"} Nov 23 06:46:49 crc kubenswrapper[4681]: I1123 06:46:49.493957 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.493944779 podStartE2EDuration="2.493944779s" podCreationTimestamp="2025-11-23 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:49.492548127 +0000 UTC m=+146.562057364" watchObservedRunningTime="2025-11-23 06:46:49.493944779 +0000 UTC m=+146.563454015" Nov 23 06:46:49 crc kubenswrapper[4681]: I1123 06:46:49.494376 4681 generic.go:334] "Generic (PLEG): container finished" podID="fdfd882e-f012-452f-8709-32ddb2ddb019" containerID="ca0873b032b1f4f0f4de85d4aceb23ca9c44d54ebd34a4e3a0a101652fcdea45" exitCode=0 Nov 23 06:46:49 crc kubenswrapper[4681]: I1123 06:46:49.494577 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqkpz" 
event={"ID":"fdfd882e-f012-452f-8709-32ddb2ddb019","Type":"ContainerDied","Data":"ca0873b032b1f4f0f4de85d4aceb23ca9c44d54ebd34a4e3a0a101652fcdea45"} Nov 23 06:46:49 crc kubenswrapper[4681]: I1123 06:46:49.494629 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqkpz" event={"ID":"fdfd882e-f012-452f-8709-32ddb2ddb019","Type":"ContainerStarted","Data":"b84442fa30f0f19a732194ceb049ec68e8556b1625aa78533b6348d9f04b201e"} Nov 23 06:46:49 crc kubenswrapper[4681]: I1123 06:46:49.498229 4681 generic.go:334] "Generic (PLEG): container finished" podID="8057fec0-6964-4c2a-9c64-79373dd7eb06" containerID="152b76de0c393c0b87e06563e90289ca3782cf664ea218e13448e68bfe8d8433" exitCode=0 Nov 23 06:46:49 crc kubenswrapper[4681]: I1123 06:46:49.498284 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2m4k" event={"ID":"8057fec0-6964-4c2a-9c64-79373dd7eb06","Type":"ContainerDied","Data":"152b76de0c393c0b87e06563e90289ca3782cf664ea218e13448e68bfe8d8433"} Nov 23 06:46:49 crc kubenswrapper[4681]: I1123 06:46:49.498342 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2m4k" event={"ID":"8057fec0-6964-4c2a-9c64-79373dd7eb06","Type":"ContainerStarted","Data":"9d50829e55e6485e71edb59c693bd5fae63d32181e56f8e72e552fd32c912530"} Nov 23 06:46:49 crc kubenswrapper[4681]: I1123 06:46:49.501450 4681 generic.go:334] "Generic (PLEG): container finished" podID="233c1d06-f0dd-46c4-8b90-e213255bf126" containerID="cde279f3c43516f56b800e56bdb390f8083aa9beed1a78f97eb84de3f3809ff2" exitCode=0 Nov 23 06:46:49 crc kubenswrapper[4681]: I1123 06:46:49.501822 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bbdhw" event={"ID":"233c1d06-f0dd-46c4-8b90-e213255bf126","Type":"ContainerDied","Data":"cde279f3c43516f56b800e56bdb390f8083aa9beed1a78f97eb84de3f3809ff2"} Nov 23 06:46:49 crc kubenswrapper[4681]: I1123 06:46:49.501845 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bbdhw" event={"ID":"233c1d06-f0dd-46c4-8b90-e213255bf126","Type":"ContainerStarted","Data":"e04a41d8262b73c3fe1f989af6def92f58dbc93d66044270a213da9c5532f7c8"} Nov 23 06:46:49 crc kubenswrapper[4681]: I1123 06:46:49.514991 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:49 crc kubenswrapper[4681]: I1123 06:46:49.536959 4681 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 23 06:46:49 crc kubenswrapper[4681]: I1123 06:46:49.537132 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:49 crc kubenswrapper[4681]: I1123 06:46:49.573024 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c2pf5\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:49 crc kubenswrapper[4681]: I1123 06:46:49.679035 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:49 crc kubenswrapper[4681]: I1123 06:46:49.825906 4681 patch_prober.go:28] interesting pod/router-default-5444994796-b7ms9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:46:49 crc kubenswrapper[4681]: [-]has-synced failed: reason withheld Nov 23 06:46:49 crc kubenswrapper[4681]: [+]process-running ok Nov 23 06:46:49 crc kubenswrapper[4681]: healthz check failed Nov 23 06:46:49 crc kubenswrapper[4681]: I1123 06:46:49.825968 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7ms9" podUID="9c6f4ba4-aae8-4308-be38-b74b07116955" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:46:50 crc kubenswrapper[4681]: I1123 06:46:50.021754 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-c2pf5"] Nov 23 06:46:50 crc kubenswrapper[4681]: W1123 06:46:50.053181 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77f5ceda_2966_443e_a939_dd7408e66bdc.slice/crio-1815ab2337eac67470f4fe2fa16bcb0ac4d2178b9c3dfd0acd5ee4f2a9f6d208 WatchSource:0}: Error finding container 1815ab2337eac67470f4fe2fa16bcb0ac4d2178b9c3dfd0acd5ee4f2a9f6d208: Status 404 returned error can't find the container with id 1815ab2337eac67470f4fe2fa16bcb0ac4d2178b9c3dfd0acd5ee4f2a9f6d208 Nov 23 06:46:50 crc kubenswrapper[4681]: I1123 06:46:50.127678 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:50 crc kubenswrapper[4681]: I1123 06:46:50.127715 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:50 crc kubenswrapper[4681]: I1123 06:46:50.127802 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:50 crc kubenswrapper[4681]: I1123 06:46:50.127839 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:50 crc kubenswrapper[4681]: I1123 06:46:50.128868 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:50 crc kubenswrapper[4681]: I1123 06:46:50.134598 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:50 crc kubenswrapper[4681]: I1123 06:46:50.134859 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:50 crc kubenswrapper[4681]: I1123 06:46:50.135090 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:50 crc kubenswrapper[4681]: I1123 06:46:50.169564 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:50 crc kubenswrapper[4681]: I1123 06:46:50.170381 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:50 crc kubenswrapper[4681]: I1123 06:46:50.170556 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:50 crc kubenswrapper[4681]: I1123 06:46:50.528010 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" event={"ID":"77f5ceda-2966-443e-a939-dd7408e66bdc","Type":"ContainerStarted","Data":"3ee984309fa8ce33e23cdf6fc6b644a32685973fac9472dd105a0d6e45df0b48"} Nov 23 06:46:50 crc kubenswrapper[4681]: I1123 06:46:50.528322 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" event={"ID":"77f5ceda-2966-443e-a939-dd7408e66bdc","Type":"ContainerStarted","Data":"1815ab2337eac67470f4fe2fa16bcb0ac4d2178b9c3dfd0acd5ee4f2a9f6d208"} Nov 23 06:46:50 crc kubenswrapper[4681]: I1123 06:46:50.528506 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:46:50 crc kubenswrapper[4681]: I1123 06:46:50.536292 4681 generic.go:334] "Generic (PLEG): container finished" podID="0ea4db8e-ece7-4de1-aff2-1023fc6763df" containerID="0c731acec1e4f2ad095882f965eaae20ebe8ebea680844327bcf3306b3dd5fb0" exitCode=0 Nov 23 06:46:50 crc kubenswrapper[4681]: I1123 06:46:50.536528 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0ea4db8e-ece7-4de1-aff2-1023fc6763df","Type":"ContainerDied","Data":"0c731acec1e4f2ad095882f965eaae20ebe8ebea680844327bcf3306b3dd5fb0"} Nov 23 06:46:50 crc kubenswrapper[4681]: I1123 06:46:50.570804 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" podStartSLOduration=122.570784585 podStartE2EDuration="2m2.570784585s" podCreationTimestamp="2025-11-23 06:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:46:50.547039644 +0000 UTC m=+147.616548901" watchObservedRunningTime="2025-11-23 06:46:50.570784585 +0000 UTC m=+147.640293822" Nov 23 06:46:50 crc kubenswrapper[4681]: I1123 06:46:50.612212 4681 patch_prober.go:28] interesting pod/downloads-7954f5f757-qkccb container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 23 06:46:50 crc kubenswrapper[4681]: I1123 06:46:50.612293 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-qkccb" podUID="e5135d02-57f8-48f3-96d3-af0fb70e8ac3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 23 06:46:50 crc kubenswrapper[4681]: I1123 06:46:50.614610 4681 patch_prober.go:28] interesting pod/downloads-7954f5f757-qkccb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 23 06:46:50 crc kubenswrapper[4681]: I1123 06:46:50.614671 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qkccb" podUID="e5135d02-57f8-48f3-96d3-af0fb70e8ac3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 23 06:46:50 crc 
kubenswrapper[4681]: I1123 06:46:50.823563 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-b7ms9" Nov 23 06:46:50 crc kubenswrapper[4681]: I1123 06:46:50.827211 4681 patch_prober.go:28] interesting pod/router-default-5444994796-b7ms9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:46:50 crc kubenswrapper[4681]: [-]has-synced failed: reason withheld Nov 23 06:46:50 crc kubenswrapper[4681]: [+]process-running ok Nov 23 06:46:50 crc kubenswrapper[4681]: healthz check failed Nov 23 06:46:50 crc kubenswrapper[4681]: I1123 06:46:50.827389 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7ms9" podUID="9c6f4ba4-aae8-4308-be38-b74b07116955" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:46:50 crc kubenswrapper[4681]: I1123 06:46:50.828287 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rxxxv" Nov 23 06:46:50 crc kubenswrapper[4681]: I1123 06:46:50.828333 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rxxxv" Nov 23 06:46:50 crc kubenswrapper[4681]: I1123 06:46:50.834129 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rxxxv" Nov 23 06:46:50 crc kubenswrapper[4681]: I1123 06:46:50.861429 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-59rqt" Nov 23 06:46:50 crc kubenswrapper[4681]: I1123 06:46:50.861879 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-59rqt" Nov 23 06:46:50 crc kubenswrapper[4681]: I1123 06:46:50.870642 4681 patch_prober.go:28] interesting pod/console-f9d7485db-59rqt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Nov 23 06:46:50 crc kubenswrapper[4681]: I1123 06:46:50.870681 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-59rqt" podUID="c0e3f5d0-037c-48b9-888f-375c10e5f269" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Nov 23 06:46:51 crc kubenswrapper[4681]: W1123 06:46:51.058301 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-a14e47d1e3429819e7c04b170a84dcb9358b7ef5f45fd62b222f4f471a54af7f WatchSource:0}: Error finding container a14e47d1e3429819e7c04b170a84dcb9358b7ef5f45fd62b222f4f471a54af7f: Status 404 returned error can't find the container with id a14e47d1e3429819e7c04b170a84dcb9358b7ef5f45fd62b222f4f471a54af7f Nov 23 06:46:51 crc kubenswrapper[4681]: I1123 06:46:51.089850 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" Nov 23 06:46:51 crc kubenswrapper[4681]: I1123 06:46:51.089916 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-d7f7c" 
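
This window is dominated by recurring "Probe failed" entries (router-default startup, packageserver readiness, downloads liveness/readiness, console startup), most of which resolve as the pods finish starting. When triaging a journal excerpt like this, it can help to aggregate the failures per probe type and pod; a hypothetical helper for that, keyed on the exact fields present in the lines above:

```go
// Hypothetical triage helper (not part of the kubelet): count
// `"Probe failed"` entries per probeType/pod in a journal excerpt
// read from stdin, e.g.:  journalctl -u kubelet | go run probestats.go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches the fields as they appear in the prober.go:107 lines above.
	re := regexp.MustCompile(`"Probe failed" probeType="([^"]+)" pod="([^"]+)"`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]+" "+m[2]]++
		}
	}
	for k, n := range counts {
		fmt.Printf("%6d  %s\n", n, k)
	}
}
```
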
Nov 23 06:46:51 crc kubenswrapper[4681]: I1123 06:46:51.115764 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-d7f7c"
Nov 23 06:46:51 crc kubenswrapper[4681]: I1123 06:46:51.116949 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-g5zj2"
Nov 23 06:46:51 crc kubenswrapper[4681]: I1123 06:46:51.287520 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Nov 23 06:46:51 crc kubenswrapper[4681]: I1123 06:46:51.560846 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"744a25646e65aa015c4a2d9dad3aa826047a03fac81acdfd8376dad20c2aa560"}
Nov 23 06:46:51 crc kubenswrapper[4681]: I1123 06:46:51.560913 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b2494ad31671c7c458faf5ccdd1ad5681e61ef6477604bea5e0649ae0b8566a9"}
Nov 23 06:46:51 crc kubenswrapper[4681]: I1123 06:46:51.567190 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"971e61358789fcc69c1c59d240d7dd1e9ef8ae31079e654d709f98a18b3e3e9a"}
Nov 23 06:46:51 crc kubenswrapper[4681]: I1123 06:46:51.567240 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"a14e47d1e3429819e7c04b170a84dcb9358b7ef5f45fd62b222f4f471a54af7f"}
Nov 23 06:46:51 crc kubenswrapper[4681]: I1123 06:46:51.567821 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:46:51 crc kubenswrapper[4681]: I1123 06:46:51.596481 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"b49694f59ec8ebc9eb5a78cada5109b2ff9e55db012549f2fd91cd42d9da7601"}
Nov 23 06:46:51 crc kubenswrapper[4681]: I1123 06:46:51.596537 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"9c8a22ce0bf9e6befd4a4a15d99bb67b7bf90bb15efc52952c3b1a363b7c15b8"}
Nov 23 06:46:51 crc kubenswrapper[4681]: I1123 06:46:51.602930 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rxxxv"
Nov 23 06:46:51 crc kubenswrapper[4681]: I1123 06:46:51.603619 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-d7f7c"
Nov 23 06:46:51 crc kubenswrapper[4681]: I1123 06:46:51.835064 4681 patch_prober.go:28] interesting pod/router-default-5444994796-b7ms9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 23 06:46:51 crc kubenswrapper[4681]: [-]has-synced failed: reason withheld
Nov 23 06:46:51 crc kubenswrapper[4681]: [+]process-running ok
Nov 23 06:46:51 crc kubenswrapper[4681]: healthz check failed
Nov 23 06:46:51 crc kubenswrapper[4681]: I1123 06:46:51.835140 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7ms9" podUID="9c6f4ba4-aae8-4308-be38-b74b07116955" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 23 06:46:52 crc kubenswrapper[4681]: I1123 06:46:52.142375 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 23 06:46:52 crc kubenswrapper[4681]: I1123 06:46:52.169896 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0ea4db8e-ece7-4de1-aff2-1023fc6763df-kubelet-dir\") pod \"0ea4db8e-ece7-4de1-aff2-1023fc6763df\" (UID: \"0ea4db8e-ece7-4de1-aff2-1023fc6763df\") "
Nov 23 06:46:52 crc kubenswrapper[4681]: I1123 06:46:52.170084 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ea4db8e-ece7-4de1-aff2-1023fc6763df-kube-api-access\") pod \"0ea4db8e-ece7-4de1-aff2-1023fc6763df\" (UID: \"0ea4db8e-ece7-4de1-aff2-1023fc6763df\") "
Nov 23 06:46:52 crc kubenswrapper[4681]: I1123 06:46:52.184313 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ea4db8e-ece7-4de1-aff2-1023fc6763df-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0ea4db8e-ece7-4de1-aff2-1023fc6763df" (UID: "0ea4db8e-ece7-4de1-aff2-1023fc6763df"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 23 06:46:52 crc kubenswrapper[4681]: I1123 06:46:52.189087 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ea4db8e-ece7-4de1-aff2-1023fc6763df-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0ea4db8e-ece7-4de1-aff2-1023fc6763df" (UID: "0ea4db8e-ece7-4de1-aff2-1023fc6763df"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:46:52 crc kubenswrapper[4681]: I1123 06:46:52.272608 4681 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0ea4db8e-ece7-4de1-aff2-1023fc6763df-kubelet-dir\") on node \"crc\" DevicePath \"\""
Nov 23 06:46:52 crc kubenswrapper[4681]: I1123 06:46:52.272643 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ea4db8e-ece7-4de1-aff2-1023fc6763df-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 23 06:46:52 crc kubenswrapper[4681]: I1123 06:46:52.618328 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0ea4db8e-ece7-4de1-aff2-1023fc6763df","Type":"ContainerDied","Data":"bdb77f48dcd5294f6d21fca940a2dcd1041062f9deefd367c9128aec1c15f956"}
Nov 23 06:46:52 crc kubenswrapper[4681]: I1123 06:46:52.618373 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdb77f48dcd5294f6d21fca940a2dcd1041062f9deefd367c9128aec1c15f956"
Nov 23 06:46:52 crc kubenswrapper[4681]: I1123 06:46:52.619804 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 23 06:46:52 crc kubenswrapper[4681]: I1123 06:46:52.825981 4681 patch_prober.go:28] interesting pod/router-default-5444994796-b7ms9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 23 06:46:52 crc kubenswrapper[4681]: [-]has-synced failed: reason withheld
Nov 23 06:46:52 crc kubenswrapper[4681]: [+]process-running ok
Nov 23 06:46:52 crc kubenswrapper[4681]: healthz check failed
Nov 23 06:46:52 crc kubenswrapper[4681]: I1123 06:46:52.826022 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7ms9" podUID="9c6f4ba4-aae8-4308-be38-b74b07116955" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 23 06:46:53 crc kubenswrapper[4681]: I1123 06:46:53.825546 4681 patch_prober.go:28] interesting pod/router-default-5444994796-b7ms9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 23 06:46:53 crc kubenswrapper[4681]: [-]has-synced failed: reason withheld
Nov 23 06:46:53 crc kubenswrapper[4681]: [+]process-running ok
Nov 23 06:46:53 crc kubenswrapper[4681]: healthz check failed
Nov 23 06:46:53 crc kubenswrapper[4681]: I1123 06:46:53.825623 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7ms9" podUID="9c6f4ba4-aae8-4308-be38-b74b07116955" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 23 06:46:54 crc kubenswrapper[4681]: I1123 06:46:54.829032 4681 patch_prober.go:28] interesting pod/router-default-5444994796-b7ms9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 23 06:46:54 crc kubenswrapper[4681]: [-]has-synced failed: reason withheld
Nov 23 06:46:54 crc kubenswrapper[4681]: [+]process-running ok
Nov 23 06:46:54 crc kubenswrapper[4681]: healthz check failed
Nov 23 06:46:54 crc
kubenswrapper[4681]: I1123 06:46:54.829114 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7ms9" podUID="9c6f4ba4-aae8-4308-be38-b74b07116955" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:46:55 crc kubenswrapper[4681]: I1123 06:46:55.827100 4681 patch_prober.go:28] interesting pod/router-default-5444994796-b7ms9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:46:55 crc kubenswrapper[4681]: [-]has-synced failed: reason withheld Nov 23 06:46:55 crc kubenswrapper[4681]: [+]process-running ok Nov 23 06:46:55 crc kubenswrapper[4681]: healthz check failed Nov 23 06:46:55 crc kubenswrapper[4681]: I1123 06:46:55.827797 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7ms9" podUID="9c6f4ba4-aae8-4308-be38-b74b07116955" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:46:55 crc kubenswrapper[4681]: I1123 06:46:55.881964 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 23 06:46:55 crc kubenswrapper[4681]: E1123 06:46:55.882175 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ea4db8e-ece7-4de1-aff2-1023fc6763df" containerName="pruner" Nov 23 06:46:55 crc kubenswrapper[4681]: I1123 06:46:55.882188 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ea4db8e-ece7-4de1-aff2-1023fc6763df" containerName="pruner" Nov 23 06:46:55 crc kubenswrapper[4681]: I1123 06:46:55.882285 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ea4db8e-ece7-4de1-aff2-1023fc6763df" containerName="pruner" Nov 23 06:46:55 crc kubenswrapper[4681]: I1123 06:46:55.882588 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 06:46:55 crc kubenswrapper[4681]: I1123 06:46:55.888358 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 23 06:46:55 crc kubenswrapper[4681]: I1123 06:46:55.888510 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 23 06:46:55 crc kubenswrapper[4681]: I1123 06:46:55.900807 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 23 06:46:55 crc kubenswrapper[4681]: I1123 06:46:55.962895 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce8f467b-ba3e-4db8-9b80-392e2d0ef58f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ce8f467b-ba3e-4db8-9b80-392e2d0ef58f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 06:46:55 crc kubenswrapper[4681]: I1123 06:46:55.962934 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce8f467b-ba3e-4db8-9b80-392e2d0ef58f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ce8f467b-ba3e-4db8-9b80-392e2d0ef58f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 06:46:56 crc kubenswrapper[4681]: I1123 06:46:56.064306 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce8f467b-ba3e-4db8-9b80-392e2d0ef58f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ce8f467b-ba3e-4db8-9b80-392e2d0ef58f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 06:46:56 crc kubenswrapper[4681]: I1123 06:46:56.064350 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce8f467b-ba3e-4db8-9b80-392e2d0ef58f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ce8f467b-ba3e-4db8-9b80-392e2d0ef58f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 06:46:56 crc kubenswrapper[4681]: I1123 06:46:56.064606 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce8f467b-ba3e-4db8-9b80-392e2d0ef58f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ce8f467b-ba3e-4db8-9b80-392e2d0ef58f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 06:46:56 crc kubenswrapper[4681]: I1123 06:46:56.083624 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce8f467b-ba3e-4db8-9b80-392e2d0ef58f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ce8f467b-ba3e-4db8-9b80-392e2d0ef58f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 06:46:56 crc kubenswrapper[4681]: I1123 06:46:56.225930 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 06:46:56 crc kubenswrapper[4681]: I1123 06:46:56.782821 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-qmhqk" Nov 23 06:46:56 crc kubenswrapper[4681]: I1123 06:46:56.833392 4681 patch_prober.go:28] interesting pod/router-default-5444994796-b7ms9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:46:56 crc kubenswrapper[4681]: [-]has-synced failed: reason withheld Nov 23 06:46:56 crc kubenswrapper[4681]: [+]process-running ok Nov 23 06:46:56 crc kubenswrapper[4681]: healthz check failed Nov 23 06:46:56 crc kubenswrapper[4681]: I1123 06:46:56.833437 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7ms9" podUID="9c6f4ba4-aae8-4308-be38-b74b07116955" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:46:56 crc kubenswrapper[4681]: I1123 06:46:56.884737 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 23 06:46:57 crc kubenswrapper[4681]: I1123 06:46:57.728286 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ce8f467b-ba3e-4db8-9b80-392e2d0ef58f","Type":"ContainerStarted","Data":"f366ceb8ee260a95ab9b4867e4261b5fb654601712d6ecabdb469d28a095ba69"} Nov 23 06:46:57 crc kubenswrapper[4681]: I1123 06:46:57.728858 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ce8f467b-ba3e-4db8-9b80-392e2d0ef58f","Type":"ContainerStarted","Data":"aa56e6fb261f407d12e7bfe842edb68de2cbf61682da3fe677b800bd76adf20d"} Nov 23 06:46:57 crc kubenswrapper[4681]: I1123 06:46:57.826844 4681 patch_prober.go:28] interesting pod/router-default-5444994796-b7ms9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:46:57 crc kubenswrapper[4681]: [-]has-synced failed: reason withheld Nov 23 06:46:57 crc kubenswrapper[4681]: [+]process-running ok Nov 23 06:46:57 crc kubenswrapper[4681]: healthz check failed Nov 23 06:46:57 crc kubenswrapper[4681]: I1123 06:46:57.826877 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7ms9" podUID="9c6f4ba4-aae8-4308-be38-b74b07116955" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:46:58 crc kubenswrapper[4681]: I1123 06:46:58.751678 4681 generic.go:334] "Generic (PLEG): container finished" podID="ce8f467b-ba3e-4db8-9b80-392e2d0ef58f" containerID="f366ceb8ee260a95ab9b4867e4261b5fb654601712d6ecabdb469d28a095ba69" exitCode=0 Nov 23 06:46:58 crc kubenswrapper[4681]: I1123 06:46:58.751746 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ce8f467b-ba3e-4db8-9b80-392e2d0ef58f","Type":"ContainerDied","Data":"f366ceb8ee260a95ab9b4867e4261b5fb654601712d6ecabdb469d28a095ba69"} Nov 23 06:46:58 crc kubenswrapper[4681]: I1123 06:46:58.826839 4681 patch_prober.go:28] interesting pod/router-default-5444994796-b7ms9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:46:58 crc kubenswrapper[4681]: [-]has-synced failed: reason withheld Nov 23 06:46:58 crc kubenswrapper[4681]: [+]process-running ok Nov 23 06:46:58 crc kubenswrapper[4681]: healthz check failed Nov 23 06:46:58 crc kubenswrapper[4681]: I1123 06:46:58.827660 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7ms9" podUID="9c6f4ba4-aae8-4308-be38-b74b07116955" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:46:59 crc kubenswrapper[4681]: I1123 06:46:59.826288 4681 patch_prober.go:28] interesting pod/router-default-5444994796-b7ms9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:46:59 crc kubenswrapper[4681]: [-]has-synced failed: reason withheld Nov 23 06:46:59 crc kubenswrapper[4681]: [+]process-running ok Nov 23 06:46:59 crc kubenswrapper[4681]: healthz check failed Nov 23 06:46:59 crc kubenswrapper[4681]: I1123 06:46:59.826604 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7ms9" podUID="9c6f4ba4-aae8-4308-be38-b74b07116955" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:47:00 crc kubenswrapper[4681]: I1123 06:47:00.616091 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-qkccb" Nov 23 06:47:00 crc kubenswrapper[4681]: I1123 06:47:00.826504 4681 patch_prober.go:28] interesting pod/router-default-5444994796-b7ms9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:47:00 crc kubenswrapper[4681]: [-]has-synced failed: reason withheld Nov 23 06:47:00 crc kubenswrapper[4681]: [+]process-running ok Nov 23 06:47:00 crc kubenswrapper[4681]: healthz check failed Nov 23 06:47:00 crc kubenswrapper[4681]: I1123 06:47:00.826562 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7ms9" podUID="9c6f4ba4-aae8-4308-be38-b74b07116955" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:47:00 crc kubenswrapper[4681]: I1123 06:47:00.861924 4681 patch_prober.go:28] interesting pod/console-f9d7485db-59rqt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Nov 23 06:47:00 crc kubenswrapper[4681]: I1123 06:47:00.861999 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-59rqt" podUID="c0e3f5d0-037c-48b9-888f-375c10e5f269" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Nov 23 06:47:01 crc kubenswrapper[4681]: I1123 06:47:01.401983 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:47:01 crc kubenswrapper[4681]: I1123 06:47:01.825435 4681 patch_prober.go:28] interesting pod/router-default-5444994796-b7ms9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:47:01 crc kubenswrapper[4681]: [-]has-synced failed: reason withheld Nov 23 06:47:01 crc kubenswrapper[4681]: [+]process-running ok Nov 23 06:47:01 crc kubenswrapper[4681]: healthz check failed Nov 23 06:47:01 crc kubenswrapper[4681]: I1123 06:47:01.825522 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7ms9" podUID="9c6f4ba4-aae8-4308-be38-b74b07116955" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:47:02 crc kubenswrapper[4681]: I1123 06:47:02.827264 4681 patch_prober.go:28] interesting pod/router-default-5444994796-b7ms9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:47:02 crc kubenswrapper[4681]: [-]has-synced failed: reason withheld Nov 23 06:47:02 crc kubenswrapper[4681]: [+]process-running ok Nov 23 06:47:02 crc kubenswrapper[4681]: healthz check failed Nov 23 06:47:02 crc kubenswrapper[4681]: I1123 06:47:02.827533 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7ms9" podUID="9c6f4ba4-aae8-4308-be38-b74b07116955" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:47:03 crc kubenswrapper[4681]: I1123 06:47:03.468987 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 06:47:03 crc kubenswrapper[4681]: I1123 06:47:03.610603 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce8f467b-ba3e-4db8-9b80-392e2d0ef58f-kubelet-dir\") pod \"ce8f467b-ba3e-4db8-9b80-392e2d0ef58f\" (UID: \"ce8f467b-ba3e-4db8-9b80-392e2d0ef58f\") " Nov 23 06:47:03 crc kubenswrapper[4681]: I1123 06:47:03.610683 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce8f467b-ba3e-4db8-9b80-392e2d0ef58f-kube-api-access\") pod \"ce8f467b-ba3e-4db8-9b80-392e2d0ef58f\" (UID: \"ce8f467b-ba3e-4db8-9b80-392e2d0ef58f\") " Nov 23 06:47:03 crc kubenswrapper[4681]: I1123 06:47:03.610772 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce8f467b-ba3e-4db8-9b80-392e2d0ef58f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ce8f467b-ba3e-4db8-9b80-392e2d0ef58f" (UID: "ce8f467b-ba3e-4db8-9b80-392e2d0ef58f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:47:03 crc kubenswrapper[4681]: I1123 06:47:03.610944 4681 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce8f467b-ba3e-4db8-9b80-392e2d0ef58f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 23 06:47:03 crc kubenswrapper[4681]: I1123 06:47:03.627724 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce8f467b-ba3e-4db8-9b80-392e2d0ef58f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ce8f467b-ba3e-4db8-9b80-392e2d0ef58f" (UID: "ce8f467b-ba3e-4db8-9b80-392e2d0ef58f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:47:03 crc kubenswrapper[4681]: I1123 06:47:03.711764 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce8f467b-ba3e-4db8-9b80-392e2d0ef58f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 23 06:47:03 crc kubenswrapper[4681]: I1123 06:47:03.787584 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ce8f467b-ba3e-4db8-9b80-392e2d0ef58f","Type":"ContainerDied","Data":"aa56e6fb261f407d12e7bfe842edb68de2cbf61682da3fe677b800bd76adf20d"} Nov 23 06:47:03 crc kubenswrapper[4681]: I1123 06:47:03.787622 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa56e6fb261f407d12e7bfe842edb68de2cbf61682da3fe677b800bd76adf20d" Nov 23 06:47:03 crc kubenswrapper[4681]: I1123 06:47:03.787671 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 06:47:03 crc kubenswrapper[4681]: I1123 06:47:03.826287 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-b7ms9" Nov 23 06:47:03 crc kubenswrapper[4681]: I1123 06:47:03.828514 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-b7ms9" Nov 23 06:47:09 crc kubenswrapper[4681]: I1123 06:47:09.482175 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6eef1a94-78a8-4389-b1fe-2db3786ba043-metrics-certs\") pod \"network-metrics-daemon-kv72z\" (UID: \"6eef1a94-78a8-4389-b1fe-2db3786ba043\") " pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:47:09 crc kubenswrapper[4681]: I1123 06:47:09.486679 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6eef1a94-78a8-4389-b1fe-2db3786ba043-metrics-certs\") pod \"network-metrics-daemon-kv72z\" (UID: \"6eef1a94-78a8-4389-b1fe-2db3786ba043\") " pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:47:09 crc kubenswrapper[4681]: I1123 06:47:09.562116 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv72z" Nov 23 06:47:09 crc kubenswrapper[4681]: I1123 06:47:09.684621 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:47:10 crc kubenswrapper[4681]: I1123 06:47:10.865613 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-59rqt" Nov 23 06:47:10 crc kubenswrapper[4681]: I1123 06:47:10.870087 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-59rqt" Nov 23 06:47:12 crc kubenswrapper[4681]: I1123 06:47:12.295808 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 06:47:12 crc kubenswrapper[4681]: I1123 06:47:12.295865 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 06:47:14 crc kubenswrapper[4681]: E1123 06:47:14.940555 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 23 06:47:14 crc kubenswrapper[4681]: E1123 06:47:14.940936 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6q28s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fmqjr_openshift-marketplace(d106e4dc-f7ce-4270-9229-573ec5586711): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: 
copying config: context canceled" logger="UnhandledError" Nov 23 06:47:14 crc kubenswrapper[4681]: E1123 06:47:14.943170 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-fmqjr" podUID="d106e4dc-f7ce-4270-9229-573ec5586711" Nov 23 06:47:15 crc kubenswrapper[4681]: E1123 06:47:15.011362 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 23 06:47:15 crc kubenswrapper[4681]: E1123 06:47:15.011577 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hdcfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-xzkhc_openshift-marketplace(bd9b9442-5d36-4b7c-bc39-d403156b0c66): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 23 06:47:15 crc kubenswrapper[4681]: E1123 06:47:15.012831 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-xzkhc" podUID="bd9b9442-5d36-4b7c-bc39-d403156b0c66" Nov 23 06:47:15 crc kubenswrapper[4681]: I1123 06:47:15.405106 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kv72z"] Nov 23 06:47:15 crc kubenswrapper[4681]: W1123 06:47:15.456941 4681 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6eef1a94_78a8_4389_b1fe_2db3786ba043.slice/crio-95b6b8cdb7ec0e021d4d4aee644fcfa548869e52087bb1d5d3a74ef2661f7e51 WatchSource:0}: Error finding container 95b6b8cdb7ec0e021d4d4aee644fcfa548869e52087bb1d5d3a74ef2661f7e51: Status 404 returned error can't find the container with id 95b6b8cdb7ec0e021d4d4aee644fcfa548869e52087bb1d5d3a74ef2661f7e51 Nov 23 06:47:15 crc kubenswrapper[4681]: I1123 06:47:15.858412 4681 generic.go:334] "Generic (PLEG): container finished" podID="bcb481cb-7b55-4540-9e64-44a893c3d3f7" containerID="48d7e716be44d80f85b98ebb86bdb0a72e95360f33a75c05029334b1e1201119" exitCode=0 Nov 23 06:47:15 crc kubenswrapper[4681]: I1123 06:47:15.858507 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jszjx" event={"ID":"bcb481cb-7b55-4540-9e64-44a893c3d3f7","Type":"ContainerDied","Data":"48d7e716be44d80f85b98ebb86bdb0a72e95360f33a75c05029334b1e1201119"} Nov 23 06:47:15 crc kubenswrapper[4681]: I1123 06:47:15.861829 4681 generic.go:334] "Generic (PLEG): container finished" podID="233c1d06-f0dd-46c4-8b90-e213255bf126" containerID="99acd1f3bdbbc2d156107505695b93ccde88579558efc0e3ac98f42cec8d9913" exitCode=0 Nov 23 06:47:15 crc kubenswrapper[4681]: I1123 06:47:15.861872 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bbdhw" event={"ID":"233c1d06-f0dd-46c4-8b90-e213255bf126","Type":"ContainerDied","Data":"99acd1f3bdbbc2d156107505695b93ccde88579558efc0e3ac98f42cec8d9913"} Nov 23 06:47:15 crc kubenswrapper[4681]: I1123 06:47:15.867131 4681 generic.go:334] "Generic (PLEG): container finished" podID="b61682f3-e3c0-4fda-9c80-52f67f9ee9c9" containerID="feef89f4db6b8047ec1ce790dabfd7e51a7d839522b5eb049883e96c143860d9" exitCode=0 Nov 23 06:47:15 crc kubenswrapper[4681]: I1123 06:47:15.867209 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-48jrc" event={"ID":"b61682f3-e3c0-4fda-9c80-52f67f9ee9c9","Type":"ContainerDied","Data":"feef89f4db6b8047ec1ce790dabfd7e51a7d839522b5eb049883e96c143860d9"} Nov 23 06:47:15 crc kubenswrapper[4681]: I1123 06:47:15.876351 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kv72z" event={"ID":"6eef1a94-78a8-4389-b1fe-2db3786ba043","Type":"ContainerStarted","Data":"1003b2b53d0d7b0073ae218832dac49d31dba1d66d6f67a105a96c6035a78ec1"} Nov 23 06:47:15 crc kubenswrapper[4681]: I1123 06:47:15.876379 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kv72z" event={"ID":"6eef1a94-78a8-4389-b1fe-2db3786ba043","Type":"ContainerStarted","Data":"871a3cfd79aa6b5fe47c3798dd37dd8d5b586a02fff5898a8ba605c980634c69"} Nov 23 06:47:15 crc kubenswrapper[4681]: I1123 06:47:15.876390 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kv72z" event={"ID":"6eef1a94-78a8-4389-b1fe-2db3786ba043","Type":"ContainerStarted","Data":"95b6b8cdb7ec0e021d4d4aee644fcfa548869e52087bb1d5d3a74ef2661f7e51"} Nov 23 06:47:15 crc kubenswrapper[4681]: I1123 06:47:15.880496 4681 generic.go:334] "Generic (PLEG): container finished" podID="fdfd882e-f012-452f-8709-32ddb2ddb019" containerID="01fae3d05805780ef133469a382ad4b57f52a2e4613a959f70f5f1b34dbd6a3b" exitCode=0 Nov 23 06:47:15 crc kubenswrapper[4681]: I1123 06:47:15.880560 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqkpz" 
event={"ID":"fdfd882e-f012-452f-8709-32ddb2ddb019","Type":"ContainerDied","Data":"01fae3d05805780ef133469a382ad4b57f52a2e4613a959f70f5f1b34dbd6a3b"} Nov 23 06:47:15 crc kubenswrapper[4681]: I1123 06:47:15.885064 4681 generic.go:334] "Generic (PLEG): container finished" podID="d43c43f7-de50-40d4-8910-b502d1def095" containerID="c748f0b91d0f86616088fe2030ad75bbf9b85a2bffed5f8d8a954d7538aa3be5" exitCode=0 Nov 23 06:47:15 crc kubenswrapper[4681]: I1123 06:47:15.885726 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m56bk" event={"ID":"d43c43f7-de50-40d4-8910-b502d1def095","Type":"ContainerDied","Data":"c748f0b91d0f86616088fe2030ad75bbf9b85a2bffed5f8d8a954d7538aa3be5"} Nov 23 06:47:15 crc kubenswrapper[4681]: I1123 06:47:15.887444 4681 generic.go:334] "Generic (PLEG): container finished" podID="8057fec0-6964-4c2a-9c64-79373dd7eb06" containerID="52f30520944f8f50e439530f3c04a4c53fa6c82c8c38e9e2420e50473467782f" exitCode=0 Nov 23 06:47:15 crc kubenswrapper[4681]: I1123 06:47:15.887889 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2m4k" event={"ID":"8057fec0-6964-4c2a-9c64-79373dd7eb06","Type":"ContainerDied","Data":"52f30520944f8f50e439530f3c04a4c53fa6c82c8c38e9e2420e50473467782f"} Nov 23 06:47:15 crc kubenswrapper[4681]: E1123 06:47:15.891403 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fmqjr" podUID="d106e4dc-f7ce-4270-9229-573ec5586711" Nov 23 06:47:15 crc kubenswrapper[4681]: E1123 06:47:15.893500 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-xzkhc" podUID="bd9b9442-5d36-4b7c-bc39-d403156b0c66" Nov 23 06:47:15 crc kubenswrapper[4681]: I1123 06:47:15.907191 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-kv72z" podStartSLOduration=147.90717828 podStartE2EDuration="2m27.90717828s" podCreationTimestamp="2025-11-23 06:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:15.905313764 +0000 UTC m=+172.974823001" watchObservedRunningTime="2025-11-23 06:47:15.90717828 +0000 UTC m=+172.976687517" Nov 23 06:47:16 crc kubenswrapper[4681]: I1123 06:47:16.895323 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jszjx" event={"ID":"bcb481cb-7b55-4540-9e64-44a893c3d3f7","Type":"ContainerStarted","Data":"0e1ad690a461746e50b116a647dc9b3b4dfc329409b294966dbbfd9f516c2cfa"} Nov 23 06:47:16 crc kubenswrapper[4681]: I1123 06:47:16.898639 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bbdhw" event={"ID":"233c1d06-f0dd-46c4-8b90-e213255bf126","Type":"ContainerStarted","Data":"e580cfba4d6786d76a31a69e9ad022f13078a0c323a3a19b6027cb26ade26f6b"} Nov 23 06:47:16 crc kubenswrapper[4681]: I1123 06:47:16.901821 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-48jrc" 
event={"ID":"b61682f3-e3c0-4fda-9c80-52f67f9ee9c9","Type":"ContainerStarted","Data":"32875ed3ae69df7080fa0fa2a95fbaf161a397ac5de4848f351f6e105bd735dc"} Nov 23 06:47:16 crc kubenswrapper[4681]: I1123 06:47:16.903766 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqkpz" event={"ID":"fdfd882e-f012-452f-8709-32ddb2ddb019","Type":"ContainerStarted","Data":"0cedf9bbd44387af7469b8e604dbfe2bc4e6bd6a59c4b509d124c0f02cf685d1"} Nov 23 06:47:16 crc kubenswrapper[4681]: I1123 06:47:16.905631 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m56bk" event={"ID":"d43c43f7-de50-40d4-8910-b502d1def095","Type":"ContainerStarted","Data":"6fd412850fbd663191b61fe9452feb097b192741efddad3356d3ebbdcb7e1d44"} Nov 23 06:47:16 crc kubenswrapper[4681]: I1123 06:47:16.908028 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2m4k" event={"ID":"8057fec0-6964-4c2a-9c64-79373dd7eb06","Type":"ContainerStarted","Data":"40f4aa3a2b173e80d0539d1ba75c3f6d80ed5c48cfe623889f32a0029432fa75"} Nov 23 06:47:16 crc kubenswrapper[4681]: I1123 06:47:16.968432 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nqkpz" podStartSLOduration=3.094440342 podStartE2EDuration="29.968420912s" podCreationTimestamp="2025-11-23 06:46:47 +0000 UTC" firstStartedPulling="2025-11-23 06:46:49.495579048 +0000 UTC m=+146.565088286" lastFinishedPulling="2025-11-23 06:47:16.369559619 +0000 UTC m=+173.439068856" observedRunningTime="2025-11-23 06:47:16.955276767 +0000 UTC m=+174.024786004" watchObservedRunningTime="2025-11-23 06:47:16.968420912 +0000 UTC m=+174.037930149" Nov 23 06:47:16 crc kubenswrapper[4681]: I1123 06:47:16.969472 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jszjx" podStartSLOduration=3.950587513 podStartE2EDuration="31.969451692s" podCreationTimestamp="2025-11-23 06:46:45 +0000 UTC" firstStartedPulling="2025-11-23 06:46:48.308553831 +0000 UTC m=+145.378063068" lastFinishedPulling="2025-11-23 06:47:16.32741801 +0000 UTC m=+173.396927247" observedRunningTime="2025-11-23 06:47:16.936839724 +0000 UTC m=+174.006348960" watchObservedRunningTime="2025-11-23 06:47:16.969451692 +0000 UTC m=+174.038960929" Nov 23 06:47:16 crc kubenswrapper[4681]: I1123 06:47:16.985228 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bbdhw" podStartSLOduration=3.172436853 podStartE2EDuration="29.985207587s" podCreationTimestamp="2025-11-23 06:46:47 +0000 UTC" firstStartedPulling="2025-11-23 06:46:49.502601017 +0000 UTC m=+146.572110254" lastFinishedPulling="2025-11-23 06:47:16.315371751 +0000 UTC m=+173.384880988" observedRunningTime="2025-11-23 06:47:16.984209168 +0000 UTC m=+174.053718405" watchObservedRunningTime="2025-11-23 06:47:16.985207587 +0000 UTC m=+174.054716823" Nov 23 06:47:17 crc kubenswrapper[4681]: I1123 06:47:17.009325 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j2m4k" podStartSLOduration=2.128535325 podStartE2EDuration="29.009306116s" podCreationTimestamp="2025-11-23 06:46:48 +0000 UTC" firstStartedPulling="2025-11-23 06:46:49.501524141 +0000 UTC m=+146.571033368" lastFinishedPulling="2025-11-23 06:47:16.382294922 +0000 UTC m=+173.451804159" observedRunningTime="2025-11-23 06:47:17.007494982 +0000 UTC m=+174.077004219" 
watchObservedRunningTime="2025-11-23 06:47:17.009306116 +0000 UTC m=+174.078815353" Nov 23 06:47:17 crc kubenswrapper[4681]: I1123 06:47:17.034654 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-48jrc" podStartSLOduration=5.007063428 podStartE2EDuration="33.034625805s" podCreationTimestamp="2025-11-23 06:46:44 +0000 UTC" firstStartedPulling="2025-11-23 06:46:48.327616759 +0000 UTC m=+145.397125996" lastFinishedPulling="2025-11-23 06:47:16.355179136 +0000 UTC m=+173.424688373" observedRunningTime="2025-11-23 06:47:17.031743967 +0000 UTC m=+174.101253204" watchObservedRunningTime="2025-11-23 06:47:17.034625805 +0000 UTC m=+174.104135043" Nov 23 06:47:17 crc kubenswrapper[4681]: I1123 06:47:17.061095 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m56bk" podStartSLOduration=5.000111241 podStartE2EDuration="33.061071564s" podCreationTimestamp="2025-11-23 06:46:44 +0000 UTC" firstStartedPulling="2025-11-23 06:46:48.308624705 +0000 UTC m=+145.378133942" lastFinishedPulling="2025-11-23 06:47:16.369585028 +0000 UTC m=+173.439094265" observedRunningTime="2025-11-23 06:47:17.058994105 +0000 UTC m=+174.128503343" watchObservedRunningTime="2025-11-23 06:47:17.061071564 +0000 UTC m=+174.130580801" Nov 23 06:47:17 crc kubenswrapper[4681]: I1123 06:47:17.799366 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bbdhw" Nov 23 06:47:17 crc kubenswrapper[4681]: I1123 06:47:17.801210 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bbdhw" Nov 23 06:47:18 crc kubenswrapper[4681]: I1123 06:47:18.324580 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nqkpz" Nov 23 06:47:18 crc kubenswrapper[4681]: I1123 06:47:18.324932 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nqkpz" Nov 23 06:47:18 crc kubenswrapper[4681]: I1123 06:47:18.757801 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j2m4k" Nov 23 06:47:18 crc kubenswrapper[4681]: I1123 06:47:18.759057 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j2m4k" Nov 23 06:47:18 crc kubenswrapper[4681]: I1123 06:47:18.915209 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-bbdhw" podUID="233c1d06-f0dd-46c4-8b90-e213255bf126" containerName="registry-server" probeResult="failure" output=< Nov 23 06:47:18 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Nov 23 06:47:18 crc kubenswrapper[4681]: > Nov 23 06:47:19 crc kubenswrapper[4681]: I1123 06:47:19.351628 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nqkpz" podUID="fdfd882e-f012-452f-8709-32ddb2ddb019" containerName="registry-server" probeResult="failure" output=< Nov 23 06:47:19 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Nov 23 06:47:19 crc kubenswrapper[4681]: > Nov 23 06:47:19 crc kubenswrapper[4681]: I1123 06:47:19.785552 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j2m4k" podUID="8057fec0-6964-4c2a-9c64-79373dd7eb06" containerName="registry-server" 
probeResult="failure" output=< Nov 23 06:47:19 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Nov 23 06:47:19 crc kubenswrapper[4681]: > Nov 23 06:47:21 crc kubenswrapper[4681]: I1123 06:47:21.636135 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cn5t4" Nov 23 06:47:25 crc kubenswrapper[4681]: I1123 06:47:25.414104 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m56bk" Nov 23 06:47:25 crc kubenswrapper[4681]: I1123 06:47:25.414571 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m56bk" Nov 23 06:47:25 crc kubenswrapper[4681]: I1123 06:47:25.462401 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m56bk" Nov 23 06:47:25 crc kubenswrapper[4681]: I1123 06:47:25.753488 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jszjx" Nov 23 06:47:25 crc kubenswrapper[4681]: I1123 06:47:25.753530 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jszjx" Nov 23 06:47:25 crc kubenswrapper[4681]: I1123 06:47:25.785604 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jszjx" Nov 23 06:47:25 crc kubenswrapper[4681]: I1123 06:47:25.874356 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-48jrc" Nov 23 06:47:25 crc kubenswrapper[4681]: I1123 06:47:25.874405 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-48jrc" Nov 23 06:47:25 crc kubenswrapper[4681]: I1123 06:47:25.911869 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-48jrc" Nov 23 06:47:25 crc kubenswrapper[4681]: I1123 06:47:25.996827 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jszjx" Nov 23 06:47:26 crc kubenswrapper[4681]: I1123 06:47:26.001912 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m56bk" Nov 23 06:47:26 crc kubenswrapper[4681]: I1123 06:47:26.004800 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-48jrc" Nov 23 06:47:27 crc kubenswrapper[4681]: I1123 06:47:27.830408 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bbdhw" Nov 23 06:47:27 crc kubenswrapper[4681]: I1123 06:47:27.858663 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bbdhw" Nov 23 06:47:28 crc kubenswrapper[4681]: I1123 06:47:28.285557 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jszjx"] Nov 23 06:47:28 crc kubenswrapper[4681]: I1123 06:47:28.285763 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jszjx" podUID="bcb481cb-7b55-4540-9e64-44a893c3d3f7" containerName="registry-server" 
containerID="cri-o://0e1ad690a461746e50b116a647dc9b3b4dfc329409b294966dbbfd9f516c2cfa" gracePeriod=2 Nov 23 06:47:28 crc kubenswrapper[4681]: I1123 06:47:28.355441 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nqkpz" Nov 23 06:47:28 crc kubenswrapper[4681]: I1123 06:47:28.385996 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nqkpz" Nov 23 06:47:28 crc kubenswrapper[4681]: I1123 06:47:28.632844 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jszjx" Nov 23 06:47:28 crc kubenswrapper[4681]: I1123 06:47:28.719189 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb481cb-7b55-4540-9e64-44a893c3d3f7-utilities\") pod \"bcb481cb-7b55-4540-9e64-44a893c3d3f7\" (UID: \"bcb481cb-7b55-4540-9e64-44a893c3d3f7\") " Nov 23 06:47:28 crc kubenswrapper[4681]: I1123 06:47:28.719248 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cc72\" (UniqueName: \"kubernetes.io/projected/bcb481cb-7b55-4540-9e64-44a893c3d3f7-kube-api-access-6cc72\") pod \"bcb481cb-7b55-4540-9e64-44a893c3d3f7\" (UID: \"bcb481cb-7b55-4540-9e64-44a893c3d3f7\") " Nov 23 06:47:28 crc kubenswrapper[4681]: I1123 06:47:28.720016 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcb481cb-7b55-4540-9e64-44a893c3d3f7-utilities" (OuterVolumeSpecName: "utilities") pod "bcb481cb-7b55-4540-9e64-44a893c3d3f7" (UID: "bcb481cb-7b55-4540-9e64-44a893c3d3f7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:47:28 crc kubenswrapper[4681]: I1123 06:47:28.720583 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb481cb-7b55-4540-9e64-44a893c3d3f7-catalog-content\") pod \"bcb481cb-7b55-4540-9e64-44a893c3d3f7\" (UID: \"bcb481cb-7b55-4540-9e64-44a893c3d3f7\") " Nov 23 06:47:28 crc kubenswrapper[4681]: I1123 06:47:28.720953 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb481cb-7b55-4540-9e64-44a893c3d3f7-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 06:47:28 crc kubenswrapper[4681]: I1123 06:47:28.724764 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcb481cb-7b55-4540-9e64-44a893c3d3f7-kube-api-access-6cc72" (OuterVolumeSpecName: "kube-api-access-6cc72") pod "bcb481cb-7b55-4540-9e64-44a893c3d3f7" (UID: "bcb481cb-7b55-4540-9e64-44a893c3d3f7"). InnerVolumeSpecName "kube-api-access-6cc72". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:47:28 crc kubenswrapper[4681]: I1123 06:47:28.766830 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcb481cb-7b55-4540-9e64-44a893c3d3f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bcb481cb-7b55-4540-9e64-44a893c3d3f7" (UID: "bcb481cb-7b55-4540-9e64-44a893c3d3f7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:47:28 crc kubenswrapper[4681]: I1123 06:47:28.800250 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j2m4k" Nov 23 06:47:28 crc kubenswrapper[4681]: I1123 06:47:28.822154 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb481cb-7b55-4540-9e64-44a893c3d3f7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 06:47:28 crc kubenswrapper[4681]: I1123 06:47:28.822193 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cc72\" (UniqueName: \"kubernetes.io/projected/bcb481cb-7b55-4540-9e64-44a893c3d3f7-kube-api-access-6cc72\") on node \"crc\" DevicePath \"\"" Nov 23 06:47:28 crc kubenswrapper[4681]: I1123 06:47:28.828305 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j2m4k" Nov 23 06:47:29 crc kubenswrapper[4681]: I1123 06:47:29.004355 4681 generic.go:334] "Generic (PLEG): container finished" podID="bcb481cb-7b55-4540-9e64-44a893c3d3f7" containerID="0e1ad690a461746e50b116a647dc9b3b4dfc329409b294966dbbfd9f516c2cfa" exitCode=0 Nov 23 06:47:29 crc kubenswrapper[4681]: I1123 06:47:29.004477 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jszjx" Nov 23 06:47:29 crc kubenswrapper[4681]: I1123 06:47:29.004528 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jszjx" event={"ID":"bcb481cb-7b55-4540-9e64-44a893c3d3f7","Type":"ContainerDied","Data":"0e1ad690a461746e50b116a647dc9b3b4dfc329409b294966dbbfd9f516c2cfa"} Nov 23 06:47:29 crc kubenswrapper[4681]: I1123 06:47:29.004571 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jszjx" event={"ID":"bcb481cb-7b55-4540-9e64-44a893c3d3f7","Type":"ContainerDied","Data":"fc76537d0a3407048faae938465f5c9e1be3ba78acfe754004026f598a8f715e"} Nov 23 06:47:29 crc kubenswrapper[4681]: I1123 06:47:29.004589 4681 scope.go:117] "RemoveContainer" containerID="0e1ad690a461746e50b116a647dc9b3b4dfc329409b294966dbbfd9f516c2cfa" Nov 23 06:47:29 crc kubenswrapper[4681]: I1123 06:47:29.018546 4681 scope.go:117] "RemoveContainer" containerID="48d7e716be44d80f85b98ebb86bdb0a72e95360f33a75c05029334b1e1201119" Nov 23 06:47:29 crc kubenswrapper[4681]: I1123 06:47:29.033949 4681 scope.go:117] "RemoveContainer" containerID="83bad7d0938857c5d44af3ec208da5d0d1f7351b296af17410eced27c3de10f0" Nov 23 06:47:29 crc kubenswrapper[4681]: I1123 06:47:29.037774 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jszjx"] Nov 23 06:47:29 crc kubenswrapper[4681]: I1123 06:47:29.042849 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jszjx"] Nov 23 06:47:29 crc kubenswrapper[4681]: I1123 06:47:29.048244 4681 scope.go:117] "RemoveContainer" containerID="0e1ad690a461746e50b116a647dc9b3b4dfc329409b294966dbbfd9f516c2cfa" Nov 23 06:47:29 crc kubenswrapper[4681]: E1123 06:47:29.048755 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e1ad690a461746e50b116a647dc9b3b4dfc329409b294966dbbfd9f516c2cfa\": container with ID starting with 0e1ad690a461746e50b116a647dc9b3b4dfc329409b294966dbbfd9f516c2cfa not found: ID does not exist" 
containerID="0e1ad690a461746e50b116a647dc9b3b4dfc329409b294966dbbfd9f516c2cfa" Nov 23 06:47:29 crc kubenswrapper[4681]: I1123 06:47:29.048787 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e1ad690a461746e50b116a647dc9b3b4dfc329409b294966dbbfd9f516c2cfa"} err="failed to get container status \"0e1ad690a461746e50b116a647dc9b3b4dfc329409b294966dbbfd9f516c2cfa\": rpc error: code = NotFound desc = could not find container \"0e1ad690a461746e50b116a647dc9b3b4dfc329409b294966dbbfd9f516c2cfa\": container with ID starting with 0e1ad690a461746e50b116a647dc9b3b4dfc329409b294966dbbfd9f516c2cfa not found: ID does not exist" Nov 23 06:47:29 crc kubenswrapper[4681]: I1123 06:47:29.048829 4681 scope.go:117] "RemoveContainer" containerID="48d7e716be44d80f85b98ebb86bdb0a72e95360f33a75c05029334b1e1201119" Nov 23 06:47:29 crc kubenswrapper[4681]: E1123 06:47:29.049250 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48d7e716be44d80f85b98ebb86bdb0a72e95360f33a75c05029334b1e1201119\": container with ID starting with 48d7e716be44d80f85b98ebb86bdb0a72e95360f33a75c05029334b1e1201119 not found: ID does not exist" containerID="48d7e716be44d80f85b98ebb86bdb0a72e95360f33a75c05029334b1e1201119" Nov 23 06:47:29 crc kubenswrapper[4681]: I1123 06:47:29.049364 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48d7e716be44d80f85b98ebb86bdb0a72e95360f33a75c05029334b1e1201119"} err="failed to get container status \"48d7e716be44d80f85b98ebb86bdb0a72e95360f33a75c05029334b1e1201119\": rpc error: code = NotFound desc = could not find container \"48d7e716be44d80f85b98ebb86bdb0a72e95360f33a75c05029334b1e1201119\": container with ID starting with 48d7e716be44d80f85b98ebb86bdb0a72e95360f33a75c05029334b1e1201119 not found: ID does not exist" Nov 23 06:47:29 crc kubenswrapper[4681]: I1123 06:47:29.049486 4681 scope.go:117] "RemoveContainer" containerID="83bad7d0938857c5d44af3ec208da5d0d1f7351b296af17410eced27c3de10f0" Nov 23 06:47:29 crc kubenswrapper[4681]: E1123 06:47:29.049854 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83bad7d0938857c5d44af3ec208da5d0d1f7351b296af17410eced27c3de10f0\": container with ID starting with 83bad7d0938857c5d44af3ec208da5d0d1f7351b296af17410eced27c3de10f0 not found: ID does not exist" containerID="83bad7d0938857c5d44af3ec208da5d0d1f7351b296af17410eced27c3de10f0" Nov 23 06:47:29 crc kubenswrapper[4681]: I1123 06:47:29.049875 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83bad7d0938857c5d44af3ec208da5d0d1f7351b296af17410eced27c3de10f0"} err="failed to get container status \"83bad7d0938857c5d44af3ec208da5d0d1f7351b296af17410eced27c3de10f0\": rpc error: code = NotFound desc = could not find container \"83bad7d0938857c5d44af3ec208da5d0d1f7351b296af17410eced27c3de10f0\": container with ID starting with 83bad7d0938857c5d44af3ec208da5d0d1f7351b296af17410eced27c3de10f0 not found: ID does not exist" Nov 23 06:47:29 crc kubenswrapper[4681]: I1123 06:47:29.249319 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cq2gd"] Nov 23 06:47:29 crc kubenswrapper[4681]: I1123 06:47:29.261105 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcb481cb-7b55-4540-9e64-44a893c3d3f7" 
path="/var/lib/kubelet/pods/bcb481cb-7b55-4540-9e64-44a893c3d3f7/volumes" Nov 23 06:47:30 crc kubenswrapper[4681]: I1123 06:47:30.009447 4681 generic.go:334] "Generic (PLEG): container finished" podID="d106e4dc-f7ce-4270-9229-573ec5586711" containerID="23f4498d17ca5ec16c42675cc8ab4a2bc8a996f0efb8d6248a96d90597f51d4a" exitCode=0 Nov 23 06:47:30 crc kubenswrapper[4681]: I1123 06:47:30.009482 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fmqjr" event={"ID":"d106e4dc-f7ce-4270-9229-573ec5586711","Type":"ContainerDied","Data":"23f4498d17ca5ec16c42675cc8ab4a2bc8a996f0efb8d6248a96d90597f51d4a"} Nov 23 06:47:30 crc kubenswrapper[4681]: I1123 06:47:30.174453 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:47:30 crc kubenswrapper[4681]: I1123 06:47:30.686821 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bbdhw"] Nov 23 06:47:30 crc kubenswrapper[4681]: I1123 06:47:30.687336 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bbdhw" podUID="233c1d06-f0dd-46c4-8b90-e213255bf126" containerName="registry-server" containerID="cri-o://e580cfba4d6786d76a31a69e9ad022f13078a0c323a3a19b6027cb26ade26f6b" gracePeriod=2 Nov 23 06:47:30 crc kubenswrapper[4681]: I1123 06:47:30.985637 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bbdhw" Nov 23 06:47:31 crc kubenswrapper[4681]: I1123 06:47:31.018100 4681 generic.go:334] "Generic (PLEG): container finished" podID="233c1d06-f0dd-46c4-8b90-e213255bf126" containerID="e580cfba4d6786d76a31a69e9ad022f13078a0c323a3a19b6027cb26ade26f6b" exitCode=0 Nov 23 06:47:31 crc kubenswrapper[4681]: I1123 06:47:31.018174 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bbdhw" event={"ID":"233c1d06-f0dd-46c4-8b90-e213255bf126","Type":"ContainerDied","Data":"e580cfba4d6786d76a31a69e9ad022f13078a0c323a3a19b6027cb26ade26f6b"} Nov 23 06:47:31 crc kubenswrapper[4681]: I1123 06:47:31.018219 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bbdhw" Nov 23 06:47:31 crc kubenswrapper[4681]: I1123 06:47:31.018993 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bbdhw" event={"ID":"233c1d06-f0dd-46c4-8b90-e213255bf126","Type":"ContainerDied","Data":"e04a41d8262b73c3fe1f989af6def92f58dbc93d66044270a213da9c5532f7c8"} Nov 23 06:47:31 crc kubenswrapper[4681]: I1123 06:47:31.019018 4681 scope.go:117] "RemoveContainer" containerID="e580cfba4d6786d76a31a69e9ad022f13078a0c323a3a19b6027cb26ade26f6b" Nov 23 06:47:31 crc kubenswrapper[4681]: I1123 06:47:31.021674 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fmqjr" event={"ID":"d106e4dc-f7ce-4270-9229-573ec5586711","Type":"ContainerStarted","Data":"a2cfa21ad803c9f9dd13a8f184a1cb13c346020fac281906bdaaa6a3f563c418"} Nov 23 06:47:31 crc kubenswrapper[4681]: I1123 06:47:31.035423 4681 scope.go:117] "RemoveContainer" containerID="99acd1f3bdbbc2d156107505695b93ccde88579558efc0e3ac98f42cec8d9913" Nov 23 06:47:31 crc kubenswrapper[4681]: I1123 06:47:31.046625 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/233c1d06-f0dd-46c4-8b90-e213255bf126-utilities\") pod \"233c1d06-f0dd-46c4-8b90-e213255bf126\" (UID: \"233c1d06-f0dd-46c4-8b90-e213255bf126\") " Nov 23 06:47:31 crc kubenswrapper[4681]: I1123 06:47:31.046752 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/233c1d06-f0dd-46c4-8b90-e213255bf126-catalog-content\") pod \"233c1d06-f0dd-46c4-8b90-e213255bf126\" (UID: \"233c1d06-f0dd-46c4-8b90-e213255bf126\") " Nov 23 06:47:31 crc kubenswrapper[4681]: I1123 06:47:31.046790 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrs9n\" (UniqueName: \"kubernetes.io/projected/233c1d06-f0dd-46c4-8b90-e213255bf126-kube-api-access-vrs9n\") pod \"233c1d06-f0dd-46c4-8b90-e213255bf126\" (UID: \"233c1d06-f0dd-46c4-8b90-e213255bf126\") " Nov 23 06:47:31 crc kubenswrapper[4681]: I1123 06:47:31.047577 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/233c1d06-f0dd-46c4-8b90-e213255bf126-utilities" (OuterVolumeSpecName: "utilities") pod "233c1d06-f0dd-46c4-8b90-e213255bf126" (UID: "233c1d06-f0dd-46c4-8b90-e213255bf126"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:47:31 crc kubenswrapper[4681]: I1123 06:47:31.053354 4681 scope.go:117] "RemoveContainer" containerID="cde279f3c43516f56b800e56bdb390f8083aa9beed1a78f97eb84de3f3809ff2" Nov 23 06:47:31 crc kubenswrapper[4681]: I1123 06:47:31.055062 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/233c1d06-f0dd-46c4-8b90-e213255bf126-kube-api-access-vrs9n" (OuterVolumeSpecName: "kube-api-access-vrs9n") pod "233c1d06-f0dd-46c4-8b90-e213255bf126" (UID: "233c1d06-f0dd-46c4-8b90-e213255bf126"). InnerVolumeSpecName "kube-api-access-vrs9n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:47:31 crc kubenswrapper[4681]: I1123 06:47:31.055657 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fmqjr" podStartSLOduration=4.038056219 podStartE2EDuration="45.055642492s" podCreationTimestamp="2025-11-23 06:46:46 +0000 UTC" firstStartedPulling="2025-11-23 06:46:49.495237843 +0000 UTC m=+146.564747080" lastFinishedPulling="2025-11-23 06:47:30.512824115 +0000 UTC m=+187.582333353" observedRunningTime="2025-11-23 06:47:31.051269533 +0000 UTC m=+188.120778770" watchObservedRunningTime="2025-11-23 06:47:31.055642492 +0000 UTC m=+188.125151728" Nov 23 06:47:31 crc kubenswrapper[4681]: I1123 06:47:31.066452 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/233c1d06-f0dd-46c4-8b90-e213255bf126-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "233c1d06-f0dd-46c4-8b90-e213255bf126" (UID: "233c1d06-f0dd-46c4-8b90-e213255bf126"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:47:31 crc kubenswrapper[4681]: I1123 06:47:31.069645 4681 scope.go:117] "RemoveContainer" containerID="e580cfba4d6786d76a31a69e9ad022f13078a0c323a3a19b6027cb26ade26f6b" Nov 23 06:47:31 crc kubenswrapper[4681]: E1123 06:47:31.070061 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e580cfba4d6786d76a31a69e9ad022f13078a0c323a3a19b6027cb26ade26f6b\": container with ID starting with e580cfba4d6786d76a31a69e9ad022f13078a0c323a3a19b6027cb26ade26f6b not found: ID does not exist" containerID="e580cfba4d6786d76a31a69e9ad022f13078a0c323a3a19b6027cb26ade26f6b" Nov 23 06:47:31 crc kubenswrapper[4681]: I1123 06:47:31.070094 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e580cfba4d6786d76a31a69e9ad022f13078a0c323a3a19b6027cb26ade26f6b"} err="failed to get container status \"e580cfba4d6786d76a31a69e9ad022f13078a0c323a3a19b6027cb26ade26f6b\": rpc error: code = NotFound desc = could not find container \"e580cfba4d6786d76a31a69e9ad022f13078a0c323a3a19b6027cb26ade26f6b\": container with ID starting with e580cfba4d6786d76a31a69e9ad022f13078a0c323a3a19b6027cb26ade26f6b not found: ID does not exist" Nov 23 06:47:31 crc kubenswrapper[4681]: I1123 06:47:31.070120 4681 scope.go:117] "RemoveContainer" containerID="99acd1f3bdbbc2d156107505695b93ccde88579558efc0e3ac98f42cec8d9913" Nov 23 06:47:31 crc kubenswrapper[4681]: E1123 06:47:31.070550 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99acd1f3bdbbc2d156107505695b93ccde88579558efc0e3ac98f42cec8d9913\": container with ID starting with 99acd1f3bdbbc2d156107505695b93ccde88579558efc0e3ac98f42cec8d9913 not found: ID does not exist" containerID="99acd1f3bdbbc2d156107505695b93ccde88579558efc0e3ac98f42cec8d9913" Nov 23 06:47:31 crc kubenswrapper[4681]: I1123 06:47:31.070655 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99acd1f3bdbbc2d156107505695b93ccde88579558efc0e3ac98f42cec8d9913"} err="failed to get container status \"99acd1f3bdbbc2d156107505695b93ccde88579558efc0e3ac98f42cec8d9913\": rpc error: code = NotFound desc = could not find container \"99acd1f3bdbbc2d156107505695b93ccde88579558efc0e3ac98f42cec8d9913\": container with ID starting with 
Nov 23 06:47:31 crc kubenswrapper[4681]: I1123 06:47:31.070753 4681 scope.go:117] "RemoveContainer" containerID="cde279f3c43516f56b800e56bdb390f8083aa9beed1a78f97eb84de3f3809ff2"
Nov 23 06:47:31 crc kubenswrapper[4681]: E1123 06:47:31.071052 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cde279f3c43516f56b800e56bdb390f8083aa9beed1a78f97eb84de3f3809ff2\": container with ID starting with cde279f3c43516f56b800e56bdb390f8083aa9beed1a78f97eb84de3f3809ff2 not found: ID does not exist" containerID="cde279f3c43516f56b800e56bdb390f8083aa9beed1a78f97eb84de3f3809ff2"
Nov 23 06:47:31 crc kubenswrapper[4681]: I1123 06:47:31.071078 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cde279f3c43516f56b800e56bdb390f8083aa9beed1a78f97eb84de3f3809ff2"} err="failed to get container status \"cde279f3c43516f56b800e56bdb390f8083aa9beed1a78f97eb84de3f3809ff2\": rpc error: code = NotFound desc = could not find container \"cde279f3c43516f56b800e56bdb390f8083aa9beed1a78f97eb84de3f3809ff2\": container with ID starting with cde279f3c43516f56b800e56bdb390f8083aa9beed1a78f97eb84de3f3809ff2 not found: ID does not exist"
Nov 23 06:47:31 crc kubenswrapper[4681]: I1123 06:47:31.148406 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/233c1d06-f0dd-46c4-8b90-e213255bf126-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 06:47:31 crc kubenswrapper[4681]: I1123 06:47:31.148450 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/233c1d06-f0dd-46c4-8b90-e213255bf126-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 06:47:31 crc kubenswrapper[4681]: I1123 06:47:31.148488 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrs9n\" (UniqueName: \"kubernetes.io/projected/233c1d06-f0dd-46c4-8b90-e213255bf126-kube-api-access-vrs9n\") on node \"crc\" DevicePath \"\""
Nov 23 06:47:31 crc kubenswrapper[4681]: I1123 06:47:31.364172 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bbdhw"]
Nov 23 06:47:31 crc kubenswrapper[4681]: I1123 06:47:31.367230 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bbdhw"]
Nov 23 06:47:32 crc kubenswrapper[4681]: I1123 06:47:32.028901 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xzkhc" event={"ID":"bd9b9442-5d36-4b7c-bc39-d403156b0c66","Type":"ContainerStarted","Data":"00fccf799cb6b2ecbdaa5cfaa2dd5e119ab07196ab5275d921129ef00959a6fe"}
Nov 23 06:47:33 crc kubenswrapper[4681]: I1123 06:47:33.035543 4681 generic.go:334] "Generic (PLEG): container finished" podID="bd9b9442-5d36-4b7c-bc39-d403156b0c66" containerID="00fccf799cb6b2ecbdaa5cfaa2dd5e119ab07196ab5275d921129ef00959a6fe" exitCode=0
Nov 23 06:47:33 crc kubenswrapper[4681]: I1123 06:47:33.035799 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xzkhc" event={"ID":"bd9b9442-5d36-4b7c-bc39-d403156b0c66","Type":"ContainerDied","Data":"00fccf799cb6b2ecbdaa5cfaa2dd5e119ab07196ab5275d921129ef00959a6fe"}
Nov 23 06:47:33 crc kubenswrapper[4681]: I1123 06:47:33.087052 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j2m4k"]
Nov 23 06:47:33 crc kubenswrapper[4681]: I1123 06:47:33.087553 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j2m4k" podUID="8057fec0-6964-4c2a-9c64-79373dd7eb06" containerName="registry-server" containerID="cri-o://40f4aa3a2b173e80d0539d1ba75c3f6d80ed5c48cfe623889f32a0029432fa75" gracePeriod=2
Nov 23 06:47:33 crc kubenswrapper[4681]: I1123 06:47:33.259614 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="233c1d06-f0dd-46c4-8b90-e213255bf126" path="/var/lib/kubelet/pods/233c1d06-f0dd-46c4-8b90-e213255bf126/volumes"
Nov 23 06:47:33 crc kubenswrapper[4681]: I1123 06:47:33.398893 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j2m4k"
Nov 23 06:47:33 crc kubenswrapper[4681]: I1123 06:47:33.475774 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8057fec0-6964-4c2a-9c64-79373dd7eb06-utilities\") pod \"8057fec0-6964-4c2a-9c64-79373dd7eb06\" (UID: \"8057fec0-6964-4c2a-9c64-79373dd7eb06\") "
Nov 23 06:47:33 crc kubenswrapper[4681]: I1123 06:47:33.475846 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8057fec0-6964-4c2a-9c64-79373dd7eb06-catalog-content\") pod \"8057fec0-6964-4c2a-9c64-79373dd7eb06\" (UID: \"8057fec0-6964-4c2a-9c64-79373dd7eb06\") "
Nov 23 06:47:33 crc kubenswrapper[4681]: I1123 06:47:33.475911 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdvx6\" (UniqueName: \"kubernetes.io/projected/8057fec0-6964-4c2a-9c64-79373dd7eb06-kube-api-access-jdvx6\") pod \"8057fec0-6964-4c2a-9c64-79373dd7eb06\" (UID: \"8057fec0-6964-4c2a-9c64-79373dd7eb06\") "
Nov 23 06:47:33 crc kubenswrapper[4681]: I1123 06:47:33.476589 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8057fec0-6964-4c2a-9c64-79373dd7eb06-utilities" (OuterVolumeSpecName: "utilities") pod "8057fec0-6964-4c2a-9c64-79373dd7eb06" (UID: "8057fec0-6964-4c2a-9c64-79373dd7eb06"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 06:47:33 crc kubenswrapper[4681]: I1123 06:47:33.476868 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8057fec0-6964-4c2a-9c64-79373dd7eb06-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 06:47:33 crc kubenswrapper[4681]: I1123 06:47:33.479616 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8057fec0-6964-4c2a-9c64-79373dd7eb06-kube-api-access-jdvx6" (OuterVolumeSpecName: "kube-api-access-jdvx6") pod "8057fec0-6964-4c2a-9c64-79373dd7eb06" (UID: "8057fec0-6964-4c2a-9c64-79373dd7eb06"). InnerVolumeSpecName "kube-api-access-jdvx6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:47:33 crc kubenswrapper[4681]: I1123 06:47:33.541392 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8057fec0-6964-4c2a-9c64-79373dd7eb06-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8057fec0-6964-4c2a-9c64-79373dd7eb06" (UID: "8057fec0-6964-4c2a-9c64-79373dd7eb06"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 06:47:33 crc kubenswrapper[4681]: I1123 06:47:33.579075 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8057fec0-6964-4c2a-9c64-79373dd7eb06-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 06:47:33 crc kubenswrapper[4681]: I1123 06:47:33.579130 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdvx6\" (UniqueName: \"kubernetes.io/projected/8057fec0-6964-4c2a-9c64-79373dd7eb06-kube-api-access-jdvx6\") on node \"crc\" DevicePath \"\""
Nov 23 06:47:34 crc kubenswrapper[4681]: I1123 06:47:34.042872 4681 generic.go:334] "Generic (PLEG): container finished" podID="8057fec0-6964-4c2a-9c64-79373dd7eb06" containerID="40f4aa3a2b173e80d0539d1ba75c3f6d80ed5c48cfe623889f32a0029432fa75" exitCode=0
Nov 23 06:47:34 crc kubenswrapper[4681]: I1123 06:47:34.042961 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2m4k" event={"ID":"8057fec0-6964-4c2a-9c64-79373dd7eb06","Type":"ContainerDied","Data":"40f4aa3a2b173e80d0539d1ba75c3f6d80ed5c48cfe623889f32a0029432fa75"}
Nov 23 06:47:34 crc kubenswrapper[4681]: I1123 06:47:34.043024 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j2m4k"
Nov 23 06:47:34 crc kubenswrapper[4681]: I1123 06:47:34.043280 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2m4k" event={"ID":"8057fec0-6964-4c2a-9c64-79373dd7eb06","Type":"ContainerDied","Data":"9d50829e55e6485e71edb59c693bd5fae63d32181e56f8e72e552fd32c912530"}
Nov 23 06:47:34 crc kubenswrapper[4681]: I1123 06:47:34.043307 4681 scope.go:117] "RemoveContainer" containerID="40f4aa3a2b173e80d0539d1ba75c3f6d80ed5c48cfe623889f32a0029432fa75"
Nov 23 06:47:34 crc kubenswrapper[4681]: I1123 06:47:34.045139 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xzkhc" event={"ID":"bd9b9442-5d36-4b7c-bc39-d403156b0c66","Type":"ContainerStarted","Data":"51ee8e619108dc75eb5d3e3e1a9ce0a54cbb90379cd74ab08ffba7ef27c52bef"}
Nov 23 06:47:34 crc kubenswrapper[4681]: I1123 06:47:34.059960 4681 scope.go:117] "RemoveContainer" containerID="52f30520944f8f50e439530f3c04a4c53fa6c82c8c38e9e2420e50473467782f"
Nov 23 06:47:34 crc kubenswrapper[4681]: I1123 06:47:34.062373 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xzkhc" podStartSLOduration=3.886623979 podStartE2EDuration="49.062347674s" podCreationTimestamp="2025-11-23 06:46:45 +0000 UTC" firstStartedPulling="2025-11-23 06:46:48.442491226 +0000 UTC m=+145.512000463" lastFinishedPulling="2025-11-23 06:47:33.61821492 +0000 UTC m=+190.687724158" observedRunningTime="2025-11-23 06:47:34.058440143 +0000 UTC m=+191.127949381" watchObservedRunningTime="2025-11-23 06:47:34.062347674 +0000 UTC m=+191.131856910"
Nov 23 06:47:34 crc kubenswrapper[4681]: I1123 06:47:34.074924 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j2m4k"]
Nov 23 06:47:34 crc kubenswrapper[4681]: I1123 06:47:34.076878 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j2m4k"]
Nov 23 06:47:34 crc kubenswrapper[4681]: I1123 06:47:34.088659 4681 scope.go:117] "RemoveContainer" containerID="152b76de0c393c0b87e06563e90289ca3782cf664ea218e13448e68bfe8d8433"
Nov 23 06:47:34 crc kubenswrapper[4681]: I1123 06:47:34.101682 4681 scope.go:117] "RemoveContainer" containerID="40f4aa3a2b173e80d0539d1ba75c3f6d80ed5c48cfe623889f32a0029432fa75"
Nov 23 06:47:34 crc kubenswrapper[4681]: E1123 06:47:34.102042 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40f4aa3a2b173e80d0539d1ba75c3f6d80ed5c48cfe623889f32a0029432fa75\": container with ID starting with 40f4aa3a2b173e80d0539d1ba75c3f6d80ed5c48cfe623889f32a0029432fa75 not found: ID does not exist" containerID="40f4aa3a2b173e80d0539d1ba75c3f6d80ed5c48cfe623889f32a0029432fa75"
Nov 23 06:47:34 crc kubenswrapper[4681]: I1123 06:47:34.102080 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40f4aa3a2b173e80d0539d1ba75c3f6d80ed5c48cfe623889f32a0029432fa75"} err="failed to get container status \"40f4aa3a2b173e80d0539d1ba75c3f6d80ed5c48cfe623889f32a0029432fa75\": rpc error: code = NotFound desc = could not find container \"40f4aa3a2b173e80d0539d1ba75c3f6d80ed5c48cfe623889f32a0029432fa75\": container with ID starting with 40f4aa3a2b173e80d0539d1ba75c3f6d80ed5c48cfe623889f32a0029432fa75 not found: ID does not exist"
Nov 23 06:47:34 crc kubenswrapper[4681]: I1123 06:47:34.102108 4681 scope.go:117] "RemoveContainer" containerID="52f30520944f8f50e439530f3c04a4c53fa6c82c8c38e9e2420e50473467782f"
Nov 23 06:47:34 crc kubenswrapper[4681]: E1123 06:47:34.102502 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52f30520944f8f50e439530f3c04a4c53fa6c82c8c38e9e2420e50473467782f\": container with ID starting with 52f30520944f8f50e439530f3c04a4c53fa6c82c8c38e9e2420e50473467782f not found: ID does not exist" containerID="52f30520944f8f50e439530f3c04a4c53fa6c82c8c38e9e2420e50473467782f"
Nov 23 06:47:34 crc kubenswrapper[4681]: I1123 06:47:34.102533 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52f30520944f8f50e439530f3c04a4c53fa6c82c8c38e9e2420e50473467782f"} err="failed to get container status \"52f30520944f8f50e439530f3c04a4c53fa6c82c8c38e9e2420e50473467782f\": rpc error: code = NotFound desc = could not find container \"52f30520944f8f50e439530f3c04a4c53fa6c82c8c38e9e2420e50473467782f\": container with ID starting with 52f30520944f8f50e439530f3c04a4c53fa6c82c8c38e9e2420e50473467782f not found: ID does not exist"
Nov 23 06:47:34 crc kubenswrapper[4681]: I1123 06:47:34.102556 4681 scope.go:117] "RemoveContainer" containerID="152b76de0c393c0b87e06563e90289ca3782cf664ea218e13448e68bfe8d8433"
Nov 23 06:47:34 crc kubenswrapper[4681]: E1123 06:47:34.102962 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"152b76de0c393c0b87e06563e90289ca3782cf664ea218e13448e68bfe8d8433\": container with ID starting with 152b76de0c393c0b87e06563e90289ca3782cf664ea218e13448e68bfe8d8433 not found: ID does not exist" containerID="152b76de0c393c0b87e06563e90289ca3782cf664ea218e13448e68bfe8d8433"
Nov 23 06:47:34 crc kubenswrapper[4681]: I1123 06:47:34.103015 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"152b76de0c393c0b87e06563e90289ca3782cf664ea218e13448e68bfe8d8433"} err="failed to get container status \"152b76de0c393c0b87e06563e90289ca3782cf664ea218e13448e68bfe8d8433\": rpc error: code = NotFound desc = could not find container \"152b76de0c393c0b87e06563e90289ca3782cf664ea218e13448e68bfe8d8433\": container with ID starting with 152b76de0c393c0b87e06563e90289ca3782cf664ea218e13448e68bfe8d8433 not found: ID does not exist"
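Note: the recurring three-step pattern above — "RemoveContainer", then an E-level "ContainerStatus from runtime service failed", then "DeleteContainer returned error" — is a benign race rather than a real failure: by the time the kubelet's cleanup path re-queries the runtime, CRI-O has already removed the container (along with its pod sandbox), so the NotFound answer means the work is already done. A minimal sketch of that idempotent-delete shape (illustrative only; runtime and NotFoundError are stand-ins, not kubelet's actual API):

    class NotFoundError(Exception):
        """Raised by the (hypothetical) runtime client for an unknown ID."""

    def remove_container(runtime, container_id):
        # Query status first, mirroring the call order visible in the log.
        try:
            runtime.container_status(container_id)
        except NotFoundError:
            # Already gone (removed with its sandbox): treat delete as done.
            return
        runtime.remove_container(container_id)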
\"152b76de0c393c0b87e06563e90289ca3782cf664ea218e13448e68bfe8d8433\": container with ID starting with 152b76de0c393c0b87e06563e90289ca3782cf664ea218e13448e68bfe8d8433 not found: ID does not exist" Nov 23 06:47:35 crc kubenswrapper[4681]: I1123 06:47:35.257451 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8057fec0-6964-4c2a-9c64-79373dd7eb06" path="/var/lib/kubelet/pods/8057fec0-6964-4c2a-9c64-79373dd7eb06/volumes" Nov 23 06:47:35 crc kubenswrapper[4681]: I1123 06:47:35.886094 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xzkhc" Nov 23 06:47:35 crc kubenswrapper[4681]: I1123 06:47:35.886408 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xzkhc" Nov 23 06:47:35 crc kubenswrapper[4681]: I1123 06:47:35.919216 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xzkhc" Nov 23 06:47:37 crc kubenswrapper[4681]: I1123 06:47:37.354476 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fmqjr" Nov 23 06:47:37 crc kubenswrapper[4681]: I1123 06:47:37.354544 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fmqjr" Nov 23 06:47:37 crc kubenswrapper[4681]: I1123 06:47:37.385232 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fmqjr" Nov 23 06:47:38 crc kubenswrapper[4681]: I1123 06:47:38.093022 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fmqjr" Nov 23 06:47:42 crc kubenswrapper[4681]: I1123 06:47:42.295998 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 06:47:42 crc kubenswrapper[4681]: I1123 06:47:42.296258 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 06:47:45 crc kubenswrapper[4681]: I1123 06:47:45.914968 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xzkhc" Nov 23 06:47:46 crc kubenswrapper[4681]: I1123 06:47:46.422743 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xzkhc"] Nov 23 06:47:46 crc kubenswrapper[4681]: I1123 06:47:46.423302 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xzkhc" podUID="bd9b9442-5d36-4b7c-bc39-d403156b0c66" containerName="registry-server" containerID="cri-o://51ee8e619108dc75eb5d3e3e1a9ce0a54cbb90379cd74ab08ffba7ef27c52bef" gracePeriod=2 Nov 23 06:47:46 crc kubenswrapper[4681]: I1123 06:47:46.706317 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xzkhc" Nov 23 06:47:46 crc kubenswrapper[4681]: I1123 06:47:46.721195 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd9b9442-5d36-4b7c-bc39-d403156b0c66-utilities\") pod \"bd9b9442-5d36-4b7c-bc39-d403156b0c66\" (UID: \"bd9b9442-5d36-4b7c-bc39-d403156b0c66\") " Nov 23 06:47:46 crc kubenswrapper[4681]: I1123 06:47:46.721337 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdcfs\" (UniqueName: \"kubernetes.io/projected/bd9b9442-5d36-4b7c-bc39-d403156b0c66-kube-api-access-hdcfs\") pod \"bd9b9442-5d36-4b7c-bc39-d403156b0c66\" (UID: \"bd9b9442-5d36-4b7c-bc39-d403156b0c66\") " Nov 23 06:47:46 crc kubenswrapper[4681]: I1123 06:47:46.721450 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd9b9442-5d36-4b7c-bc39-d403156b0c66-catalog-content\") pod \"bd9b9442-5d36-4b7c-bc39-d403156b0c66\" (UID: \"bd9b9442-5d36-4b7c-bc39-d403156b0c66\") " Nov 23 06:47:46 crc kubenswrapper[4681]: I1123 06:47:46.723174 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd9b9442-5d36-4b7c-bc39-d403156b0c66-utilities" (OuterVolumeSpecName: "utilities") pod "bd9b9442-5d36-4b7c-bc39-d403156b0c66" (UID: "bd9b9442-5d36-4b7c-bc39-d403156b0c66"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:47:46 crc kubenswrapper[4681]: I1123 06:47:46.729214 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd9b9442-5d36-4b7c-bc39-d403156b0c66-kube-api-access-hdcfs" (OuterVolumeSpecName: "kube-api-access-hdcfs") pod "bd9b9442-5d36-4b7c-bc39-d403156b0c66" (UID: "bd9b9442-5d36-4b7c-bc39-d403156b0c66"). InnerVolumeSpecName "kube-api-access-hdcfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:47:46 crc kubenswrapper[4681]: I1123 06:47:46.763592 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd9b9442-5d36-4b7c-bc39-d403156b0c66-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd9b9442-5d36-4b7c-bc39-d403156b0c66" (UID: "bd9b9442-5d36-4b7c-bc39-d403156b0c66"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:47:46 crc kubenswrapper[4681]: I1123 06:47:46.822756 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdcfs\" (UniqueName: \"kubernetes.io/projected/bd9b9442-5d36-4b7c-bc39-d403156b0c66-kube-api-access-hdcfs\") on node \"crc\" DevicePath \"\"" Nov 23 06:47:46 crc kubenswrapper[4681]: I1123 06:47:46.822783 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd9b9442-5d36-4b7c-bc39-d403156b0c66-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 06:47:46 crc kubenswrapper[4681]: I1123 06:47:46.822793 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd9b9442-5d36-4b7c-bc39-d403156b0c66-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 06:47:47 crc kubenswrapper[4681]: I1123 06:47:47.117136 4681 generic.go:334] "Generic (PLEG): container finished" podID="bd9b9442-5d36-4b7c-bc39-d403156b0c66" containerID="51ee8e619108dc75eb5d3e3e1a9ce0a54cbb90379cd74ab08ffba7ef27c52bef" exitCode=0 Nov 23 06:47:47 crc kubenswrapper[4681]: I1123 06:47:47.117344 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xzkhc" event={"ID":"bd9b9442-5d36-4b7c-bc39-d403156b0c66","Type":"ContainerDied","Data":"51ee8e619108dc75eb5d3e3e1a9ce0a54cbb90379cd74ab08ffba7ef27c52bef"} Nov 23 06:47:47 crc kubenswrapper[4681]: I1123 06:47:47.117419 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xzkhc" Nov 23 06:47:47 crc kubenswrapper[4681]: I1123 06:47:47.117501 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xzkhc" event={"ID":"bd9b9442-5d36-4b7c-bc39-d403156b0c66","Type":"ContainerDied","Data":"a69ca95cbea9e4131c524bc1c29e2399137d76b60c43078fa4d187d127a55ca7"} Nov 23 06:47:47 crc kubenswrapper[4681]: I1123 06:47:47.117526 4681 scope.go:117] "RemoveContainer" containerID="51ee8e619108dc75eb5d3e3e1a9ce0a54cbb90379cd74ab08ffba7ef27c52bef" Nov 23 06:47:47 crc kubenswrapper[4681]: I1123 06:47:47.131632 4681 scope.go:117] "RemoveContainer" containerID="00fccf799cb6b2ecbdaa5cfaa2dd5e119ab07196ab5275d921129ef00959a6fe" Nov 23 06:47:47 crc kubenswrapper[4681]: I1123 06:47:47.139275 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xzkhc"] Nov 23 06:47:47 crc kubenswrapper[4681]: I1123 06:47:47.149555 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xzkhc"] Nov 23 06:47:47 crc kubenswrapper[4681]: I1123 06:47:47.153396 4681 scope.go:117] "RemoveContainer" containerID="4327667e4ddb2f2f38b8dc29550195981fc5decdc672540a57c1b752b035daa5" Nov 23 06:47:47 crc kubenswrapper[4681]: I1123 06:47:47.165714 4681 scope.go:117] "RemoveContainer" containerID="51ee8e619108dc75eb5d3e3e1a9ce0a54cbb90379cd74ab08ffba7ef27c52bef" Nov 23 06:47:47 crc kubenswrapper[4681]: E1123 06:47:47.166060 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51ee8e619108dc75eb5d3e3e1a9ce0a54cbb90379cd74ab08ffba7ef27c52bef\": container with ID starting with 51ee8e619108dc75eb5d3e3e1a9ce0a54cbb90379cd74ab08ffba7ef27c52bef not found: ID does not exist" containerID="51ee8e619108dc75eb5d3e3e1a9ce0a54cbb90379cd74ab08ffba7ef27c52bef" Nov 23 06:47:47 crc kubenswrapper[4681]: I1123 06:47:47.166106 
4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51ee8e619108dc75eb5d3e3e1a9ce0a54cbb90379cd74ab08ffba7ef27c52bef"} err="failed to get container status \"51ee8e619108dc75eb5d3e3e1a9ce0a54cbb90379cd74ab08ffba7ef27c52bef\": rpc error: code = NotFound desc = could not find container \"51ee8e619108dc75eb5d3e3e1a9ce0a54cbb90379cd74ab08ffba7ef27c52bef\": container with ID starting with 51ee8e619108dc75eb5d3e3e1a9ce0a54cbb90379cd74ab08ffba7ef27c52bef not found: ID does not exist" Nov 23 06:47:47 crc kubenswrapper[4681]: I1123 06:47:47.166141 4681 scope.go:117] "RemoveContainer" containerID="00fccf799cb6b2ecbdaa5cfaa2dd5e119ab07196ab5275d921129ef00959a6fe" Nov 23 06:47:47 crc kubenswrapper[4681]: E1123 06:47:47.166481 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00fccf799cb6b2ecbdaa5cfaa2dd5e119ab07196ab5275d921129ef00959a6fe\": container with ID starting with 00fccf799cb6b2ecbdaa5cfaa2dd5e119ab07196ab5275d921129ef00959a6fe not found: ID does not exist" containerID="00fccf799cb6b2ecbdaa5cfaa2dd5e119ab07196ab5275d921129ef00959a6fe" Nov 23 06:47:47 crc kubenswrapper[4681]: I1123 06:47:47.166516 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00fccf799cb6b2ecbdaa5cfaa2dd5e119ab07196ab5275d921129ef00959a6fe"} err="failed to get container status \"00fccf799cb6b2ecbdaa5cfaa2dd5e119ab07196ab5275d921129ef00959a6fe\": rpc error: code = NotFound desc = could not find container \"00fccf799cb6b2ecbdaa5cfaa2dd5e119ab07196ab5275d921129ef00959a6fe\": container with ID starting with 00fccf799cb6b2ecbdaa5cfaa2dd5e119ab07196ab5275d921129ef00959a6fe not found: ID does not exist" Nov 23 06:47:47 crc kubenswrapper[4681]: I1123 06:47:47.166544 4681 scope.go:117] "RemoveContainer" containerID="4327667e4ddb2f2f38b8dc29550195981fc5decdc672540a57c1b752b035daa5" Nov 23 06:47:47 crc kubenswrapper[4681]: E1123 06:47:47.166915 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4327667e4ddb2f2f38b8dc29550195981fc5decdc672540a57c1b752b035daa5\": container with ID starting with 4327667e4ddb2f2f38b8dc29550195981fc5decdc672540a57c1b752b035daa5 not found: ID does not exist" containerID="4327667e4ddb2f2f38b8dc29550195981fc5decdc672540a57c1b752b035daa5" Nov 23 06:47:47 crc kubenswrapper[4681]: I1123 06:47:47.166965 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4327667e4ddb2f2f38b8dc29550195981fc5decdc672540a57c1b752b035daa5"} err="failed to get container status \"4327667e4ddb2f2f38b8dc29550195981fc5decdc672540a57c1b752b035daa5\": rpc error: code = NotFound desc = could not find container \"4327667e4ddb2f2f38b8dc29550195981fc5decdc672540a57c1b752b035daa5\": container with ID starting with 4327667e4ddb2f2f38b8dc29550195981fc5decdc672540a57c1b752b035daa5 not found: ID does not exist" Nov 23 06:47:47 crc kubenswrapper[4681]: I1123 06:47:47.257143 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd9b9442-5d36-4b7c-bc39-d403156b0c66" path="/var/lib/kubelet/pods/bd9b9442-5d36-4b7c-bc39-d403156b0c66/volumes" Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.270626 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" podUID="01287236-92c0-4946-918f-bd641d4d5435" containerName="oauth-openshift" 
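Note: every pod lifecycle transition in this log surfaces twice — once as a generic.go:334 "container finished" record carrying the exit code, and once as a kubelet.go:2453 "SyncLoop (PLEG): event for pod" record whose event={...} payload is plain JSON (pod UID in "ID", "Type" of ContainerStarted/ContainerDied, container or sandbox ID in "Data"). That makes the events easy to extract mechanically from a saved copy of this journal; a small illustrative Python sketch, with the regex written against the exact format visible above:

    import json, re

    PLEG = re.compile(r'"SyncLoop \(PLEG\): event for pod" pod="([^"]+)" event=(\{.*?\})')

    def pleg_events(lines):
        # Yields (namespace/name, event-dict) for each PLEG record.
        for line in lines:
            m = PLEG.search(line)
            if m:
                yield m.group(1), json.loads(m.group(2))

    # Example (filename is hypothetical):
    # with open("kubelet.log") as f:
    #     for pod, ev in pleg_events(f):
    #         if ev["Type"] == "ContainerDied":
    #             print(pod, ev["Data"][:12])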
containerID="cri-o://02359d2b646f99870bbc17f25464f290575aacacc0b4ee3c5f21b1e99192a79c" gracePeriod=15 Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.556710 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.709340 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-user-template-login\") pod \"01287236-92c0-4946-918f-bd641d4d5435\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.709406 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01287236-92c0-4946-918f-bd641d4d5435-audit-dir\") pod \"01287236-92c0-4946-918f-bd641d4d5435\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.709444 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-router-certs\") pod \"01287236-92c0-4946-918f-bd641d4d5435\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.709479 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-cliconfig\") pod \"01287236-92c0-4946-918f-bd641d4d5435\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.709498 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01287236-92c0-4946-918f-bd641d4d5435-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "01287236-92c0-4946-918f-bd641d4d5435" (UID: "01287236-92c0-4946-918f-bd641d4d5435"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.709523 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-trusted-ca-bundle\") pod \"01287236-92c0-4946-918f-bd641d4d5435\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.709546 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-user-template-provider-selection\") pod \"01287236-92c0-4946-918f-bd641d4d5435\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.709614 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01287236-92c0-4946-918f-bd641d4d5435-audit-policies\") pod \"01287236-92c0-4946-918f-bd641d4d5435\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.709659 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-user-idp-0-file-data\") pod \"01287236-92c0-4946-918f-bd641d4d5435\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.709685 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-service-ca\") pod \"01287236-92c0-4946-918f-bd641d4d5435\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.709747 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-session\") pod \"01287236-92c0-4946-918f-bd641d4d5435\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.709773 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvg62\" (UniqueName: \"kubernetes.io/projected/01287236-92c0-4946-918f-bd641d4d5435-kube-api-access-kvg62\") pod \"01287236-92c0-4946-918f-bd641d4d5435\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.709805 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-user-template-error\") pod \"01287236-92c0-4946-918f-bd641d4d5435\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.709823 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-serving-cert\") pod \"01287236-92c0-4946-918f-bd641d4d5435\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 
06:47:54.709840 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-ocp-branding-template\") pod \"01287236-92c0-4946-918f-bd641d4d5435\" (UID: \"01287236-92c0-4946-918f-bd641d4d5435\") " Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.710038 4681 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01287236-92c0-4946-918f-bd641d4d5435-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.710278 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "01287236-92c0-4946-918f-bd641d4d5435" (UID: "01287236-92c0-4946-918f-bd641d4d5435"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.710258 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "01287236-92c0-4946-918f-bd641d4d5435" (UID: "01287236-92c0-4946-918f-bd641d4d5435"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.710528 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01287236-92c0-4946-918f-bd641d4d5435-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "01287236-92c0-4946-918f-bd641d4d5435" (UID: "01287236-92c0-4946-918f-bd641d4d5435"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.716213 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01287236-92c0-4946-918f-bd641d4d5435-kube-api-access-kvg62" (OuterVolumeSpecName: "kube-api-access-kvg62") pod "01287236-92c0-4946-918f-bd641d4d5435" (UID: "01287236-92c0-4946-918f-bd641d4d5435"). InnerVolumeSpecName "kube-api-access-kvg62". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.716224 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "01287236-92c0-4946-918f-bd641d4d5435" (UID: "01287236-92c0-4946-918f-bd641d4d5435"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.716647 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "01287236-92c0-4946-918f-bd641d4d5435" (UID: "01287236-92c0-4946-918f-bd641d4d5435"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.717364 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "01287236-92c0-4946-918f-bd641d4d5435" (UID: "01287236-92c0-4946-918f-bd641d4d5435"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.717690 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "01287236-92c0-4946-918f-bd641d4d5435" (UID: "01287236-92c0-4946-918f-bd641d4d5435"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.717884 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "01287236-92c0-4946-918f-bd641d4d5435" (UID: "01287236-92c0-4946-918f-bd641d4d5435"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.718081 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "01287236-92c0-4946-918f-bd641d4d5435" (UID: "01287236-92c0-4946-918f-bd641d4d5435"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.718423 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "01287236-92c0-4946-918f-bd641d4d5435" (UID: "01287236-92c0-4946-918f-bd641d4d5435"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.718676 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "01287236-92c0-4946-918f-bd641d4d5435" (UID: "01287236-92c0-4946-918f-bd641d4d5435"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.718857 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "01287236-92c0-4946-918f-bd641d4d5435" (UID: "01287236-92c0-4946-918f-bd641d4d5435"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.811244 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.811282 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.811292 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.811302 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.811313 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.811325 4681 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01287236-92c0-4946-918f-bd641d4d5435-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.811335 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.811345 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.811355 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.811364 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvg62\" (UniqueName: \"kubernetes.io/projected/01287236-92c0-4946-918f-bd641d4d5435-kube-api-access-kvg62\") on node \"crc\" DevicePath \"\"" Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.811373 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.811384 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:47:54 crc kubenswrapper[4681]: I1123 06:47:54.811392 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01287236-92c0-4946-918f-bd641d4d5435-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.151764 4681 generic.go:334] "Generic (PLEG): container finished" podID="01287236-92c0-4946-918f-bd641d4d5435" containerID="02359d2b646f99870bbc17f25464f290575aacacc0b4ee3c5f21b1e99192a79c" exitCode=0 Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.151815 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" event={"ID":"01287236-92c0-4946-918f-bd641d4d5435","Type":"ContainerDied","Data":"02359d2b646f99870bbc17f25464f290575aacacc0b4ee3c5f21b1e99192a79c"} Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.152063 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" event={"ID":"01287236-92c0-4946-918f-bd641d4d5435","Type":"ContainerDied","Data":"c2ac7456350fa68a846a796529a4c0fce002a58b1b3ac2565390e78cb891ae5f"} Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.151846 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-cq2gd" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.152082 4681 scope.go:117] "RemoveContainer" containerID="02359d2b646f99870bbc17f25464f290575aacacc0b4ee3c5f21b1e99192a79c" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.167275 4681 scope.go:117] "RemoveContainer" containerID="02359d2b646f99870bbc17f25464f290575aacacc0b4ee3c5f21b1e99192a79c" Nov 23 06:47:55 crc kubenswrapper[4681]: E1123 06:47:55.167827 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02359d2b646f99870bbc17f25464f290575aacacc0b4ee3c5f21b1e99192a79c\": container with ID starting with 02359d2b646f99870bbc17f25464f290575aacacc0b4ee3c5f21b1e99192a79c not found: ID does not exist" containerID="02359d2b646f99870bbc17f25464f290575aacacc0b4ee3c5f21b1e99192a79c" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.167998 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02359d2b646f99870bbc17f25464f290575aacacc0b4ee3c5f21b1e99192a79c"} err="failed to get container status \"02359d2b646f99870bbc17f25464f290575aacacc0b4ee3c5f21b1e99192a79c\": rpc error: code = NotFound desc = could not find container \"02359d2b646f99870bbc17f25464f290575aacacc0b4ee3c5f21b1e99192a79c\": container with ID starting with 02359d2b646f99870bbc17f25464f290575aacacc0b4ee3c5f21b1e99192a79c not found: ID does not exist" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.174224 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cq2gd"] Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.176409 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cq2gd"] Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.256798 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01287236-92c0-4946-918f-bd641d4d5435" 
path="/var/lib/kubelet/pods/01287236-92c0-4946-918f-bd641d4d5435/volumes" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.574554 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss"] Nov 23 06:47:55 crc kubenswrapper[4681]: E1123 06:47:55.574836 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01287236-92c0-4946-918f-bd641d4d5435" containerName="oauth-openshift" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.574850 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="01287236-92c0-4946-918f-bd641d4d5435" containerName="oauth-openshift" Nov 23 06:47:55 crc kubenswrapper[4681]: E1123 06:47:55.574863 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="233c1d06-f0dd-46c4-8b90-e213255bf126" containerName="extract-utilities" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.574869 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="233c1d06-f0dd-46c4-8b90-e213255bf126" containerName="extract-utilities" Nov 23 06:47:55 crc kubenswrapper[4681]: E1123 06:47:55.574878 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd9b9442-5d36-4b7c-bc39-d403156b0c66" containerName="registry-server" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.574884 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd9b9442-5d36-4b7c-bc39-d403156b0c66" containerName="registry-server" Nov 23 06:47:55 crc kubenswrapper[4681]: E1123 06:47:55.574897 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="233c1d06-f0dd-46c4-8b90-e213255bf126" containerName="extract-content" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.574902 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="233c1d06-f0dd-46c4-8b90-e213255bf126" containerName="extract-content" Nov 23 06:47:55 crc kubenswrapper[4681]: E1123 06:47:55.574908 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb481cb-7b55-4540-9e64-44a893c3d3f7" containerName="extract-content" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.574914 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb481cb-7b55-4540-9e64-44a893c3d3f7" containerName="extract-content" Nov 23 06:47:55 crc kubenswrapper[4681]: E1123 06:47:55.574920 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd9b9442-5d36-4b7c-bc39-d403156b0c66" containerName="extract-content" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.574932 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd9b9442-5d36-4b7c-bc39-d403156b0c66" containerName="extract-content" Nov 23 06:47:55 crc kubenswrapper[4681]: E1123 06:47:55.574941 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb481cb-7b55-4540-9e64-44a893c3d3f7" containerName="registry-server" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.574948 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb481cb-7b55-4540-9e64-44a893c3d3f7" containerName="registry-server" Nov 23 06:47:55 crc kubenswrapper[4681]: E1123 06:47:55.574955 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8057fec0-6964-4c2a-9c64-79373dd7eb06" containerName="extract-content" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.574960 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="8057fec0-6964-4c2a-9c64-79373dd7eb06" containerName="extract-content" Nov 23 06:47:55 crc kubenswrapper[4681]: E1123 06:47:55.574972 4681 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="bcb481cb-7b55-4540-9e64-44a893c3d3f7" containerName="extract-utilities" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.574978 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb481cb-7b55-4540-9e64-44a893c3d3f7" containerName="extract-utilities" Nov 23 06:47:55 crc kubenswrapper[4681]: E1123 06:47:55.574985 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce8f467b-ba3e-4db8-9b80-392e2d0ef58f" containerName="pruner" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.574990 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce8f467b-ba3e-4db8-9b80-392e2d0ef58f" containerName="pruner" Nov 23 06:47:55 crc kubenswrapper[4681]: E1123 06:47:55.574997 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8057fec0-6964-4c2a-9c64-79373dd7eb06" containerName="registry-server" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.575002 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="8057fec0-6964-4c2a-9c64-79373dd7eb06" containerName="registry-server" Nov 23 06:47:55 crc kubenswrapper[4681]: E1123 06:47:55.575011 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd9b9442-5d36-4b7c-bc39-d403156b0c66" containerName="extract-utilities" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.575017 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd9b9442-5d36-4b7c-bc39-d403156b0c66" containerName="extract-utilities" Nov 23 06:47:55 crc kubenswrapper[4681]: E1123 06:47:55.575025 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8057fec0-6964-4c2a-9c64-79373dd7eb06" containerName="extract-utilities" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.575031 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="8057fec0-6964-4c2a-9c64-79373dd7eb06" containerName="extract-utilities" Nov 23 06:47:55 crc kubenswrapper[4681]: E1123 06:47:55.575037 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="233c1d06-f0dd-46c4-8b90-e213255bf126" containerName="registry-server" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.575043 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="233c1d06-f0dd-46c4-8b90-e213255bf126" containerName="registry-server" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.575135 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="8057fec0-6964-4c2a-9c64-79373dd7eb06" containerName="registry-server" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.575146 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce8f467b-ba3e-4db8-9b80-392e2d0ef58f" containerName="pruner" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.575155 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd9b9442-5d36-4b7c-bc39-d403156b0c66" containerName="registry-server" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.575165 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="01287236-92c0-4946-918f-bd641d4d5435" containerName="oauth-openshift" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.575173 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="233c1d06-f0dd-46c4-8b90-e213255bf126" containerName="registry-server" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.575180 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcb481cb-7b55-4540-9e64-44a893c3d3f7" containerName="registry-server" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.575623 4681 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.577568 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.577744 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.580640 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.581022 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.581343 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.582270 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.582486 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.582597 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.582684 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.583991 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.584242 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.584332 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.586561 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss"] Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.587745 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.591060 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.595134 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.620479 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/416e7577-b33c-4406-aae2-68effb4e54be-v4-0-config-system-cliconfig\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " 
pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.620551 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/416e7577-b33c-4406-aae2-68effb4e54be-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.620580 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/416e7577-b33c-4406-aae2-68effb4e54be-audit-dir\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.620599 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/416e7577-b33c-4406-aae2-68effb4e54be-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.620621 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/416e7577-b33c-4406-aae2-68effb4e54be-v4-0-config-system-serving-cert\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.620690 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/416e7577-b33c-4406-aae2-68effb4e54be-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.620731 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/416e7577-b33c-4406-aae2-68effb4e54be-v4-0-config-user-template-login\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.620764 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/416e7577-b33c-4406-aae2-68effb4e54be-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.620791 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/416e7577-b33c-4406-aae2-68effb4e54be-audit-policies\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.620872 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/416e7577-b33c-4406-aae2-68effb4e54be-v4-0-config-system-session\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.620905 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/416e7577-b33c-4406-aae2-68effb4e54be-v4-0-config-user-template-error\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.620983 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/416e7577-b33c-4406-aae2-68effb4e54be-v4-0-config-system-service-ca\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.621006 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/416e7577-b33c-4406-aae2-68effb4e54be-v4-0-config-system-router-certs\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.621034 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmp4x\" (UniqueName: \"kubernetes.io/projected/416e7577-b33c-4406-aae2-68effb4e54be-kube-api-access-jmp4x\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.721917 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/416e7577-b33c-4406-aae2-68effb4e54be-v4-0-config-user-template-login\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.721965 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/416e7577-b33c-4406-aae2-68effb4e54be-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.721990 4681 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/416e7577-b33c-4406-aae2-68effb4e54be-audit-policies\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.722021 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/416e7577-b33c-4406-aae2-68effb4e54be-v4-0-config-user-template-error\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.722040 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/416e7577-b33c-4406-aae2-68effb4e54be-v4-0-config-system-session\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.722076 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/416e7577-b33c-4406-aae2-68effb4e54be-v4-0-config-system-service-ca\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.722095 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/416e7577-b33c-4406-aae2-68effb4e54be-v4-0-config-system-router-certs\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.722117 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmp4x\" (UniqueName: \"kubernetes.io/projected/416e7577-b33c-4406-aae2-68effb4e54be-kube-api-access-jmp4x\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.722145 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/416e7577-b33c-4406-aae2-68effb4e54be-v4-0-config-system-cliconfig\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.722162 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/416e7577-b33c-4406-aae2-68effb4e54be-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.722181 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/416e7577-b33c-4406-aae2-68effb4e54be-audit-dir\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.722197 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/416e7577-b33c-4406-aae2-68effb4e54be-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.722214 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/416e7577-b33c-4406-aae2-68effb4e54be-v4-0-config-system-serving-cert\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.722237 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/416e7577-b33c-4406-aae2-68effb4e54be-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.723205 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/416e7577-b33c-4406-aae2-68effb4e54be-v4-0-config-system-service-ca\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.723203 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/416e7577-b33c-4406-aae2-68effb4e54be-audit-policies\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.723318 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/416e7577-b33c-4406-aae2-68effb4e54be-audit-dir\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.723415 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/416e7577-b33c-4406-aae2-68effb4e54be-v4-0-config-system-cliconfig\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.724477 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/416e7577-b33c-4406-aae2-68effb4e54be-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.726243 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/416e7577-b33c-4406-aae2-68effb4e54be-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.726251 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/416e7577-b33c-4406-aae2-68effb4e54be-v4-0-config-user-template-error\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.726445 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/416e7577-b33c-4406-aae2-68effb4e54be-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.727341 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/416e7577-b33c-4406-aae2-68effb4e54be-v4-0-config-system-router-certs\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.727745 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/416e7577-b33c-4406-aae2-68effb4e54be-v4-0-config-user-template-login\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.728240 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/416e7577-b33c-4406-aae2-68effb4e54be-v4-0-config-system-session\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.728307 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/416e7577-b33c-4406-aae2-68effb4e54be-v4-0-config-system-serving-cert\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.728408 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/416e7577-b33c-4406-aae2-68effb4e54be-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" 
(UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.736543 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmp4x\" (UniqueName: \"kubernetes.io/projected/416e7577-b33c-4406-aae2-68effb4e54be-kube-api-access-jmp4x\") pod \"oauth-openshift-77bfbb8d5b-pqrss\" (UID: \"416e7577-b33c-4406-aae2-68effb4e54be\") " pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:55 crc kubenswrapper[4681]: I1123 06:47:55.889157 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:56 crc kubenswrapper[4681]: I1123 06:47:56.041049 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss"] Nov 23 06:47:56 crc kubenswrapper[4681]: I1123 06:47:56.158964 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" event={"ID":"416e7577-b33c-4406-aae2-68effb4e54be","Type":"ContainerStarted","Data":"93a4a1d0703a4ad28d4aae32d4007d780e30961653fbf307d32dee0d000ffe50"} Nov 23 06:47:57 crc kubenswrapper[4681]: I1123 06:47:57.167211 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" event={"ID":"416e7577-b33c-4406-aae2-68effb4e54be","Type":"ContainerStarted","Data":"abbfd5679e29eb63581435e42c68f44cd679c2cbaf6836bf672a1319068cb470"} Nov 23 06:47:57 crc kubenswrapper[4681]: I1123 06:47:57.167701 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:57 crc kubenswrapper[4681]: I1123 06:47:57.173499 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" Nov 23 06:47:57 crc kubenswrapper[4681]: I1123 06:47:57.187041 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" podStartSLOduration=28.187015067 podStartE2EDuration="28.187015067s" podCreationTimestamp="2025-11-23 06:47:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:57.184168388 +0000 UTC m=+214.253677625" watchObservedRunningTime="2025-11-23 06:47:57.187015067 +0000 UTC m=+214.256524305" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.347680 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m56bk"] Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.348991 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m56bk" podUID="d43c43f7-de50-40d4-8910-b502d1def095" containerName="registry-server" containerID="cri-o://6fd412850fbd663191b61fe9452feb097b192741efddad3356d3ebbdcb7e1d44" gracePeriod=30 Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.353309 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-48jrc"] Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.353706 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-48jrc" podUID="b61682f3-e3c0-4fda-9c80-52f67f9ee9c9" 
containerName="registry-server" containerID="cri-o://32875ed3ae69df7080fa0fa2a95fbaf161a397ac5de4848f351f6e105bd735dc" gracePeriod=30 Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.378586 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g5zj2"] Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.378785 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-g5zj2" podUID="dae5706a-d59e-40ba-9546-7bed3f4f77aa" containerName="marketplace-operator" containerID="cri-o://b8c2fc4954ced80193ea9f97a670ae5a663f6f95d6ef9170e53f12e58a44dcdf" gracePeriod=30 Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.384203 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fmqjr"] Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.384245 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nqkpz"] Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.384485 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nqkpz" podUID="fdfd882e-f012-452f-8709-32ddb2ddb019" containerName="registry-server" containerID="cri-o://0cedf9bbd44387af7469b8e604dbfe2bc4e6bd6a59c4b509d124c0f02cf685d1" gracePeriod=30 Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.384678 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fmqjr" podUID="d106e4dc-f7ce-4270-9229-573ec5586711" containerName="registry-server" containerID="cri-o://a2cfa21ad803c9f9dd13a8f184a1cb13c346020fac281906bdaaa6a3f563c418" gracePeriod=30 Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.387475 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vcxlz"] Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.388155 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vcxlz" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.397345 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vcxlz"] Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.436170 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-fmqjr" podUID="d106e4dc-f7ce-4270-9229-573ec5586711" containerName="registry-server" probeResult="failure" output="" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.436377 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-fmqjr" podUID="d106e4dc-f7ce-4270-9229-573ec5586711" containerName="registry-server" probeResult="failure" output="" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.551662 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttjwv\" (UniqueName: \"kubernetes.io/projected/9c3dcfb8-ee4f-4793-91c5-c7cfb2e6fa75-kube-api-access-ttjwv\") pod \"marketplace-operator-79b997595-vcxlz\" (UID: \"9c3dcfb8-ee4f-4793-91c5-c7cfb2e6fa75\") " pod="openshift-marketplace/marketplace-operator-79b997595-vcxlz" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.551722 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c3dcfb8-ee4f-4793-91c5-c7cfb2e6fa75-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vcxlz\" (UID: \"9c3dcfb8-ee4f-4793-91c5-c7cfb2e6fa75\") " pod="openshift-marketplace/marketplace-operator-79b997595-vcxlz" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.551742 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9c3dcfb8-ee4f-4793-91c5-c7cfb2e6fa75-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vcxlz\" (UID: \"9c3dcfb8-ee4f-4793-91c5-c7cfb2e6fa75\") " pod="openshift-marketplace/marketplace-operator-79b997595-vcxlz" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.652271 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c3dcfb8-ee4f-4793-91c5-c7cfb2e6fa75-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vcxlz\" (UID: \"9c3dcfb8-ee4f-4793-91c5-c7cfb2e6fa75\") " pod="openshift-marketplace/marketplace-operator-79b997595-vcxlz" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.652312 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9c3dcfb8-ee4f-4793-91c5-c7cfb2e6fa75-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vcxlz\" (UID: \"9c3dcfb8-ee4f-4793-91c5-c7cfb2e6fa75\") " pod="openshift-marketplace/marketplace-operator-79b997595-vcxlz" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.652365 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttjwv\" (UniqueName: \"kubernetes.io/projected/9c3dcfb8-ee4f-4793-91c5-c7cfb2e6fa75-kube-api-access-ttjwv\") pod \"marketplace-operator-79b997595-vcxlz\" (UID: \"9c3dcfb8-ee4f-4793-91c5-c7cfb2e6fa75\") " pod="openshift-marketplace/marketplace-operator-79b997595-vcxlz" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 
06:48:07.655857 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c3dcfb8-ee4f-4793-91c5-c7cfb2e6fa75-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vcxlz\" (UID: \"9c3dcfb8-ee4f-4793-91c5-c7cfb2e6fa75\") " pod="openshift-marketplace/marketplace-operator-79b997595-vcxlz" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.664544 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9c3dcfb8-ee4f-4793-91c5-c7cfb2e6fa75-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vcxlz\" (UID: \"9c3dcfb8-ee4f-4793-91c5-c7cfb2e6fa75\") " pod="openshift-marketplace/marketplace-operator-79b997595-vcxlz" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.672398 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttjwv\" (UniqueName: \"kubernetes.io/projected/9c3dcfb8-ee4f-4793-91c5-c7cfb2e6fa75-kube-api-access-ttjwv\") pod \"marketplace-operator-79b997595-vcxlz\" (UID: \"9c3dcfb8-ee4f-4793-91c5-c7cfb2e6fa75\") " pod="openshift-marketplace/marketplace-operator-79b997595-vcxlz" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.701561 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m56bk" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.707986 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vcxlz" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.753596 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fmqjr" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.755072 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d43c43f7-de50-40d4-8910-b502d1def095-utilities\") pod \"d43c43f7-de50-40d4-8910-b502d1def095\" (UID: \"d43c43f7-de50-40d4-8910-b502d1def095\") " Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.755137 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkpb7\" (UniqueName: \"kubernetes.io/projected/d43c43f7-de50-40d4-8910-b502d1def095-kube-api-access-mkpb7\") pod \"d43c43f7-de50-40d4-8910-b502d1def095\" (UID: \"d43c43f7-de50-40d4-8910-b502d1def095\") " Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.755203 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d43c43f7-de50-40d4-8910-b502d1def095-catalog-content\") pod \"d43c43f7-de50-40d4-8910-b502d1def095\" (UID: \"d43c43f7-de50-40d4-8910-b502d1def095\") " Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.756865 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d43c43f7-de50-40d4-8910-b502d1def095-utilities" (OuterVolumeSpecName: "utilities") pod "d43c43f7-de50-40d4-8910-b502d1def095" (UID: "d43c43f7-de50-40d4-8910-b502d1def095"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.757954 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-48jrc" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.761748 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d43c43f7-de50-40d4-8910-b502d1def095-kube-api-access-mkpb7" (OuterVolumeSpecName: "kube-api-access-mkpb7") pod "d43c43f7-de50-40d4-8910-b502d1def095" (UID: "d43c43f7-de50-40d4-8910-b502d1def095"). InnerVolumeSpecName "kube-api-access-mkpb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.777985 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nqkpz" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.828035 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-g5zj2" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.836617 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d43c43f7-de50-40d4-8910-b502d1def095-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d43c43f7-de50-40d4-8910-b502d1def095" (UID: "d43c43f7-de50-40d4-8910-b502d1def095"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.860644 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdj9b\" (UniqueName: \"kubernetes.io/projected/fdfd882e-f012-452f-8709-32ddb2ddb019-kube-api-access-tdj9b\") pod \"fdfd882e-f012-452f-8709-32ddb2ddb019\" (UID: \"fdfd882e-f012-452f-8709-32ddb2ddb019\") " Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.860686 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dae5706a-d59e-40ba-9546-7bed3f4f77aa-marketplace-trusted-ca\") pod \"dae5706a-d59e-40ba-9546-7bed3f4f77aa\" (UID: \"dae5706a-d59e-40ba-9546-7bed3f4f77aa\") " Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.860703 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b61682f3-e3c0-4fda-9c80-52f67f9ee9c9-catalog-content\") pod \"b61682f3-e3c0-4fda-9c80-52f67f9ee9c9\" (UID: \"b61682f3-e3c0-4fda-9c80-52f67f9ee9c9\") " Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.860721 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b61682f3-e3c0-4fda-9c80-52f67f9ee9c9-utilities\") pod \"b61682f3-e3c0-4fda-9c80-52f67f9ee9c9\" (UID: \"b61682f3-e3c0-4fda-9c80-52f67f9ee9c9\") " Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.860749 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dae5706a-d59e-40ba-9546-7bed3f4f77aa-marketplace-operator-metrics\") pod \"dae5706a-d59e-40ba-9546-7bed3f4f77aa\" (UID: \"dae5706a-d59e-40ba-9546-7bed3f4f77aa\") " Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.860767 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d106e4dc-f7ce-4270-9229-573ec5586711-utilities\") pod \"d106e4dc-f7ce-4270-9229-573ec5586711\" (UID: \"d106e4dc-f7ce-4270-9229-573ec5586711\") " 
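The entries above and immediately below are the volume-manager half of the marketplace handover: every "operationExecutor.UnmountVolume started" line for the five deleted pods (certified-operators-m56bk, community-operators-48jrc, redhat-marketplace-fmqjr, redhat-operators-nqkpz, marketplace-operator-79b997595-g5zj2) is eventually answered by an "UnmountVolume.TearDown succeeded" line and then a "Volume detached" line. When auditing a window like this it can help to pair the first two message shapes mechanically. A minimal sketch of such a pairing pass follows; it is a hypothetical helper, not part of the kubelet, and it assumes only the escaped-quote reconciler_common format and the plain-quote operation_generator format visible in this log:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        // "UnmountVolume started" lines quote the UniqueName with escaped
        // quotes: (UniqueName: \"kubernetes.io/empty-dir/<uid>-utilities\")
        started := regexp.MustCompile(`UnmountVolume started for volume \\"[^"\\]+\\" \(UniqueName: \\"([^"\\]+)\\"\)`)
        // "UnmountVolume.TearDown succeeded" lines use plain quotes around
        // the same kubernetes.io/<plugin>/<uid>-<name> path.
        done := regexp.MustCompile(`UnmountVolume\.TearDown succeeded for volume "([^"]+)"`)

        pending := map[string]bool{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
        for sc.Scan() {
            for _, m := range started.FindAllStringSubmatch(sc.Text(), -1) {
                pending[m[1]] = true
            }
            for _, m := range done.FindAllStringSubmatch(sc.Text(), -1) {
                delete(pending, m[1])
            }
        }
        for v := range pending {
            fmt.Println("unmount started but no TearDown succeeded:", v)
        }
    }

Piped a kubelet journal such as this one (for example, journalctl -u kubelet | go run pairunmounts.go, where pairunmounts.go is the sketch above), it should print nothing for this window: all fifteen unmounts started here reach TearDown.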
Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.860796 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdfd882e-f012-452f-8709-32ddb2ddb019-utilities\") pod \"fdfd882e-f012-452f-8709-32ddb2ddb019\" (UID: \"fdfd882e-f012-452f-8709-32ddb2ddb019\") " Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.860823 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6q28s\" (UniqueName: \"kubernetes.io/projected/d106e4dc-f7ce-4270-9229-573ec5586711-kube-api-access-6q28s\") pod \"d106e4dc-f7ce-4270-9229-573ec5586711\" (UID: \"d106e4dc-f7ce-4270-9229-573ec5586711\") " Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.860845 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzxtt\" (UniqueName: \"kubernetes.io/projected/dae5706a-d59e-40ba-9546-7bed3f4f77aa-kube-api-access-tzxtt\") pod \"dae5706a-d59e-40ba-9546-7bed3f4f77aa\" (UID: \"dae5706a-d59e-40ba-9546-7bed3f4f77aa\") " Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.860864 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4svcl\" (UniqueName: \"kubernetes.io/projected/b61682f3-e3c0-4fda-9c80-52f67f9ee9c9-kube-api-access-4svcl\") pod \"b61682f3-e3c0-4fda-9c80-52f67f9ee9c9\" (UID: \"b61682f3-e3c0-4fda-9c80-52f67f9ee9c9\") " Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.860888 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdfd882e-f012-452f-8709-32ddb2ddb019-catalog-content\") pod \"fdfd882e-f012-452f-8709-32ddb2ddb019\" (UID: \"fdfd882e-f012-452f-8709-32ddb2ddb019\") " Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.860903 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d106e4dc-f7ce-4270-9229-573ec5586711-catalog-content\") pod \"d106e4dc-f7ce-4270-9229-573ec5586711\" (UID: \"d106e4dc-f7ce-4270-9229-573ec5586711\") " Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.861031 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkpb7\" (UniqueName: \"kubernetes.io/projected/d43c43f7-de50-40d4-8910-b502d1def095-kube-api-access-mkpb7\") on node \"crc\" DevicePath \"\"" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.861042 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d43c43f7-de50-40d4-8910-b502d1def095-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.861052 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d43c43f7-de50-40d4-8910-b502d1def095-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.862022 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d106e4dc-f7ce-4270-9229-573ec5586711-utilities" (OuterVolumeSpecName: "utilities") pod "d106e4dc-f7ce-4270-9229-573ec5586711" (UID: "d106e4dc-f7ce-4270-9229-573ec5586711"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.862611 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b61682f3-e3c0-4fda-9c80-52f67f9ee9c9-utilities" (OuterVolumeSpecName: "utilities") pod "b61682f3-e3c0-4fda-9c80-52f67f9ee9c9" (UID: "b61682f3-e3c0-4fda-9c80-52f67f9ee9c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.864139 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdfd882e-f012-452f-8709-32ddb2ddb019-utilities" (OuterVolumeSpecName: "utilities") pod "fdfd882e-f012-452f-8709-32ddb2ddb019" (UID: "fdfd882e-f012-452f-8709-32ddb2ddb019"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.866958 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dae5706a-d59e-40ba-9546-7bed3f4f77aa-kube-api-access-tzxtt" (OuterVolumeSpecName: "kube-api-access-tzxtt") pod "dae5706a-d59e-40ba-9546-7bed3f4f77aa" (UID: "dae5706a-d59e-40ba-9546-7bed3f4f77aa"). InnerVolumeSpecName "kube-api-access-tzxtt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.868820 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b61682f3-e3c0-4fda-9c80-52f67f9ee9c9-kube-api-access-4svcl" (OuterVolumeSpecName: "kube-api-access-4svcl") pod "b61682f3-e3c0-4fda-9c80-52f67f9ee9c9" (UID: "b61682f3-e3c0-4fda-9c80-52f67f9ee9c9"). InnerVolumeSpecName "kube-api-access-4svcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.869621 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdfd882e-f012-452f-8709-32ddb2ddb019-kube-api-access-tdj9b" (OuterVolumeSpecName: "kube-api-access-tdj9b") pod "fdfd882e-f012-452f-8709-32ddb2ddb019" (UID: "fdfd882e-f012-452f-8709-32ddb2ddb019"). InnerVolumeSpecName "kube-api-access-tdj9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.870257 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dae5706a-d59e-40ba-9546-7bed3f4f77aa-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "dae5706a-d59e-40ba-9546-7bed3f4f77aa" (UID: "dae5706a-d59e-40ba-9546-7bed3f4f77aa"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.871075 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dae5706a-d59e-40ba-9546-7bed3f4f77aa-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "dae5706a-d59e-40ba-9546-7bed3f4f77aa" (UID: "dae5706a-d59e-40ba-9546-7bed3f4f77aa"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.871387 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d106e4dc-f7ce-4270-9229-573ec5586711-kube-api-access-6q28s" (OuterVolumeSpecName: "kube-api-access-6q28s") pod "d106e4dc-f7ce-4270-9229-573ec5586711" (UID: "d106e4dc-f7ce-4270-9229-573ec5586711"). InnerVolumeSpecName "kube-api-access-6q28s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.898485 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d106e4dc-f7ce-4270-9229-573ec5586711-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d106e4dc-f7ce-4270-9229-573ec5586711" (UID: "d106e4dc-f7ce-4270-9229-573ec5586711"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.912564 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b61682f3-e3c0-4fda-9c80-52f67f9ee9c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b61682f3-e3c0-4fda-9c80-52f67f9ee9c9" (UID: "b61682f3-e3c0-4fda-9c80-52f67f9ee9c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.934198 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vcxlz"] Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.965197 4681 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dae5706a-d59e-40ba-9546-7bed3f4f77aa-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.965227 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d106e4dc-f7ce-4270-9229-573ec5586711-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.965237 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdfd882e-f012-452f-8709-32ddb2ddb019-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.965245 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6q28s\" (UniqueName: \"kubernetes.io/projected/d106e4dc-f7ce-4270-9229-573ec5586711-kube-api-access-6q28s\") on node \"crc\" DevicePath \"\"" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.965254 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzxtt\" (UniqueName: \"kubernetes.io/projected/dae5706a-d59e-40ba-9546-7bed3f4f77aa-kube-api-access-tzxtt\") on node \"crc\" DevicePath \"\"" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.965262 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4svcl\" (UniqueName: \"kubernetes.io/projected/b61682f3-e3c0-4fda-9c80-52f67f9ee9c9-kube-api-access-4svcl\") on node \"crc\" DevicePath \"\"" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.965281 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d106e4dc-f7ce-4270-9229-573ec5586711-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 06:48:07 crc 
kubenswrapper[4681]: I1123 06:48:07.965289 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdj9b\" (UniqueName: \"kubernetes.io/projected/fdfd882e-f012-452f-8709-32ddb2ddb019-kube-api-access-tdj9b\") on node \"crc\" DevicePath \"\"" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.965297 4681 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dae5706a-d59e-40ba-9546-7bed3f4f77aa-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.965304 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b61682f3-e3c0-4fda-9c80-52f67f9ee9c9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.965312 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b61682f3-e3c0-4fda-9c80-52f67f9ee9c9-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 06:48:07 crc kubenswrapper[4681]: I1123 06:48:07.975520 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdfd882e-f012-452f-8709-32ddb2ddb019-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fdfd882e-f012-452f-8709-32ddb2ddb019" (UID: "fdfd882e-f012-452f-8709-32ddb2ddb019"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.065604 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdfd882e-f012-452f-8709-32ddb2ddb019-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.212843 4681 generic.go:334] "Generic (PLEG): container finished" podID="d43c43f7-de50-40d4-8910-b502d1def095" containerID="6fd412850fbd663191b61fe9452feb097b192741efddad3356d3ebbdcb7e1d44" exitCode=0 Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.212915 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m56bk" event={"ID":"d43c43f7-de50-40d4-8910-b502d1def095","Type":"ContainerDied","Data":"6fd412850fbd663191b61fe9452feb097b192741efddad3356d3ebbdcb7e1d44"} Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.213146 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m56bk" event={"ID":"d43c43f7-de50-40d4-8910-b502d1def095","Type":"ContainerDied","Data":"61d7e560a60cbdae37710cd9ca9fdc30d2980ace0a168e65de5cf07340fb1d90"} Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.213168 4681 scope.go:117] "RemoveContainer" containerID="6fd412850fbd663191b61fe9452feb097b192741efddad3356d3ebbdcb7e1d44" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.213041 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m56bk" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.214263 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vcxlz" event={"ID":"9c3dcfb8-ee4f-4793-91c5-c7cfb2e6fa75","Type":"ContainerStarted","Data":"a7cef2f45114f19f0926639648b51837cbd21af0015de820221b79d39e78f79d"} Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.214316 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vcxlz" event={"ID":"9c3dcfb8-ee4f-4793-91c5-c7cfb2e6fa75","Type":"ContainerStarted","Data":"3d2844c4a05861ea6871b180d20819354692e583cd1066a486405c1af9abc055"} Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.214777 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vcxlz" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.215728 4681 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vcxlz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.55:8080/healthz\": dial tcp 10.217.0.55:8080: connect: connection refused" start-of-body= Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.215764 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vcxlz" podUID="9c3dcfb8-ee4f-4793-91c5-c7cfb2e6fa75" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.55:8080/healthz\": dial tcp 10.217.0.55:8080: connect: connection refused" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.217762 4681 generic.go:334] "Generic (PLEG): container finished" podID="dae5706a-d59e-40ba-9546-7bed3f4f77aa" containerID="b8c2fc4954ced80193ea9f97a670ae5a663f6f95d6ef9170e53f12e58a44dcdf" exitCode=0 Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.217879 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-g5zj2" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.217910 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-g5zj2" event={"ID":"dae5706a-d59e-40ba-9546-7bed3f4f77aa","Type":"ContainerDied","Data":"b8c2fc4954ced80193ea9f97a670ae5a663f6f95d6ef9170e53f12e58a44dcdf"} Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.217934 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-g5zj2" event={"ID":"dae5706a-d59e-40ba-9546-7bed3f4f77aa","Type":"ContainerDied","Data":"35b27457d5b4e697d57a5dc872b6fc07d1b2840769712a3fe44bef9d86db17a2"} Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.222496 4681 generic.go:334] "Generic (PLEG): container finished" podID="b61682f3-e3c0-4fda-9c80-52f67f9ee9c9" containerID="32875ed3ae69df7080fa0fa2a95fbaf161a397ac5de4848f351f6e105bd735dc" exitCode=0 Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.222565 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-48jrc" event={"ID":"b61682f3-e3c0-4fda-9c80-52f67f9ee9c9","Type":"ContainerDied","Data":"32875ed3ae69df7080fa0fa2a95fbaf161a397ac5de4848f351f6e105bd735dc"} Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.222723 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-48jrc" event={"ID":"b61682f3-e3c0-4fda-9c80-52f67f9ee9c9","Type":"ContainerDied","Data":"0e588b0fb0ea80685d1d5adff29acbef301ffe87a3bd1c50a0a3973dcbdcb875"} Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.222567 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-48jrc" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.224447 4681 generic.go:334] "Generic (PLEG): container finished" podID="d106e4dc-f7ce-4270-9229-573ec5586711" containerID="a2cfa21ad803c9f9dd13a8f184a1cb13c346020fac281906bdaaa6a3f563c418" exitCode=0 Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.224503 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fmqjr" event={"ID":"d106e4dc-f7ce-4270-9229-573ec5586711","Type":"ContainerDied","Data":"a2cfa21ad803c9f9dd13a8f184a1cb13c346020fac281906bdaaa6a3f563c418"} Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.224523 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fmqjr" event={"ID":"d106e4dc-f7ce-4270-9229-573ec5586711","Type":"ContainerDied","Data":"fe2b9bfce3a14abd90525ebf325704a99bcc0161ec9b23b98819863b0bd93dba"} Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.224608 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fmqjr" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.225248 4681 scope.go:117] "RemoveContainer" containerID="c748f0b91d0f86616088fe2030ad75bbf9b85a2bffed5f8d8a954d7538aa3be5" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.227499 4681 generic.go:334] "Generic (PLEG): container finished" podID="fdfd882e-f012-452f-8709-32ddb2ddb019" containerID="0cedf9bbd44387af7469b8e604dbfe2bc4e6bd6a59c4b509d124c0f02cf685d1" exitCode=0 Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.227558 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nqkpz" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.227570 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqkpz" event={"ID":"fdfd882e-f012-452f-8709-32ddb2ddb019","Type":"ContainerDied","Data":"0cedf9bbd44387af7469b8e604dbfe2bc4e6bd6a59c4b509d124c0f02cf685d1"} Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.227591 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqkpz" event={"ID":"fdfd882e-f012-452f-8709-32ddb2ddb019","Type":"ContainerDied","Data":"b84442fa30f0f19a732194ceb049ec68e8556b1625aa78533b6348d9f04b201e"} Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.234266 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-vcxlz" podStartSLOduration=1.234258058 podStartE2EDuration="1.234258058s" podCreationTimestamp="2025-11-23 06:48:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:48:08.233509838 +0000 UTC m=+225.303019075" watchObservedRunningTime="2025-11-23 06:48:08.234258058 +0000 UTC m=+225.303767295" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.256401 4681 scope.go:117] "RemoveContainer" containerID="bb00de579f3abfda3e67c0eb12f81117dc1aac204ac568a514c7c1e3176ff8c7" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.268735 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g5zj2"] Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.271372 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g5zj2"] Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.275429 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m56bk"] Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.277521 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m56bk"] Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.280248 4681 scope.go:117] "RemoveContainer" containerID="6fd412850fbd663191b61fe9452feb097b192741efddad3356d3ebbdcb7e1d44" Nov 23 06:48:08 crc kubenswrapper[4681]: E1123 06:48:08.280594 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fd412850fbd663191b61fe9452feb097b192741efddad3356d3ebbdcb7e1d44\": container with ID starting with 6fd412850fbd663191b61fe9452feb097b192741efddad3356d3ebbdcb7e1d44 not found: ID does not exist" containerID="6fd412850fbd663191b61fe9452feb097b192741efddad3356d3ebbdcb7e1d44" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.280640 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fd412850fbd663191b61fe9452feb097b192741efddad3356d3ebbdcb7e1d44"} err="failed to get container status \"6fd412850fbd663191b61fe9452feb097b192741efddad3356d3ebbdcb7e1d44\": rpc error: code = NotFound desc = could not find container \"6fd412850fbd663191b61fe9452feb097b192741efddad3356d3ebbdcb7e1d44\": container with ID starting with 6fd412850fbd663191b61fe9452feb097b192741efddad3356d3ebbdcb7e1d44 not found: ID does not exist" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.280663 4681 scope.go:117] "RemoveContainer" 
containerID="c748f0b91d0f86616088fe2030ad75bbf9b85a2bffed5f8d8a954d7538aa3be5" Nov 23 06:48:08 crc kubenswrapper[4681]: E1123 06:48:08.280991 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c748f0b91d0f86616088fe2030ad75bbf9b85a2bffed5f8d8a954d7538aa3be5\": container with ID starting with c748f0b91d0f86616088fe2030ad75bbf9b85a2bffed5f8d8a954d7538aa3be5 not found: ID does not exist" containerID="c748f0b91d0f86616088fe2030ad75bbf9b85a2bffed5f8d8a954d7538aa3be5" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.281025 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c748f0b91d0f86616088fe2030ad75bbf9b85a2bffed5f8d8a954d7538aa3be5"} err="failed to get container status \"c748f0b91d0f86616088fe2030ad75bbf9b85a2bffed5f8d8a954d7538aa3be5\": rpc error: code = NotFound desc = could not find container \"c748f0b91d0f86616088fe2030ad75bbf9b85a2bffed5f8d8a954d7538aa3be5\": container with ID starting with c748f0b91d0f86616088fe2030ad75bbf9b85a2bffed5f8d8a954d7538aa3be5 not found: ID does not exist" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.281050 4681 scope.go:117] "RemoveContainer" containerID="bb00de579f3abfda3e67c0eb12f81117dc1aac204ac568a514c7c1e3176ff8c7" Nov 23 06:48:08 crc kubenswrapper[4681]: E1123 06:48:08.281526 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb00de579f3abfda3e67c0eb12f81117dc1aac204ac568a514c7c1e3176ff8c7\": container with ID starting with bb00de579f3abfda3e67c0eb12f81117dc1aac204ac568a514c7c1e3176ff8c7 not found: ID does not exist" containerID="bb00de579f3abfda3e67c0eb12f81117dc1aac204ac568a514c7c1e3176ff8c7" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.281632 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb00de579f3abfda3e67c0eb12f81117dc1aac204ac568a514c7c1e3176ff8c7"} err="failed to get container status \"bb00de579f3abfda3e67c0eb12f81117dc1aac204ac568a514c7c1e3176ff8c7\": rpc error: code = NotFound desc = could not find container \"bb00de579f3abfda3e67c0eb12f81117dc1aac204ac568a514c7c1e3176ff8c7\": container with ID starting with bb00de579f3abfda3e67c0eb12f81117dc1aac204ac568a514c7c1e3176ff8c7 not found: ID does not exist" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.281653 4681 scope.go:117] "RemoveContainer" containerID="b8c2fc4954ced80193ea9f97a670ae5a663f6f95d6ef9170e53f12e58a44dcdf" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.292122 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fmqjr"] Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.294647 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fmqjr"] Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.295608 4681 scope.go:117] "RemoveContainer" containerID="b8c2fc4954ced80193ea9f97a670ae5a663f6f95d6ef9170e53f12e58a44dcdf" Nov 23 06:48:08 crc kubenswrapper[4681]: E1123 06:48:08.296013 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8c2fc4954ced80193ea9f97a670ae5a663f6f95d6ef9170e53f12e58a44dcdf\": container with ID starting with b8c2fc4954ced80193ea9f97a670ae5a663f6f95d6ef9170e53f12e58a44dcdf not found: ID does not exist" containerID="b8c2fc4954ced80193ea9f97a670ae5a663f6f95d6ef9170e53f12e58a44dcdf" Nov 23 
06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.296095 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8c2fc4954ced80193ea9f97a670ae5a663f6f95d6ef9170e53f12e58a44dcdf"} err="failed to get container status \"b8c2fc4954ced80193ea9f97a670ae5a663f6f95d6ef9170e53f12e58a44dcdf\": rpc error: code = NotFound desc = could not find container \"b8c2fc4954ced80193ea9f97a670ae5a663f6f95d6ef9170e53f12e58a44dcdf\": container with ID starting with b8c2fc4954ced80193ea9f97a670ae5a663f6f95d6ef9170e53f12e58a44dcdf not found: ID does not exist" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.296127 4681 scope.go:117] "RemoveContainer" containerID="32875ed3ae69df7080fa0fa2a95fbaf161a397ac5de4848f351f6e105bd735dc" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.310003 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-48jrc"] Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.312583 4681 scope.go:117] "RemoveContainer" containerID="feef89f4db6b8047ec1ce790dabfd7e51a7d839522b5eb049883e96c143860d9" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.316632 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-48jrc"] Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.318916 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nqkpz"] Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.321164 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nqkpz"] Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.325927 4681 scope.go:117] "RemoveContainer" containerID="f3986b9c081b0f21c8aba4b2abdc7abf5c4d45687b0be526ebf77304cb429cb9" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.336899 4681 scope.go:117] "RemoveContainer" containerID="32875ed3ae69df7080fa0fa2a95fbaf161a397ac5de4848f351f6e105bd735dc" Nov 23 06:48:08 crc kubenswrapper[4681]: E1123 06:48:08.337184 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32875ed3ae69df7080fa0fa2a95fbaf161a397ac5de4848f351f6e105bd735dc\": container with ID starting with 32875ed3ae69df7080fa0fa2a95fbaf161a397ac5de4848f351f6e105bd735dc not found: ID does not exist" containerID="32875ed3ae69df7080fa0fa2a95fbaf161a397ac5de4848f351f6e105bd735dc" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.337222 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32875ed3ae69df7080fa0fa2a95fbaf161a397ac5de4848f351f6e105bd735dc"} err="failed to get container status \"32875ed3ae69df7080fa0fa2a95fbaf161a397ac5de4848f351f6e105bd735dc\": rpc error: code = NotFound desc = could not find container \"32875ed3ae69df7080fa0fa2a95fbaf161a397ac5de4848f351f6e105bd735dc\": container with ID starting with 32875ed3ae69df7080fa0fa2a95fbaf161a397ac5de4848f351f6e105bd735dc not found: ID does not exist" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.337251 4681 scope.go:117] "RemoveContainer" containerID="feef89f4db6b8047ec1ce790dabfd7e51a7d839522b5eb049883e96c143860d9" Nov 23 06:48:08 crc kubenswrapper[4681]: E1123 06:48:08.337515 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"feef89f4db6b8047ec1ce790dabfd7e51a7d839522b5eb049883e96c143860d9\": container with ID starting with 
feef89f4db6b8047ec1ce790dabfd7e51a7d839522b5eb049883e96c143860d9 not found: ID does not exist" containerID="feef89f4db6b8047ec1ce790dabfd7e51a7d839522b5eb049883e96c143860d9" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.337544 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"feef89f4db6b8047ec1ce790dabfd7e51a7d839522b5eb049883e96c143860d9"} err="failed to get container status \"feef89f4db6b8047ec1ce790dabfd7e51a7d839522b5eb049883e96c143860d9\": rpc error: code = NotFound desc = could not find container \"feef89f4db6b8047ec1ce790dabfd7e51a7d839522b5eb049883e96c143860d9\": container with ID starting with feef89f4db6b8047ec1ce790dabfd7e51a7d839522b5eb049883e96c143860d9 not found: ID does not exist" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.337565 4681 scope.go:117] "RemoveContainer" containerID="f3986b9c081b0f21c8aba4b2abdc7abf5c4d45687b0be526ebf77304cb429cb9" Nov 23 06:48:08 crc kubenswrapper[4681]: E1123 06:48:08.337802 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3986b9c081b0f21c8aba4b2abdc7abf5c4d45687b0be526ebf77304cb429cb9\": container with ID starting with f3986b9c081b0f21c8aba4b2abdc7abf5c4d45687b0be526ebf77304cb429cb9 not found: ID does not exist" containerID="f3986b9c081b0f21c8aba4b2abdc7abf5c4d45687b0be526ebf77304cb429cb9" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.337832 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3986b9c081b0f21c8aba4b2abdc7abf5c4d45687b0be526ebf77304cb429cb9"} err="failed to get container status \"f3986b9c081b0f21c8aba4b2abdc7abf5c4d45687b0be526ebf77304cb429cb9\": rpc error: code = NotFound desc = could not find container \"f3986b9c081b0f21c8aba4b2abdc7abf5c4d45687b0be526ebf77304cb429cb9\": container with ID starting with f3986b9c081b0f21c8aba4b2abdc7abf5c4d45687b0be526ebf77304cb429cb9 not found: ID does not exist" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.337850 4681 scope.go:117] "RemoveContainer" containerID="a2cfa21ad803c9f9dd13a8f184a1cb13c346020fac281906bdaaa6a3f563c418" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.352021 4681 scope.go:117] "RemoveContainer" containerID="23f4498d17ca5ec16c42675cc8ab4a2bc8a996f0efb8d6248a96d90597f51d4a" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.361354 4681 scope.go:117] "RemoveContainer" containerID="1cb79ac6334ea823ea9514e5ece6bc0c68a3af5e3559c264c467a4abe21cf6d2" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.370916 4681 scope.go:117] "RemoveContainer" containerID="a2cfa21ad803c9f9dd13a8f184a1cb13c346020fac281906bdaaa6a3f563c418" Nov 23 06:48:08 crc kubenswrapper[4681]: E1123 06:48:08.371318 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2cfa21ad803c9f9dd13a8f184a1cb13c346020fac281906bdaaa6a3f563c418\": container with ID starting with a2cfa21ad803c9f9dd13a8f184a1cb13c346020fac281906bdaaa6a3f563c418 not found: ID does not exist" containerID="a2cfa21ad803c9f9dd13a8f184a1cb13c346020fac281906bdaaa6a3f563c418" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.371346 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2cfa21ad803c9f9dd13a8f184a1cb13c346020fac281906bdaaa6a3f563c418"} err="failed to get container status \"a2cfa21ad803c9f9dd13a8f184a1cb13c346020fac281906bdaaa6a3f563c418\": rpc error: code = NotFound desc 
= could not find container \"a2cfa21ad803c9f9dd13a8f184a1cb13c346020fac281906bdaaa6a3f563c418\": container with ID starting with a2cfa21ad803c9f9dd13a8f184a1cb13c346020fac281906bdaaa6a3f563c418 not found: ID does not exist" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.371367 4681 scope.go:117] "RemoveContainer" containerID="23f4498d17ca5ec16c42675cc8ab4a2bc8a996f0efb8d6248a96d90597f51d4a" Nov 23 06:48:08 crc kubenswrapper[4681]: E1123 06:48:08.371917 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23f4498d17ca5ec16c42675cc8ab4a2bc8a996f0efb8d6248a96d90597f51d4a\": container with ID starting with 23f4498d17ca5ec16c42675cc8ab4a2bc8a996f0efb8d6248a96d90597f51d4a not found: ID does not exist" containerID="23f4498d17ca5ec16c42675cc8ab4a2bc8a996f0efb8d6248a96d90597f51d4a" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.371940 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23f4498d17ca5ec16c42675cc8ab4a2bc8a996f0efb8d6248a96d90597f51d4a"} err="failed to get container status \"23f4498d17ca5ec16c42675cc8ab4a2bc8a996f0efb8d6248a96d90597f51d4a\": rpc error: code = NotFound desc = could not find container \"23f4498d17ca5ec16c42675cc8ab4a2bc8a996f0efb8d6248a96d90597f51d4a\": container with ID starting with 23f4498d17ca5ec16c42675cc8ab4a2bc8a996f0efb8d6248a96d90597f51d4a not found: ID does not exist" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.371955 4681 scope.go:117] "RemoveContainer" containerID="1cb79ac6334ea823ea9514e5ece6bc0c68a3af5e3559c264c467a4abe21cf6d2" Nov 23 06:48:08 crc kubenswrapper[4681]: E1123 06:48:08.372191 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cb79ac6334ea823ea9514e5ece6bc0c68a3af5e3559c264c467a4abe21cf6d2\": container with ID starting with 1cb79ac6334ea823ea9514e5ece6bc0c68a3af5e3559c264c467a4abe21cf6d2 not found: ID does not exist" containerID="1cb79ac6334ea823ea9514e5ece6bc0c68a3af5e3559c264c467a4abe21cf6d2" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.372219 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cb79ac6334ea823ea9514e5ece6bc0c68a3af5e3559c264c467a4abe21cf6d2"} err="failed to get container status \"1cb79ac6334ea823ea9514e5ece6bc0c68a3af5e3559c264c467a4abe21cf6d2\": rpc error: code = NotFound desc = could not find container \"1cb79ac6334ea823ea9514e5ece6bc0c68a3af5e3559c264c467a4abe21cf6d2\": container with ID starting with 1cb79ac6334ea823ea9514e5ece6bc0c68a3af5e3559c264c467a4abe21cf6d2 not found: ID does not exist" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.372240 4681 scope.go:117] "RemoveContainer" containerID="0cedf9bbd44387af7469b8e604dbfe2bc4e6bd6a59c4b509d124c0f02cf685d1" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.382864 4681 scope.go:117] "RemoveContainer" containerID="01fae3d05805780ef133469a382ad4b57f52a2e4613a959f70f5f1b34dbd6a3b" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.398108 4681 scope.go:117] "RemoveContainer" containerID="ca0873b032b1f4f0f4de85d4aceb23ca9c44d54ebd34a4e3a0a101652fcdea45" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.415843 4681 scope.go:117] "RemoveContainer" containerID="0cedf9bbd44387af7469b8e604dbfe2bc4e6bd6a59c4b509d124c0f02cf685d1" Nov 23 06:48:08 crc kubenswrapper[4681]: E1123 06:48:08.416233 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"0cedf9bbd44387af7469b8e604dbfe2bc4e6bd6a59c4b509d124c0f02cf685d1\": container with ID starting with 0cedf9bbd44387af7469b8e604dbfe2bc4e6bd6a59c4b509d124c0f02cf685d1 not found: ID does not exist" containerID="0cedf9bbd44387af7469b8e604dbfe2bc4e6bd6a59c4b509d124c0f02cf685d1" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.416261 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cedf9bbd44387af7469b8e604dbfe2bc4e6bd6a59c4b509d124c0f02cf685d1"} err="failed to get container status \"0cedf9bbd44387af7469b8e604dbfe2bc4e6bd6a59c4b509d124c0f02cf685d1\": rpc error: code = NotFound desc = could not find container \"0cedf9bbd44387af7469b8e604dbfe2bc4e6bd6a59c4b509d124c0f02cf685d1\": container with ID starting with 0cedf9bbd44387af7469b8e604dbfe2bc4e6bd6a59c4b509d124c0f02cf685d1 not found: ID does not exist" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.416277 4681 scope.go:117] "RemoveContainer" containerID="01fae3d05805780ef133469a382ad4b57f52a2e4613a959f70f5f1b34dbd6a3b" Nov 23 06:48:08 crc kubenswrapper[4681]: E1123 06:48:08.416710 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01fae3d05805780ef133469a382ad4b57f52a2e4613a959f70f5f1b34dbd6a3b\": container with ID starting with 01fae3d05805780ef133469a382ad4b57f52a2e4613a959f70f5f1b34dbd6a3b not found: ID does not exist" containerID="01fae3d05805780ef133469a382ad4b57f52a2e4613a959f70f5f1b34dbd6a3b" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.416746 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01fae3d05805780ef133469a382ad4b57f52a2e4613a959f70f5f1b34dbd6a3b"} err="failed to get container status \"01fae3d05805780ef133469a382ad4b57f52a2e4613a959f70f5f1b34dbd6a3b\": rpc error: code = NotFound desc = could not find container \"01fae3d05805780ef133469a382ad4b57f52a2e4613a959f70f5f1b34dbd6a3b\": container with ID starting with 01fae3d05805780ef133469a382ad4b57f52a2e4613a959f70f5f1b34dbd6a3b not found: ID does not exist" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.416792 4681 scope.go:117] "RemoveContainer" containerID="ca0873b032b1f4f0f4de85d4aceb23ca9c44d54ebd34a4e3a0a101652fcdea45" Nov 23 06:48:08 crc kubenswrapper[4681]: E1123 06:48:08.417744 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca0873b032b1f4f0f4de85d4aceb23ca9c44d54ebd34a4e3a0a101652fcdea45\": container with ID starting with ca0873b032b1f4f0f4de85d4aceb23ca9c44d54ebd34a4e3a0a101652fcdea45 not found: ID does not exist" containerID="ca0873b032b1f4f0f4de85d4aceb23ca9c44d54ebd34a4e3a0a101652fcdea45" Nov 23 06:48:08 crc kubenswrapper[4681]: I1123 06:48:08.417770 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca0873b032b1f4f0f4de85d4aceb23ca9c44d54ebd34a4e3a0a101652fcdea45"} err="failed to get container status \"ca0873b032b1f4f0f4de85d4aceb23ca9c44d54ebd34a4e3a0a101652fcdea45\": rpc error: code = NotFound desc = could not find container \"ca0873b032b1f4f0f4de85d4aceb23ca9c44d54ebd34a4e3a0a101652fcdea45\": container with ID starting with ca0873b032b1f4f0f4de85d4aceb23ca9c44d54ebd34a4e3a0a101652fcdea45 not found: ID does not exist"
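The RemoveContainer / NotFound error pairs above are the kubelet garbage-collecting containers for pods that were just deleted from the API: the first removal succeeds, and the follow-up ContainerStatus / DeleteContainer calls fail with gRPC NotFound because CRI-O has already discarded the ID. Treating NotFound as success makes the cleanup idempotent, which is why these E-level entries are harmless. A minimal Go sketch of that pattern follows; it illustrates the error handling only and is not kubelet source, and removeFn is a hypothetical stand-in for the CRI RemoveContainer RPC.

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// cleanupContainer removes a container by ID. removeFn is a hypothetical
// stand-in for the CRI RemoveContainer RPC; this is a sketch, not kubelet code.
func cleanupContainer(removeFn func(id string) error, id string) error {
	if err := removeFn(id); err != nil {
		// CRI-O answers gRPC NotFound when the container is already gone,
		// as in the entries above. The desired state (container deleted)
		// already holds, so the error can be treated as success.
		if status.Code(err) == codes.NotFound {
			return nil
		}
		return fmt.Errorf("removing container %s: %w", id, err)
	}
	return nil
}

func main() {
	alreadyGone := status.Error(codes.NotFound, "could not find container")
	err := cleanupContainer(func(string) error { return alreadyGone }, "6fd41285")
	fmt.Printf("cleanup result: %v\n", err) // prints: cleanup result: <nil>
}
```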
pod="openshift-marketplace/marketplace-operator-79b997595-vcxlz" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.259583 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b61682f3-e3c0-4fda-9c80-52f67f9ee9c9" path="/var/lib/kubelet/pods/b61682f3-e3c0-4fda-9c80-52f67f9ee9c9/volumes" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.260273 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d106e4dc-f7ce-4270-9229-573ec5586711" path="/var/lib/kubelet/pods/d106e4dc-f7ce-4270-9229-573ec5586711/volumes" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.260904 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d43c43f7-de50-40d4-8910-b502d1def095" path="/var/lib/kubelet/pods/d43c43f7-de50-40d4-8910-b502d1def095/volumes" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.261904 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dae5706a-d59e-40ba-9546-7bed3f4f77aa" path="/var/lib/kubelet/pods/dae5706a-d59e-40ba-9546-7bed3f4f77aa/volumes" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.262389 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdfd882e-f012-452f-8709-32ddb2ddb019" path="/var/lib/kubelet/pods/fdfd882e-f012-452f-8709-32ddb2ddb019/volumes" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.555049 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pf4pb"] Nov 23 06:48:09 crc kubenswrapper[4681]: E1123 06:48:09.555958 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b61682f3-e3c0-4fda-9c80-52f67f9ee9c9" containerName="extract-utilities" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.556058 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="b61682f3-e3c0-4fda-9c80-52f67f9ee9c9" containerName="extract-utilities" Nov 23 06:48:09 crc kubenswrapper[4681]: E1123 06:48:09.556119 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d43c43f7-de50-40d4-8910-b502d1def095" containerName="registry-server" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.556165 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d43c43f7-de50-40d4-8910-b502d1def095" containerName="registry-server" Nov 23 06:48:09 crc kubenswrapper[4681]: E1123 06:48:09.556219 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdfd882e-f012-452f-8709-32ddb2ddb019" containerName="registry-server" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.556487 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdfd882e-f012-452f-8709-32ddb2ddb019" containerName="registry-server" Nov 23 06:48:09 crc kubenswrapper[4681]: E1123 06:48:09.556562 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d43c43f7-de50-40d4-8910-b502d1def095" containerName="extract-utilities" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.556612 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d43c43f7-de50-40d4-8910-b502d1def095" containerName="extract-utilities" Nov 23 06:48:09 crc kubenswrapper[4681]: E1123 06:48:09.556670 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d106e4dc-f7ce-4270-9229-573ec5586711" containerName="registry-server" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.556717 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d106e4dc-f7ce-4270-9229-573ec5586711" containerName="registry-server" Nov 23 06:48:09 crc kubenswrapper[4681]: E1123 06:48:09.556771 4681 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdfd882e-f012-452f-8709-32ddb2ddb019" containerName="extract-utilities" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.556817 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdfd882e-f012-452f-8709-32ddb2ddb019" containerName="extract-utilities" Nov 23 06:48:09 crc kubenswrapper[4681]: E1123 06:48:09.556866 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d106e4dc-f7ce-4270-9229-573ec5586711" containerName="extract-content" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.556909 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d106e4dc-f7ce-4270-9229-573ec5586711" containerName="extract-content" Nov 23 06:48:09 crc kubenswrapper[4681]: E1123 06:48:09.556971 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d43c43f7-de50-40d4-8910-b502d1def095" containerName="extract-content" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.557017 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d43c43f7-de50-40d4-8910-b502d1def095" containerName="extract-content" Nov 23 06:48:09 crc kubenswrapper[4681]: E1123 06:48:09.557067 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdfd882e-f012-452f-8709-32ddb2ddb019" containerName="extract-content" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.557112 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdfd882e-f012-452f-8709-32ddb2ddb019" containerName="extract-content" Nov 23 06:48:09 crc kubenswrapper[4681]: E1123 06:48:09.557156 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b61682f3-e3c0-4fda-9c80-52f67f9ee9c9" containerName="extract-content" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.557205 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="b61682f3-e3c0-4fda-9c80-52f67f9ee9c9" containerName="extract-content" Nov 23 06:48:09 crc kubenswrapper[4681]: E1123 06:48:09.557250 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b61682f3-e3c0-4fda-9c80-52f67f9ee9c9" containerName="registry-server" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.557293 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="b61682f3-e3c0-4fda-9c80-52f67f9ee9c9" containerName="registry-server" Nov 23 06:48:09 crc kubenswrapper[4681]: E1123 06:48:09.557354 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dae5706a-d59e-40ba-9546-7bed3f4f77aa" containerName="marketplace-operator" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.557405 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="dae5706a-d59e-40ba-9546-7bed3f4f77aa" containerName="marketplace-operator" Nov 23 06:48:09 crc kubenswrapper[4681]: E1123 06:48:09.557471 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d106e4dc-f7ce-4270-9229-573ec5586711" containerName="extract-utilities" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.557531 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d106e4dc-f7ce-4270-9229-573ec5586711" containerName="extract-utilities" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.557668 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="dae5706a-d59e-40ba-9546-7bed3f4f77aa" containerName="marketplace-operator" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.557740 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="b61682f3-e3c0-4fda-9c80-52f67f9ee9c9" containerName="registry-server" Nov 23 06:48:09 
crc kubenswrapper[4681]: I1123 06:48:09.557791 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdfd882e-f012-452f-8709-32ddb2ddb019" containerName="registry-server" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.557844 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d106e4dc-f7ce-4270-9229-573ec5586711" containerName="registry-server" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.557943 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d43c43f7-de50-40d4-8910-b502d1def095" containerName="registry-server" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.560291 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pf4pb" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.562085 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.563580 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pf4pb"] Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.581607 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02385b8b-028a-4af7-b56d-8c816d80a3b6-utilities\") pod \"certified-operators-pf4pb\" (UID: \"02385b8b-028a-4af7-b56d-8c816d80a3b6\") " pod="openshift-marketplace/certified-operators-pf4pb" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.581644 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02385b8b-028a-4af7-b56d-8c816d80a3b6-catalog-content\") pod \"certified-operators-pf4pb\" (UID: \"02385b8b-028a-4af7-b56d-8c816d80a3b6\") " pod="openshift-marketplace/certified-operators-pf4pb" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.581677 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47tcw\" (UniqueName: \"kubernetes.io/projected/02385b8b-028a-4af7-b56d-8c816d80a3b6-kube-api-access-47tcw\") pod \"certified-operators-pf4pb\" (UID: \"02385b8b-028a-4af7-b56d-8c816d80a3b6\") " pod="openshift-marketplace/certified-operators-pf4pb" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.682300 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02385b8b-028a-4af7-b56d-8c816d80a3b6-utilities\") pod \"certified-operators-pf4pb\" (UID: \"02385b8b-028a-4af7-b56d-8c816d80a3b6\") " pod="openshift-marketplace/certified-operators-pf4pb" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.682425 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02385b8b-028a-4af7-b56d-8c816d80a3b6-catalog-content\") pod \"certified-operators-pf4pb\" (UID: \"02385b8b-028a-4af7-b56d-8c816d80a3b6\") " pod="openshift-marketplace/certified-operators-pf4pb" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.682528 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47tcw\" (UniqueName: \"kubernetes.io/projected/02385b8b-028a-4af7-b56d-8c816d80a3b6-kube-api-access-47tcw\") pod \"certified-operators-pf4pb\" (UID: \"02385b8b-028a-4af7-b56d-8c816d80a3b6\") " 
pod="openshift-marketplace/certified-operators-pf4pb" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.682826 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02385b8b-028a-4af7-b56d-8c816d80a3b6-utilities\") pod \"certified-operators-pf4pb\" (UID: \"02385b8b-028a-4af7-b56d-8c816d80a3b6\") " pod="openshift-marketplace/certified-operators-pf4pb" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.684613 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02385b8b-028a-4af7-b56d-8c816d80a3b6-catalog-content\") pod \"certified-operators-pf4pb\" (UID: \"02385b8b-028a-4af7-b56d-8c816d80a3b6\") " pod="openshift-marketplace/certified-operators-pf4pb" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.696690 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47tcw\" (UniqueName: \"kubernetes.io/projected/02385b8b-028a-4af7-b56d-8c816d80a3b6-kube-api-access-47tcw\") pod \"certified-operators-pf4pb\" (UID: \"02385b8b-028a-4af7-b56d-8c816d80a3b6\") " pod="openshift-marketplace/certified-operators-pf4pb" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.756982 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xs7np"] Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.757852 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xs7np" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.760515 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.762256 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xs7np"] Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.783941 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39-catalog-content\") pod \"community-operators-xs7np\" (UID: \"a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39\") " pod="openshift-marketplace/community-operators-xs7np" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.783972 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39-utilities\") pod \"community-operators-xs7np\" (UID: \"a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39\") " pod="openshift-marketplace/community-operators-xs7np" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.784001 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-468zz\" (UniqueName: \"kubernetes.io/projected/a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39-kube-api-access-468zz\") pod \"community-operators-xs7np\" (UID: \"a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39\") " pod="openshift-marketplace/community-operators-xs7np" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.878675 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pf4pb" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.885370 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39-catalog-content\") pod \"community-operators-xs7np\" (UID: \"a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39\") " pod="openshift-marketplace/community-operators-xs7np" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.885430 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39-utilities\") pod \"community-operators-xs7np\" (UID: \"a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39\") " pod="openshift-marketplace/community-operators-xs7np" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.885482 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-468zz\" (UniqueName: \"kubernetes.io/projected/a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39-kube-api-access-468zz\") pod \"community-operators-xs7np\" (UID: \"a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39\") " pod="openshift-marketplace/community-operators-xs7np" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.886264 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39-catalog-content\") pod \"community-operators-xs7np\" (UID: \"a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39\") " pod="openshift-marketplace/community-operators-xs7np" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.886520 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39-utilities\") pod \"community-operators-xs7np\" (UID: \"a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39\") " pod="openshift-marketplace/community-operators-xs7np" Nov 23 06:48:09 crc kubenswrapper[4681]: I1123 06:48:09.906823 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-468zz\" (UniqueName: \"kubernetes.io/projected/a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39-kube-api-access-468zz\") pod \"community-operators-xs7np\" (UID: \"a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39\") " pod="openshift-marketplace/community-operators-xs7np" Nov 23 06:48:10 crc kubenswrapper[4681]: I1123 06:48:10.072634 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xs7np" Nov 23 06:48:10 crc kubenswrapper[4681]: I1123 06:48:10.219876 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pf4pb"] Nov 23 06:48:10 crc kubenswrapper[4681]: W1123 06:48:10.223187 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02385b8b_028a_4af7_b56d_8c816d80a3b6.slice/crio-64d215a2187383cf9929e71c1edce2577190a0646709f56bfb64d66f5118c1c4 WatchSource:0}: Error finding container 64d215a2187383cf9929e71c1edce2577190a0646709f56bfb64d66f5118c1c4: Status 404 returned error can't find the container with id 64d215a2187383cf9929e71c1edce2577190a0646709f56bfb64d66f5118c1c4 Nov 23 06:48:10 crc kubenswrapper[4681]: I1123 06:48:10.245796 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pf4pb" event={"ID":"02385b8b-028a-4af7-b56d-8c816d80a3b6","Type":"ContainerStarted","Data":"64d215a2187383cf9929e71c1edce2577190a0646709f56bfb64d66f5118c1c4"} Nov 23 06:48:10 crc kubenswrapper[4681]: I1123 06:48:10.411512 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xs7np"] Nov 23 06:48:10 crc kubenswrapper[4681]: W1123 06:48:10.413614 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2cd2dfb_b72c_4fad_9a4d_13dd73dcbb39.slice/crio-0e27211a33208f6762f02e43d89b784638f15f31ef8a7b869a007065e8c1c578 WatchSource:0}: Error finding container 0e27211a33208f6762f02e43d89b784638f15f31ef8a7b869a007065e8c1c578: Status 404 returned error can't find the container with id 0e27211a33208f6762f02e43d89b784638f15f31ef8a7b869a007065e8c1c578 Nov 23 06:48:11 crc kubenswrapper[4681]: I1123 06:48:11.254117 4681 generic.go:334] "Generic (PLEG): container finished" podID="a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39" containerID="c8682d07698e6e870831970c8b69b68b675cdeaed5eb69f5e6afccee86a991c7" exitCode=0 Nov 23 06:48:11 crc kubenswrapper[4681]: I1123 06:48:11.261206 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xs7np" event={"ID":"a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39","Type":"ContainerDied","Data":"c8682d07698e6e870831970c8b69b68b675cdeaed5eb69f5e6afccee86a991c7"} Nov 23 06:48:11 crc kubenswrapper[4681]: I1123 06:48:11.261246 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xs7np" event={"ID":"a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39","Type":"ContainerStarted","Data":"0e27211a33208f6762f02e43d89b784638f15f31ef8a7b869a007065e8c1c578"} Nov 23 06:48:11 crc kubenswrapper[4681]: I1123 06:48:11.262663 4681 generic.go:334] "Generic (PLEG): container finished" podID="02385b8b-028a-4af7-b56d-8c816d80a3b6" containerID="3b12cb2f1b00262f167a84092f2574dccf1e4f7072bd92cf916d48a014463959" exitCode=0 Nov 23 06:48:11 crc kubenswrapper[4681]: I1123 06:48:11.262735 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pf4pb" event={"ID":"02385b8b-028a-4af7-b56d-8c816d80a3b6","Type":"ContainerDied","Data":"3b12cb2f1b00262f167a84092f2574dccf1e4f7072bd92cf916d48a014463959"} Nov 23 06:48:11 crc kubenswrapper[4681]: I1123 06:48:11.953858 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hzq2t"] Nov 23 06:48:11 crc kubenswrapper[4681]: I1123 06:48:11.957670 4681 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hzq2t" Nov 23 06:48:11 crc kubenswrapper[4681]: I1123 06:48:11.960782 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 23 06:48:11 crc kubenswrapper[4681]: I1123 06:48:11.965905 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hzq2t"] Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.008124 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fb2e180-6c70-447e-9c60-f62844cf5779-catalog-content\") pod \"redhat-operators-hzq2t\" (UID: \"9fb2e180-6c70-447e-9c60-f62844cf5779\") " pod="openshift-marketplace/redhat-operators-hzq2t" Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.008209 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7lns\" (UniqueName: \"kubernetes.io/projected/9fb2e180-6c70-447e-9c60-f62844cf5779-kube-api-access-n7lns\") pod \"redhat-operators-hzq2t\" (UID: \"9fb2e180-6c70-447e-9c60-f62844cf5779\") " pod="openshift-marketplace/redhat-operators-hzq2t" Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.008232 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fb2e180-6c70-447e-9c60-f62844cf5779-utilities\") pod \"redhat-operators-hzq2t\" (UID: \"9fb2e180-6c70-447e-9c60-f62844cf5779\") " pod="openshift-marketplace/redhat-operators-hzq2t" Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.108837 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7lns\" (UniqueName: \"kubernetes.io/projected/9fb2e180-6c70-447e-9c60-f62844cf5779-kube-api-access-n7lns\") pod \"redhat-operators-hzq2t\" (UID: \"9fb2e180-6c70-447e-9c60-f62844cf5779\") " pod="openshift-marketplace/redhat-operators-hzq2t" Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.109051 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fb2e180-6c70-447e-9c60-f62844cf5779-utilities\") pod \"redhat-operators-hzq2t\" (UID: \"9fb2e180-6c70-447e-9c60-f62844cf5779\") " pod="openshift-marketplace/redhat-operators-hzq2t" Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.109141 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fb2e180-6c70-447e-9c60-f62844cf5779-catalog-content\") pod \"redhat-operators-hzq2t\" (UID: \"9fb2e180-6c70-447e-9c60-f62844cf5779\") " pod="openshift-marketplace/redhat-operators-hzq2t" Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.109575 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fb2e180-6c70-447e-9c60-f62844cf5779-catalog-content\") pod \"redhat-operators-hzq2t\" (UID: \"9fb2e180-6c70-447e-9c60-f62844cf5779\") " pod="openshift-marketplace/redhat-operators-hzq2t" Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.109684 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fb2e180-6c70-447e-9c60-f62844cf5779-utilities\") pod \"redhat-operators-hzq2t\" (UID: 
\"9fb2e180-6c70-447e-9c60-f62844cf5779\") " pod="openshift-marketplace/redhat-operators-hzq2t" Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.123749 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7lns\" (UniqueName: \"kubernetes.io/projected/9fb2e180-6c70-447e-9c60-f62844cf5779-kube-api-access-n7lns\") pod \"redhat-operators-hzq2t\" (UID: \"9fb2e180-6c70-447e-9c60-f62844cf5779\") " pod="openshift-marketplace/redhat-operators-hzq2t" Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.153006 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tz749"] Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.154000 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tz749" Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.157635 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.168031 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tz749"] Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.268187 4681 generic.go:334] "Generic (PLEG): container finished" podID="a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39" containerID="39ca7fb3d78fc592819e641165de5f5670907ca18e3ecd9cd75a7faae7eedc80" exitCode=0 Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.268248 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xs7np" event={"ID":"a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39","Type":"ContainerDied","Data":"39ca7fb3d78fc592819e641165de5f5670907ca18e3ecd9cd75a7faae7eedc80"} Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.274229 4681 generic.go:334] "Generic (PLEG): container finished" podID="02385b8b-028a-4af7-b56d-8c816d80a3b6" containerID="688b4b433efac88e0c08329e665a381e31cf0645a7d74fecef31aafd2f7b929c" exitCode=0 Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.274281 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pf4pb" event={"ID":"02385b8b-028a-4af7-b56d-8c816d80a3b6","Type":"ContainerDied","Data":"688b4b433efac88e0c08329e665a381e31cf0645a7d74fecef31aafd2f7b929c"} Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.290121 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hzq2t" Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.295819 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.295862 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.295912 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.296380 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25"} pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.296450 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" containerID="cri-o://632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25" gracePeriod=600 Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.312271 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f343d03-be6b-4b7f-8313-1ed68abf5d7a-utilities\") pod \"redhat-marketplace-tz749\" (UID: \"3f343d03-be6b-4b7f-8313-1ed68abf5d7a\") " pod="openshift-marketplace/redhat-marketplace-tz749" Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.312312 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjrkp\" (UniqueName: \"kubernetes.io/projected/3f343d03-be6b-4b7f-8313-1ed68abf5d7a-kube-api-access-xjrkp\") pod \"redhat-marketplace-tz749\" (UID: \"3f343d03-be6b-4b7f-8313-1ed68abf5d7a\") " pod="openshift-marketplace/redhat-marketplace-tz749" Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.312352 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f343d03-be6b-4b7f-8313-1ed68abf5d7a-catalog-content\") pod \"redhat-marketplace-tz749\" (UID: \"3f343d03-be6b-4b7f-8313-1ed68abf5d7a\") " pod="openshift-marketplace/redhat-marketplace-tz749" Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.413377 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f343d03-be6b-4b7f-8313-1ed68abf5d7a-catalog-content\") pod \"redhat-marketplace-tz749\" (UID: \"3f343d03-be6b-4b7f-8313-1ed68abf5d7a\") " pod="openshift-marketplace/redhat-marketplace-tz749" Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 
Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.312271 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f343d03-be6b-4b7f-8313-1ed68abf5d7a-utilities\") pod \"redhat-marketplace-tz749\" (UID: \"3f343d03-be6b-4b7f-8313-1ed68abf5d7a\") " pod="openshift-marketplace/redhat-marketplace-tz749" Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.312312 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjrkp\" (UniqueName: \"kubernetes.io/projected/3f343d03-be6b-4b7f-8313-1ed68abf5d7a-kube-api-access-xjrkp\") pod \"redhat-marketplace-tz749\" (UID: \"3f343d03-be6b-4b7f-8313-1ed68abf5d7a\") " pod="openshift-marketplace/redhat-marketplace-tz749" Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.312352 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f343d03-be6b-4b7f-8313-1ed68abf5d7a-catalog-content\") pod \"redhat-marketplace-tz749\" (UID: \"3f343d03-be6b-4b7f-8313-1ed68abf5d7a\") " pod="openshift-marketplace/redhat-marketplace-tz749" Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.413377 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f343d03-be6b-4b7f-8313-1ed68abf5d7a-catalog-content\") pod \"redhat-marketplace-tz749\" (UID: \"3f343d03-be6b-4b7f-8313-1ed68abf5d7a\") " pod="openshift-marketplace/redhat-marketplace-tz749" Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.413647 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f343d03-be6b-4b7f-8313-1ed68abf5d7a-utilities\") pod \"redhat-marketplace-tz749\" (UID: \"3f343d03-be6b-4b7f-8313-1ed68abf5d7a\") " pod="openshift-marketplace/redhat-marketplace-tz749" Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.413672 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjrkp\" (UniqueName: \"kubernetes.io/projected/3f343d03-be6b-4b7f-8313-1ed68abf5d7a-kube-api-access-xjrkp\") pod \"redhat-marketplace-tz749\" (UID: \"3f343d03-be6b-4b7f-8313-1ed68abf5d7a\") " pod="openshift-marketplace/redhat-marketplace-tz749" Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.413783 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f343d03-be6b-4b7f-8313-1ed68abf5d7a-catalog-content\") pod \"redhat-marketplace-tz749\" (UID: \"3f343d03-be6b-4b7f-8313-1ed68abf5d7a\") " pod="openshift-marketplace/redhat-marketplace-tz749" Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.413996 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f343d03-be6b-4b7f-8313-1ed68abf5d7a-utilities\") pod \"redhat-marketplace-tz749\" (UID: \"3f343d03-be6b-4b7f-8313-1ed68abf5d7a\") " pod="openshift-marketplace/redhat-marketplace-tz749" Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.427292 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjrkp\" (UniqueName: \"kubernetes.io/projected/3f343d03-be6b-4b7f-8313-1ed68abf5d7a-kube-api-access-xjrkp\") pod \"redhat-marketplace-tz749\" (UID: \"3f343d03-be6b-4b7f-8313-1ed68abf5d7a\") " pod="openshift-marketplace/redhat-marketplace-tz749" Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.464283 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tz749" Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.628852 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hzq2t"] Nov 23 06:48:12 crc kubenswrapper[4681]: I1123 06:48:12.672751 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tz749"] Nov 23 06:48:12 crc kubenswrapper[4681]: W1123 06:48:12.683689 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f343d03_be6b_4b7f_8313_1ed68abf5d7a.slice/crio-5656c74b2a0b76e6707050f6dcfd11bd5c8fe6e3ead1da69998e9f05650f359d WatchSource:0}: Error finding container 5656c74b2a0b76e6707050f6dcfd11bd5c8fe6e3ead1da69998e9f05650f359d: Status 404 returned error can't find the container with id 5656c74b2a0b76e6707050f6dcfd11bd5c8fe6e3ead1da69998e9f05650f359d Nov 23 06:48:13 crc kubenswrapper[4681]: I1123 06:48:13.285130 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xs7np" event={"ID":"a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39","Type":"ContainerStarted","Data":"faa3be77291c7a00e9a80e713312de5e94f13e6ae8396f0dec5ce80b7c857576"} Nov 23 06:48:13 crc kubenswrapper[4681]: I1123 06:48:13.286722 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pf4pb" event={"ID":"02385b8b-028a-4af7-b56d-8c816d80a3b6","Type":"ContainerStarted","Data":"4e49f605ca000db0fce245d8a36314b8e35d47658e610932ed7268a21dff1c6f"} Nov 23 06:48:13 crc kubenswrapper[4681]: I1123 06:48:13.288876 4681 generic.go:334] "Generic (PLEG): container finished" podID="9fb2e180-6c70-447e-9c60-f62844cf5779" containerID="89599e71bf1b86dc105dde2b3b01edca27c2695d8606225e6dc3d63686128c6e" exitCode=0 Nov 23 06:48:13 crc kubenswrapper[4681]: I1123 06:48:13.288956 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hzq2t" event={"ID":"9fb2e180-6c70-447e-9c60-f62844cf5779","Type":"ContainerDied","Data":"89599e71bf1b86dc105dde2b3b01edca27c2695d8606225e6dc3d63686128c6e"} Nov 23 06:48:13 crc kubenswrapper[4681]: I1123 06:48:13.288987 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hzq2t" event={"ID":"9fb2e180-6c70-447e-9c60-f62844cf5779","Type":"ContainerStarted","Data":"3d8c1e86b1a5fbe6127c672c4b82f9e0fb0795c3b6772ad7f53337b03c00960e"} Nov 23 06:48:13 crc kubenswrapper[4681]: I1123 06:48:13.292363 4681 generic.go:334] "Generic (PLEG): container finished" podID="539dc58c-e752-43c8-bdef-af87528b76f3" containerID="632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25" exitCode=0 Nov 23 06:48:13 crc kubenswrapper[4681]: I1123 06:48:13.292426 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerDied","Data":"632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25"} Nov 23 06:48:13 crc kubenswrapper[4681]: I1123 06:48:13.292445 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerStarted","Data":"2de53e8387551d77fba4dfb5cb5ce0f311e59b152a70840563ac4923aa86b283"} Nov 23 06:48:13 crc kubenswrapper[4681]: I1123 06:48:13.293998 4681 generic.go:334] "Generic (PLEG): container finished" 
podID="3f343d03-be6b-4b7f-8313-1ed68abf5d7a" containerID="6637b38be8cf73bea9cceae235ad004a2f130ae3712fac221c339d6e002708a3" exitCode=0 Nov 23 06:48:13 crc kubenswrapper[4681]: I1123 06:48:13.294036 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tz749" event={"ID":"3f343d03-be6b-4b7f-8313-1ed68abf5d7a","Type":"ContainerDied","Data":"6637b38be8cf73bea9cceae235ad004a2f130ae3712fac221c339d6e002708a3"} Nov 23 06:48:13 crc kubenswrapper[4681]: I1123 06:48:13.294065 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tz749" event={"ID":"3f343d03-be6b-4b7f-8313-1ed68abf5d7a","Type":"ContainerStarted","Data":"5656c74b2a0b76e6707050f6dcfd11bd5c8fe6e3ead1da69998e9f05650f359d"} Nov 23 06:48:13 crc kubenswrapper[4681]: I1123 06:48:13.315013 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pf4pb" podStartSLOduration=2.772119238 podStartE2EDuration="4.315004154s" podCreationTimestamp="2025-11-23 06:48:09 +0000 UTC" firstStartedPulling="2025-11-23 06:48:11.266027378 +0000 UTC m=+228.335536615" lastFinishedPulling="2025-11-23 06:48:12.808912295 +0000 UTC m=+229.878421531" observedRunningTime="2025-11-23 06:48:13.311676755 +0000 UTC m=+230.381185993" watchObservedRunningTime="2025-11-23 06:48:13.315004154 +0000 UTC m=+230.384513392" Nov 23 06:48:13 crc kubenswrapper[4681]: I1123 06:48:13.332338 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xs7np" podStartSLOduration=2.724554952 podStartE2EDuration="4.332331004s" podCreationTimestamp="2025-11-23 06:48:09 +0000 UTC" firstStartedPulling="2025-11-23 06:48:11.260661104 +0000 UTC m=+228.330170330" lastFinishedPulling="2025-11-23 06:48:12.868437145 +0000 UTC m=+229.937946382" observedRunningTime="2025-11-23 06:48:13.329085673 +0000 UTC m=+230.398594911" watchObservedRunningTime="2025-11-23 06:48:13.332331004 +0000 UTC m=+230.401840242" Nov 23 06:48:15 crc kubenswrapper[4681]: I1123 06:48:15.303276 4681 generic.go:334] "Generic (PLEG): container finished" podID="3f343d03-be6b-4b7f-8313-1ed68abf5d7a" containerID="23694004abe2e8cdb1711489af561c3057d230058afdaa546e8c6e710342f084" exitCode=0 Nov 23 06:48:15 crc kubenswrapper[4681]: I1123 06:48:15.303499 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tz749" event={"ID":"3f343d03-be6b-4b7f-8313-1ed68abf5d7a","Type":"ContainerDied","Data":"23694004abe2e8cdb1711489af561c3057d230058afdaa546e8c6e710342f084"} Nov 23 06:48:15 crc kubenswrapper[4681]: I1123 06:48:15.305991 4681 generic.go:334] "Generic (PLEG): container finished" podID="9fb2e180-6c70-447e-9c60-f62844cf5779" containerID="0bb42bdd039276a14ff2e957c247ba3529467df2938163201415a7544fca9833" exitCode=0 Nov 23 06:48:15 crc kubenswrapper[4681]: I1123 06:48:15.306030 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hzq2t" event={"ID":"9fb2e180-6c70-447e-9c60-f62844cf5779","Type":"ContainerDied","Data":"0bb42bdd039276a14ff2e957c247ba3529467df2938163201415a7544fca9833"} Nov 23 06:48:16 crc kubenswrapper[4681]: I1123 06:48:16.313011 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tz749" event={"ID":"3f343d03-be6b-4b7f-8313-1ed68abf5d7a","Type":"ContainerStarted","Data":"6fd8a89f24a47ce341ab3dc764f7487e802a90b42a6c6f6c78377e31c18ba5ab"} Nov 23 06:48:16 crc kubenswrapper[4681]: 
I1123 06:48:16.314768 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hzq2t" event={"ID":"9fb2e180-6c70-447e-9c60-f62844cf5779","Type":"ContainerStarted","Data":"2240a6fda275bc6f0f74753a43fe012a0f753cc0c4a6d640ef1e1c91fbc49838"} Nov 23 06:48:16 crc kubenswrapper[4681]: I1123 06:48:16.326701 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tz749" podStartSLOduration=1.76238641 podStartE2EDuration="4.326692037s" podCreationTimestamp="2025-11-23 06:48:12 +0000 UTC" firstStartedPulling="2025-11-23 06:48:13.294864137 +0000 UTC m=+230.364373374" lastFinishedPulling="2025-11-23 06:48:15.859169764 +0000 UTC m=+232.928679001" observedRunningTime="2025-11-23 06:48:16.32340141 +0000 UTC m=+233.392910647" watchObservedRunningTime="2025-11-23 06:48:16.326692037 +0000 UTC m=+233.396201275" Nov 23 06:48:19 crc kubenswrapper[4681]: I1123 06:48:19.879490 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pf4pb" Nov 23 06:48:19 crc kubenswrapper[4681]: I1123 06:48:19.879904 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pf4pb" Nov 23 06:48:19 crc kubenswrapper[4681]: I1123 06:48:19.906764 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pf4pb" Nov 23 06:48:19 crc kubenswrapper[4681]: I1123 06:48:19.924634 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hzq2t" podStartSLOduration=6.3711377 podStartE2EDuration="8.924597931s" podCreationTimestamp="2025-11-23 06:48:11 +0000 UTC" firstStartedPulling="2025-11-23 06:48:13.291143797 +0000 UTC m=+230.360653035" lastFinishedPulling="2025-11-23 06:48:15.844604029 +0000 UTC m=+232.914113266" observedRunningTime="2025-11-23 06:48:16.340777715 +0000 UTC m=+233.410286953" watchObservedRunningTime="2025-11-23 06:48:19.924597931 +0000 UTC m=+236.994107168" Nov 23 06:48:20 crc kubenswrapper[4681]: I1123 06:48:20.073842 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xs7np" Nov 23 06:48:20 crc kubenswrapper[4681]: I1123 06:48:20.073880 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xs7np" Nov 23 06:48:20 crc kubenswrapper[4681]: I1123 06:48:20.100075 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xs7np" Nov 23 06:48:20 crc kubenswrapper[4681]: I1123 06:48:20.356001 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xs7np" Nov 23 06:48:20 crc kubenswrapper[4681]: I1123 06:48:20.357820 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pf4pb" Nov 23 06:48:22 crc kubenswrapper[4681]: I1123 06:48:22.290746 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hzq2t" Nov 23 06:48:22 crc kubenswrapper[4681]: I1123 06:48:22.292145 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hzq2t" Nov 23 06:48:22 crc kubenswrapper[4681]: I1123 06:48:22.355095 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
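[Editor's note] The "Observed pod startup duration" entries above fit a simple relationship: podStartSLOduration equals podStartE2EDuration minus the image-pull window (lastFinishedPulling - firstStartedPulling). A minimal sketch in Go that re-derives the redhat-marketplace-tz749 numbers from the timestamps copied out of the log (the parse layout is an assumption about how these Go-formatted times round-trip):

    package main

    import (
        "fmt"
        "time"
    )

    // layout matches timestamps like "2025-11-23 06:48:13.294864137 +0000 UTC".
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-11-23 06:48:12 +0000 UTC")   // podCreationTimestamp
        firstPull := mustParse("2025-11-23 06:48:13.294864137 +0000 UTC")
        lastPull := mustParse("2025-11-23 06:48:15.859169764 +0000 UTC")
        observed := mustParse("2025-11-23 06:48:16.326692037 +0000 UTC") // watchObservedRunningTime

        e2e := observed.Sub(created)         // podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // pull time excluded from the SLO figure

        fmt.Println(e2e) // 4.326692037s, as logged
        fmt.Println(slo) // 1.76238641s, matching podStartSLOduration
    }

Whether the tracker computes it exactly this way is an inference from the numbers, but every latency entry in this section satisfies it to within nanosecond rounding; the residual differences on some entries suggest the subtraction actually uses the monotonic readings (the m=+... values) rather than the wall-clock strings.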
status="started" pod="openshift-marketplace/redhat-operators-hzq2t" Nov 23 06:48:22 crc kubenswrapper[4681]: I1123 06:48:22.408210 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hzq2t" Nov 23 06:48:22 crc kubenswrapper[4681]: I1123 06:48:22.464691 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tz749" Nov 23 06:48:22 crc kubenswrapper[4681]: I1123 06:48:22.465249 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tz749" Nov 23 06:48:22 crc kubenswrapper[4681]: I1123 06:48:22.511651 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tz749" Nov 23 06:48:23 crc kubenswrapper[4681]: I1123 06:48:23.369937 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tz749" Nov 23 06:49:23 crc kubenswrapper[4681]: I1123 06:49:23.131242 4681 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Nov 23 06:50:12 crc kubenswrapper[4681]: I1123 06:50:12.295537 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 06:50:12 crc kubenswrapper[4681]: I1123 06:50:12.296292 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 06:50:42 crc kubenswrapper[4681]: I1123 06:50:42.295745 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 06:50:42 crc kubenswrapper[4681]: I1123 06:50:42.297211 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 06:51:01 crc kubenswrapper[4681]: I1123 06:51:01.571664 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-t9qdr"] Nov 23 06:51:01 crc kubenswrapper[4681]: I1123 06:51:01.577569 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-t9qdr" Nov 23 06:51:01 crc kubenswrapper[4681]: I1123 06:51:01.597401 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-t9qdr"] Nov 23 06:51:01 crc kubenswrapper[4681]: I1123 06:51:01.732530 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-t9qdr\" (UID: \"0f0b835c-426c-4a0d-bcd7-17097cadf0a8\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9qdr" Nov 23 06:51:01 crc kubenswrapper[4681]: I1123 06:51:01.732601 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f0b835c-426c-4a0d-bcd7-17097cadf0a8-trusted-ca\") pod \"image-registry-66df7c8f76-t9qdr\" (UID: \"0f0b835c-426c-4a0d-bcd7-17097cadf0a8\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9qdr" Nov 23 06:51:01 crc kubenswrapper[4681]: I1123 06:51:01.732676 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0f0b835c-426c-4a0d-bcd7-17097cadf0a8-ca-trust-extracted\") pod \"image-registry-66df7c8f76-t9qdr\" (UID: \"0f0b835c-426c-4a0d-bcd7-17097cadf0a8\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9qdr" Nov 23 06:51:01 crc kubenswrapper[4681]: I1123 06:51:01.732758 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0f0b835c-426c-4a0d-bcd7-17097cadf0a8-installation-pull-secrets\") pod \"image-registry-66df7c8f76-t9qdr\" (UID: \"0f0b835c-426c-4a0d-bcd7-17097cadf0a8\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9qdr" Nov 23 06:51:01 crc kubenswrapper[4681]: I1123 06:51:01.732839 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0f0b835c-426c-4a0d-bcd7-17097cadf0a8-bound-sa-token\") pod \"image-registry-66df7c8f76-t9qdr\" (UID: \"0f0b835c-426c-4a0d-bcd7-17097cadf0a8\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9qdr" Nov 23 06:51:01 crc kubenswrapper[4681]: I1123 06:51:01.732893 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0f0b835c-426c-4a0d-bcd7-17097cadf0a8-registry-tls\") pod \"image-registry-66df7c8f76-t9qdr\" (UID: \"0f0b835c-426c-4a0d-bcd7-17097cadf0a8\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9qdr" Nov 23 06:51:01 crc kubenswrapper[4681]: I1123 06:51:01.732968 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcf5x\" (UniqueName: \"kubernetes.io/projected/0f0b835c-426c-4a0d-bcd7-17097cadf0a8-kube-api-access-jcf5x\") pod \"image-registry-66df7c8f76-t9qdr\" (UID: \"0f0b835c-426c-4a0d-bcd7-17097cadf0a8\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9qdr" Nov 23 06:51:01 crc kubenswrapper[4681]: I1123 06:51:01.733003 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/0f0b835c-426c-4a0d-bcd7-17097cadf0a8-registry-certificates\") pod \"image-registry-66df7c8f76-t9qdr\" (UID: \"0f0b835c-426c-4a0d-bcd7-17097cadf0a8\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9qdr" Nov 23 06:51:01 crc kubenswrapper[4681]: I1123 06:51:01.764991 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-t9qdr\" (UID: \"0f0b835c-426c-4a0d-bcd7-17097cadf0a8\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9qdr" Nov 23 06:51:01 crc kubenswrapper[4681]: I1123 06:51:01.834500 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f0b835c-426c-4a0d-bcd7-17097cadf0a8-trusted-ca\") pod \"image-registry-66df7c8f76-t9qdr\" (UID: \"0f0b835c-426c-4a0d-bcd7-17097cadf0a8\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9qdr" Nov 23 06:51:01 crc kubenswrapper[4681]: I1123 06:51:01.834736 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0f0b835c-426c-4a0d-bcd7-17097cadf0a8-ca-trust-extracted\") pod \"image-registry-66df7c8f76-t9qdr\" (UID: \"0f0b835c-426c-4a0d-bcd7-17097cadf0a8\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9qdr" Nov 23 06:51:01 crc kubenswrapper[4681]: I1123 06:51:01.834863 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0f0b835c-426c-4a0d-bcd7-17097cadf0a8-installation-pull-secrets\") pod \"image-registry-66df7c8f76-t9qdr\" (UID: \"0f0b835c-426c-4a0d-bcd7-17097cadf0a8\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9qdr" Nov 23 06:51:01 crc kubenswrapper[4681]: I1123 06:51:01.835359 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0f0b835c-426c-4a0d-bcd7-17097cadf0a8-ca-trust-extracted\") pod \"image-registry-66df7c8f76-t9qdr\" (UID: \"0f0b835c-426c-4a0d-bcd7-17097cadf0a8\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9qdr" Nov 23 06:51:01 crc kubenswrapper[4681]: I1123 06:51:01.836068 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0f0b835c-426c-4a0d-bcd7-17097cadf0a8-bound-sa-token\") pod \"image-registry-66df7c8f76-t9qdr\" (UID: \"0f0b835c-426c-4a0d-bcd7-17097cadf0a8\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9qdr" Nov 23 06:51:01 crc kubenswrapper[4681]: I1123 06:51:01.836122 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0f0b835c-426c-4a0d-bcd7-17097cadf0a8-registry-tls\") pod \"image-registry-66df7c8f76-t9qdr\" (UID: \"0f0b835c-426c-4a0d-bcd7-17097cadf0a8\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9qdr" Nov 23 06:51:01 crc kubenswrapper[4681]: I1123 06:51:01.836181 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcf5x\" (UniqueName: \"kubernetes.io/projected/0f0b835c-426c-4a0d-bcd7-17097cadf0a8-kube-api-access-jcf5x\") pod \"image-registry-66df7c8f76-t9qdr\" (UID: \"0f0b835c-426c-4a0d-bcd7-17097cadf0a8\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-t9qdr" Nov 23 06:51:01 crc kubenswrapper[4681]: I1123 06:51:01.836217 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0f0b835c-426c-4a0d-bcd7-17097cadf0a8-registry-certificates\") pod \"image-registry-66df7c8f76-t9qdr\" (UID: \"0f0b835c-426c-4a0d-bcd7-17097cadf0a8\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9qdr" Nov 23 06:51:01 crc kubenswrapper[4681]: I1123 06:51:01.836354 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f0b835c-426c-4a0d-bcd7-17097cadf0a8-trusted-ca\") pod \"image-registry-66df7c8f76-t9qdr\" (UID: \"0f0b835c-426c-4a0d-bcd7-17097cadf0a8\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9qdr" Nov 23 06:51:01 crc kubenswrapper[4681]: I1123 06:51:01.838148 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0f0b835c-426c-4a0d-bcd7-17097cadf0a8-registry-certificates\") pod \"image-registry-66df7c8f76-t9qdr\" (UID: \"0f0b835c-426c-4a0d-bcd7-17097cadf0a8\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9qdr" Nov 23 06:51:01 crc kubenswrapper[4681]: I1123 06:51:01.842296 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0f0b835c-426c-4a0d-bcd7-17097cadf0a8-registry-tls\") pod \"image-registry-66df7c8f76-t9qdr\" (UID: \"0f0b835c-426c-4a0d-bcd7-17097cadf0a8\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9qdr" Nov 23 06:51:01 crc kubenswrapper[4681]: I1123 06:51:01.842299 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0f0b835c-426c-4a0d-bcd7-17097cadf0a8-installation-pull-secrets\") pod \"image-registry-66df7c8f76-t9qdr\" (UID: \"0f0b835c-426c-4a0d-bcd7-17097cadf0a8\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9qdr" Nov 23 06:51:01 crc kubenswrapper[4681]: I1123 06:51:01.850232 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0f0b835c-426c-4a0d-bcd7-17097cadf0a8-bound-sa-token\") pod \"image-registry-66df7c8f76-t9qdr\" (UID: \"0f0b835c-426c-4a0d-bcd7-17097cadf0a8\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9qdr" Nov 23 06:51:01 crc kubenswrapper[4681]: I1123 06:51:01.852286 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcf5x\" (UniqueName: \"kubernetes.io/projected/0f0b835c-426c-4a0d-bcd7-17097cadf0a8-kube-api-access-jcf5x\") pod \"image-registry-66df7c8f76-t9qdr\" (UID: \"0f0b835c-426c-4a0d-bcd7-17097cadf0a8\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9qdr" Nov 23 06:51:01 crc kubenswrapper[4681]: I1123 06:51:01.893412 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-t9qdr" Nov 23 06:51:02 crc kubenswrapper[4681]: I1123 06:51:02.050031 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-t9qdr"] Nov 23 06:51:02 crc kubenswrapper[4681]: W1123 06:51:02.057619 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f0b835c_426c_4a0d_bcd7_17097cadf0a8.slice/crio-9c15d1e2189b294f948a0bda8969881abe8b70898d77ca26e87c7e8f5182934c WatchSource:0}: Error finding container 9c15d1e2189b294f948a0bda8969881abe8b70898d77ca26e87c7e8f5182934c: Status 404 returned error can't find the container with id 9c15d1e2189b294f948a0bda8969881abe8b70898d77ca26e87c7e8f5182934c Nov 23 06:51:03 crc kubenswrapper[4681]: I1123 06:51:03.041515 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-t9qdr" event={"ID":"0f0b835c-426c-4a0d-bcd7-17097cadf0a8","Type":"ContainerStarted","Data":"59efe45f0fde35b21d32602509b76d1c5f76b8bd8bf30e50aef88ede6ccb31d8"} Nov 23 06:51:03 crc kubenswrapper[4681]: I1123 06:51:03.042449 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-t9qdr" Nov 23 06:51:03 crc kubenswrapper[4681]: I1123 06:51:03.042565 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-t9qdr" event={"ID":"0f0b835c-426c-4a0d-bcd7-17097cadf0a8","Type":"ContainerStarted","Data":"9c15d1e2189b294f948a0bda8969881abe8b70898d77ca26e87c7e8f5182934c"} Nov 23 06:51:03 crc kubenswrapper[4681]: I1123 06:51:03.061911 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-t9qdr" podStartSLOduration=2.061886476 podStartE2EDuration="2.061886476s" podCreationTimestamp="2025-11-23 06:51:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:51:03.059207231 +0000 UTC m=+400.128716468" watchObservedRunningTime="2025-11-23 06:51:03.061886476 +0000 UTC m=+400.131395712" Nov 23 06:51:12 crc kubenswrapper[4681]: I1123 06:51:12.296060 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 06:51:12 crc kubenswrapper[4681]: I1123 06:51:12.297365 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 06:51:12 crc kubenswrapper[4681]: I1123 06:51:12.297447 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" Nov 23 06:51:12 crc kubenswrapper[4681]: I1123 06:51:12.298157 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2de53e8387551d77fba4dfb5cb5ce0f311e59b152a70840563ac4923aa86b283"} pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 06:51:12 crc kubenswrapper[4681]: I1123 06:51:12.298235 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" containerID="cri-o://2de53e8387551d77fba4dfb5cb5ce0f311e59b152a70840563ac4923aa86b283" gracePeriod=600 Nov 23 06:51:13 crc kubenswrapper[4681]: I1123 06:51:13.082708 4681 generic.go:334] "Generic (PLEG): container finished" podID="539dc58c-e752-43c8-bdef-af87528b76f3" containerID="2de53e8387551d77fba4dfb5cb5ce0f311e59b152a70840563ac4923aa86b283" exitCode=0 Nov 23 06:51:13 crc kubenswrapper[4681]: I1123 06:51:13.082779 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerDied","Data":"2de53e8387551d77fba4dfb5cb5ce0f311e59b152a70840563ac4923aa86b283"} Nov 23 06:51:13 crc kubenswrapper[4681]: I1123 06:51:13.083004 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerStarted","Data":"9fa8fec50b296212aef5b2ad5824bdfb0e0ff8b77199951e5391ad3ba5cad98c"} Nov 23 06:51:13 crc kubenswrapper[4681]: I1123 06:51:13.083029 4681 scope.go:117] "RemoveContainer" containerID="632f45cf73355a1d798a8c282e87abc8cc0e98af80c717ea52de3d0f9a885b25" Nov 23 06:51:21 crc kubenswrapper[4681]: I1123 06:51:21.898073 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-t9qdr" Nov 23 06:51:21 crc kubenswrapper[4681]: I1123 06:51:21.945893 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-c2pf5"] Nov 23 06:51:46 crc kubenswrapper[4681]: I1123 06:51:46.978321 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" podUID="77f5ceda-2966-443e-a939-dd7408e66bdc" containerName="registry" containerID="cri-o://3ee984309fa8ce33e23cdf6fc6b644a32685973fac9472dd105a0d6e45df0b48" gracePeriod=30 Nov 23 06:51:47 crc kubenswrapper[4681]: I1123 06:51:47.262233 4681 generic.go:334] "Generic (PLEG): container finished" podID="77f5ceda-2966-443e-a939-dd7408e66bdc" containerID="3ee984309fa8ce33e23cdf6fc6b644a32685973fac9472dd105a0d6e45df0b48" exitCode=0 Nov 23 06:51:47 crc kubenswrapper[4681]: I1123 06:51:47.262288 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" event={"ID":"77f5ceda-2966-443e-a939-dd7408e66bdc","Type":"ContainerDied","Data":"3ee984309fa8ce33e23cdf6fc6b644a32685973fac9472dd105a0d6e45df0b48"} Nov 23 06:51:47 crc kubenswrapper[4681]: I1123 06:51:47.262321 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" event={"ID":"77f5ceda-2966-443e-a939-dd7408e66bdc","Type":"ContainerDied","Data":"1815ab2337eac67470f4fe2fa16bcb0ac4d2178b9c3dfd0acd5ee4f2a9f6d208"} Nov 23 06:51:47 crc kubenswrapper[4681]: I1123 06:51:47.262336 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1815ab2337eac67470f4fe2fa16bcb0ac4d2178b9c3dfd0acd5ee4f2a9f6d208" Nov 23 06:51:47 crc kubenswrapper[4681]: I1123 06:51:47.279115 4681 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:51:47 crc kubenswrapper[4681]: I1123 06:51:47.469114 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/77f5ceda-2966-443e-a939-dd7408e66bdc-installation-pull-secrets\") pod \"77f5ceda-2966-443e-a939-dd7408e66bdc\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " Nov 23 06:51:47 crc kubenswrapper[4681]: I1123 06:51:47.469550 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/77f5ceda-2966-443e-a939-dd7408e66bdc-ca-trust-extracted\") pod \"77f5ceda-2966-443e-a939-dd7408e66bdc\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " Nov 23 06:51:47 crc kubenswrapper[4681]: I1123 06:51:47.469812 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"77f5ceda-2966-443e-a939-dd7408e66bdc\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " Nov 23 06:51:47 crc kubenswrapper[4681]: I1123 06:51:47.469908 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/77f5ceda-2966-443e-a939-dd7408e66bdc-registry-certificates\") pod \"77f5ceda-2966-443e-a939-dd7408e66bdc\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " Nov 23 06:51:47 crc kubenswrapper[4681]: I1123 06:51:47.470020 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/77f5ceda-2966-443e-a939-dd7408e66bdc-bound-sa-token\") pod \"77f5ceda-2966-443e-a939-dd7408e66bdc\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " Nov 23 06:51:47 crc kubenswrapper[4681]: I1123 06:51:47.470122 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/77f5ceda-2966-443e-a939-dd7408e66bdc-trusted-ca\") pod \"77f5ceda-2966-443e-a939-dd7408e66bdc\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " Nov 23 06:51:47 crc kubenswrapper[4681]: I1123 06:51:47.470234 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/77f5ceda-2966-443e-a939-dd7408e66bdc-registry-tls\") pod \"77f5ceda-2966-443e-a939-dd7408e66bdc\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " Nov 23 06:51:47 crc kubenswrapper[4681]: I1123 06:51:47.470313 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74g84\" (UniqueName: \"kubernetes.io/projected/77f5ceda-2966-443e-a939-dd7408e66bdc-kube-api-access-74g84\") pod \"77f5ceda-2966-443e-a939-dd7408e66bdc\" (UID: \"77f5ceda-2966-443e-a939-dd7408e66bdc\") " Nov 23 06:51:47 crc kubenswrapper[4681]: I1123 06:51:47.470751 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77f5ceda-2966-443e-a939-dd7408e66bdc-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "77f5ceda-2966-443e-a939-dd7408e66bdc" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:51:47 crc kubenswrapper[4681]: I1123 06:51:47.470805 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77f5ceda-2966-443e-a939-dd7408e66bdc-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "77f5ceda-2966-443e-a939-dd7408e66bdc" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:51:47 crc kubenswrapper[4681]: I1123 06:51:47.476620 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77f5ceda-2966-443e-a939-dd7408e66bdc-kube-api-access-74g84" (OuterVolumeSpecName: "kube-api-access-74g84") pod "77f5ceda-2966-443e-a939-dd7408e66bdc" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc"). InnerVolumeSpecName "kube-api-access-74g84". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:51:47 crc kubenswrapper[4681]: I1123 06:51:47.476920 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77f5ceda-2966-443e-a939-dd7408e66bdc-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "77f5ceda-2966-443e-a939-dd7408e66bdc" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:51:47 crc kubenswrapper[4681]: I1123 06:51:47.483573 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77f5ceda-2966-443e-a939-dd7408e66bdc-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "77f5ceda-2966-443e-a939-dd7408e66bdc" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:51:47 crc kubenswrapper[4681]: I1123 06:51:47.484300 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77f5ceda-2966-443e-a939-dd7408e66bdc-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "77f5ceda-2966-443e-a939-dd7408e66bdc" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:51:47 crc kubenswrapper[4681]: I1123 06:51:47.485092 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "77f5ceda-2966-443e-a939-dd7408e66bdc" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 23 06:51:47 crc kubenswrapper[4681]: I1123 06:51:47.486372 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77f5ceda-2966-443e-a939-dd7408e66bdc-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "77f5ceda-2966-443e-a939-dd7408e66bdc" (UID: "77f5ceda-2966-443e-a939-dd7408e66bdc"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:51:47 crc kubenswrapper[4681]: I1123 06:51:47.571886 4681 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/77f5ceda-2966-443e-a939-dd7408e66bdc-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 23 06:51:47 crc kubenswrapper[4681]: I1123 06:51:47.571923 4681 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/77f5ceda-2966-443e-a939-dd7408e66bdc-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:51:47 crc kubenswrapper[4681]: I1123 06:51:47.571933 4681 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/77f5ceda-2966-443e-a939-dd7408e66bdc-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 23 06:51:47 crc kubenswrapper[4681]: I1123 06:51:47.571945 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74g84\" (UniqueName: \"kubernetes.io/projected/77f5ceda-2966-443e-a939-dd7408e66bdc-kube-api-access-74g84\") on node \"crc\" DevicePath \"\"" Nov 23 06:51:47 crc kubenswrapper[4681]: I1123 06:51:47.571964 4681 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/77f5ceda-2966-443e-a939-dd7408e66bdc-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 23 06:51:47 crc kubenswrapper[4681]: I1123 06:51:47.571973 4681 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/77f5ceda-2966-443e-a939-dd7408e66bdc-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 23 06:51:47 crc kubenswrapper[4681]: I1123 06:51:47.571983 4681 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/77f5ceda-2966-443e-a939-dd7408e66bdc-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 23 06:51:48 crc kubenswrapper[4681]: I1123 06:51:48.266312 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-c2pf5" Nov 23 06:51:48 crc kubenswrapper[4681]: I1123 06:51:48.287189 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-c2pf5"] Nov 23 06:51:48 crc kubenswrapper[4681]: I1123 06:51:48.291675 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-c2pf5"] Nov 23 06:51:49 crc kubenswrapper[4681]: I1123 06:51:49.257339 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77f5ceda-2966-443e-a939-dd7408e66bdc" path="/var/lib/kubelet/pods/77f5ceda-2966-443e-a939-dd7408e66bdc/volumes" Nov 23 06:52:35 crc kubenswrapper[4681]: I1123 06:52:35.937942 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-qqpl4"] Nov 23 06:52:35 crc kubenswrapper[4681]: E1123 06:52:35.938732 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77f5ceda-2966-443e-a939-dd7408e66bdc" containerName="registry" Nov 23 06:52:35 crc kubenswrapper[4681]: I1123 06:52:35.938743 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="77f5ceda-2966-443e-a939-dd7408e66bdc" containerName="registry" Nov 23 06:52:35 crc kubenswrapper[4681]: I1123 06:52:35.938825 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="77f5ceda-2966-443e-a939-dd7408e66bdc" containerName="registry" Nov 23 06:52:35 crc kubenswrapper[4681]: I1123 06:52:35.939173 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-qqpl4" Nov 23 06:52:35 crc kubenswrapper[4681]: I1123 06:52:35.941064 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 23 06:52:35 crc kubenswrapper[4681]: I1123 06:52:35.941219 4681 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-4q7l4" Nov 23 06:52:35 crc kubenswrapper[4681]: I1123 06:52:35.945276 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 23 06:52:35 crc kubenswrapper[4681]: I1123 06:52:35.948217 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-bf72g"] Nov 23 06:52:35 crc kubenswrapper[4681]: I1123 06:52:35.948685 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-bf72g" Nov 23 06:52:35 crc kubenswrapper[4681]: I1123 06:52:35.953547 4681 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-wssnr" Nov 23 06:52:35 crc kubenswrapper[4681]: I1123 06:52:35.959547 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-b6n98"] Nov 23 06:52:35 crc kubenswrapper[4681]: I1123 06:52:35.960728 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-b6n98" Nov 23 06:52:35 crc kubenswrapper[4681]: I1123 06:52:35.962660 4681 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-x2jtq" Nov 23 06:52:35 crc kubenswrapper[4681]: I1123 06:52:35.964631 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-qqpl4"] Nov 23 06:52:35 crc kubenswrapper[4681]: I1123 06:52:35.967946 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-bf72g"] Nov 23 06:52:35 crc kubenswrapper[4681]: I1123 06:52:35.978683 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-b6n98"] Nov 23 06:52:36 crc kubenswrapper[4681]: I1123 06:52:36.095638 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpccv\" (UniqueName: \"kubernetes.io/projected/24cadfaf-a947-4710-adfc-70d52cef535c-kube-api-access-fpccv\") pod \"cert-manager-cainjector-7f985d654d-qqpl4\" (UID: \"24cadfaf-a947-4710-adfc-70d52cef535c\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-qqpl4" Nov 23 06:52:36 crc kubenswrapper[4681]: I1123 06:52:36.095715 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b94t2\" (UniqueName: \"kubernetes.io/projected/d7fdbc42-be51-488f-a9b0-96442945a494-kube-api-access-b94t2\") pod \"cert-manager-webhook-5655c58dd6-b6n98\" (UID: \"d7fdbc42-be51-488f-a9b0-96442945a494\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-b6n98" Nov 23 06:52:36 crc kubenswrapper[4681]: I1123 06:52:36.095753 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5gf8\" (UniqueName: \"kubernetes.io/projected/05dbef3e-8d12-4431-922c-38ab64849182-kube-api-access-v5gf8\") pod \"cert-manager-5b446d88c5-bf72g\" (UID: \"05dbef3e-8d12-4431-922c-38ab64849182\") " pod="cert-manager/cert-manager-5b446d88c5-bf72g" Nov 23 06:52:36 crc kubenswrapper[4681]: I1123 06:52:36.197304 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpccv\" (UniqueName: \"kubernetes.io/projected/24cadfaf-a947-4710-adfc-70d52cef535c-kube-api-access-fpccv\") pod \"cert-manager-cainjector-7f985d654d-qqpl4\" (UID: \"24cadfaf-a947-4710-adfc-70d52cef535c\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-qqpl4" Nov 23 06:52:36 crc kubenswrapper[4681]: I1123 06:52:36.197362 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b94t2\" (UniqueName: \"kubernetes.io/projected/d7fdbc42-be51-488f-a9b0-96442945a494-kube-api-access-b94t2\") pod \"cert-manager-webhook-5655c58dd6-b6n98\" (UID: \"d7fdbc42-be51-488f-a9b0-96442945a494\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-b6n98" Nov 23 06:52:36 crc kubenswrapper[4681]: I1123 06:52:36.197400 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5gf8\" (UniqueName: \"kubernetes.io/projected/05dbef3e-8d12-4431-922c-38ab64849182-kube-api-access-v5gf8\") pod \"cert-manager-5b446d88c5-bf72g\" (UID: \"05dbef3e-8d12-4431-922c-38ab64849182\") " pod="cert-manager/cert-manager-5b446d88c5-bf72g" Nov 23 06:52:36 crc kubenswrapper[4681]: I1123 06:52:36.215048 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5gf8\" (UniqueName: 
\"kubernetes.io/projected/05dbef3e-8d12-4431-922c-38ab64849182-kube-api-access-v5gf8\") pod \"cert-manager-5b446d88c5-bf72g\" (UID: \"05dbef3e-8d12-4431-922c-38ab64849182\") " pod="cert-manager/cert-manager-5b446d88c5-bf72g" Nov 23 06:52:36 crc kubenswrapper[4681]: I1123 06:52:36.215516 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b94t2\" (UniqueName: \"kubernetes.io/projected/d7fdbc42-be51-488f-a9b0-96442945a494-kube-api-access-b94t2\") pod \"cert-manager-webhook-5655c58dd6-b6n98\" (UID: \"d7fdbc42-be51-488f-a9b0-96442945a494\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-b6n98" Nov 23 06:52:36 crc kubenswrapper[4681]: I1123 06:52:36.216422 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpccv\" (UniqueName: \"kubernetes.io/projected/24cadfaf-a947-4710-adfc-70d52cef535c-kube-api-access-fpccv\") pod \"cert-manager-cainjector-7f985d654d-qqpl4\" (UID: \"24cadfaf-a947-4710-adfc-70d52cef535c\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-qqpl4" Nov 23 06:52:36 crc kubenswrapper[4681]: I1123 06:52:36.256510 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-qqpl4" Nov 23 06:52:36 crc kubenswrapper[4681]: I1123 06:52:36.263080 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-bf72g" Nov 23 06:52:36 crc kubenswrapper[4681]: I1123 06:52:36.272356 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-b6n98" Nov 23 06:52:36 crc kubenswrapper[4681]: I1123 06:52:36.442338 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-qqpl4"] Nov 23 06:52:36 crc kubenswrapper[4681]: I1123 06:52:36.456744 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 06:52:36 crc kubenswrapper[4681]: I1123 06:52:36.509500 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-qqpl4" event={"ID":"24cadfaf-a947-4710-adfc-70d52cef535c","Type":"ContainerStarted","Data":"dabc7af20fc61fc562e3db25c00e40017980c2982a0c91ab1994fa8e0c6b0cf9"} Nov 23 06:52:36 crc kubenswrapper[4681]: I1123 06:52:36.673498 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-b6n98"] Nov 23 06:52:36 crc kubenswrapper[4681]: W1123 06:52:36.678267 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7fdbc42_be51_488f_a9b0_96442945a494.slice/crio-396bf4b30ad82875a286ef9640cb36f6a4f3a840c3c56f620af5d795c64ae2a6 WatchSource:0}: Error finding container 396bf4b30ad82875a286ef9640cb36f6a4f3a840c3c56f620af5d795c64ae2a6: Status 404 returned error can't find the container with id 396bf4b30ad82875a286ef9640cb36f6a4f3a840c3c56f620af5d795c64ae2a6 Nov 23 06:52:36 crc kubenswrapper[4681]: I1123 06:52:36.679850 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-bf72g"] Nov 23 06:52:36 crc kubenswrapper[4681]: W1123 06:52:36.685444 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05dbef3e_8d12_4431_922c_38ab64849182.slice/crio-1e334c2b30323991d7bd4ce4c68e0c9567a50aef91b32f3bf4973becad40620c WatchSource:0}: Error finding container 
1e334c2b30323991d7bd4ce4c68e0c9567a50aef91b32f3bf4973becad40620c: Status 404 returned error can't find the container with id 1e334c2b30323991d7bd4ce4c68e0c9567a50aef91b32f3bf4973becad40620c Nov 23 06:52:37 crc kubenswrapper[4681]: I1123 06:52:37.514635 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-bf72g" event={"ID":"05dbef3e-8d12-4431-922c-38ab64849182","Type":"ContainerStarted","Data":"1e334c2b30323991d7bd4ce4c68e0c9567a50aef91b32f3bf4973becad40620c"} Nov 23 06:52:37 crc kubenswrapper[4681]: I1123 06:52:37.515284 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-b6n98" event={"ID":"d7fdbc42-be51-488f-a9b0-96442945a494","Type":"ContainerStarted","Data":"396bf4b30ad82875a286ef9640cb36f6a4f3a840c3c56f620af5d795c64ae2a6"} Nov 23 06:52:38 crc kubenswrapper[4681]: I1123 06:52:38.523325 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-qqpl4" event={"ID":"24cadfaf-a947-4710-adfc-70d52cef535c","Type":"ContainerStarted","Data":"086e315eff654ae81bda06550faf03afb65d1934fd3e576633f35ec0a9655fec"} Nov 23 06:52:38 crc kubenswrapper[4681]: I1123 06:52:38.534415 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-qqpl4" podStartSLOduration=1.9733950999999998 podStartE2EDuration="3.53439818s" podCreationTimestamp="2025-11-23 06:52:35 +0000 UTC" firstStartedPulling="2025-11-23 06:52:36.455374802 +0000 UTC m=+493.524884038" lastFinishedPulling="2025-11-23 06:52:38.01637788 +0000 UTC m=+495.085887118" observedRunningTime="2025-11-23 06:52:38.53340117 +0000 UTC m=+495.602910406" watchObservedRunningTime="2025-11-23 06:52:38.53439818 +0000 UTC m=+495.603907417" Nov 23 06:52:40 crc kubenswrapper[4681]: I1123 06:52:40.540302 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-bf72g" event={"ID":"05dbef3e-8d12-4431-922c-38ab64849182","Type":"ContainerStarted","Data":"9ddb5184bec1cf64aec656816bc39b6cb78268b27a45ad71961933538a071fce"} Nov 23 06:52:40 crc kubenswrapper[4681]: I1123 06:52:40.543432 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-b6n98" event={"ID":"d7fdbc42-be51-488f-a9b0-96442945a494","Type":"ContainerStarted","Data":"bb64041ad7d5cece2799727379cee496328b43d656095b4c71867aeb705e6384"} Nov 23 06:52:40 crc kubenswrapper[4681]: I1123 06:52:40.543584 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-b6n98" Nov 23 06:52:40 crc kubenswrapper[4681]: I1123 06:52:40.554529 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-bf72g" podStartSLOduration=2.784669274 podStartE2EDuration="5.554508647s" podCreationTimestamp="2025-11-23 06:52:35 +0000 UTC" firstStartedPulling="2025-11-23 06:52:36.687980471 +0000 UTC m=+493.757489709" lastFinishedPulling="2025-11-23 06:52:39.457819845 +0000 UTC m=+496.527329082" observedRunningTime="2025-11-23 06:52:40.553173259 +0000 UTC m=+497.622682496" watchObservedRunningTime="2025-11-23 06:52:40.554508647 +0000 UTC m=+497.624017883" Nov 23 06:52:40 crc kubenswrapper[4681]: I1123 06:52:40.564985 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-b6n98" podStartSLOduration=2.798797083 podStartE2EDuration="5.564977037s" 
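[Editor's note] The two W "Failed to process watch event ... Status 404" entries above appear to be a transient race between cgroup creation and container registration; both containers start normally in the PLEG events that follow. With three pods starting at once, the interleaved "SyncLoop (PLEG)" lines are easier to read grouped by pod; a hedged sketch (the regex is tuned only to the line shape shown here):

    package main

    import (
        "fmt"
        "regexp"
    )

    // plegRE pulls pod name, event type, and the container/sandbox ID out of
    // "SyncLoop (PLEG): event for pod" lines like the ones above.
    var plegRE = regexp.MustCompile(`pod="([^"]+)" event=.*"Type":"(\w+)","Data":"([0-9a-f]+)"`)

    func main() {
        lines := []string{
            `... "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-bf72g" event={"ID":"05dbef3e-8d12-4431-922c-38ab64849182","Type":"ContainerStarted","Data":"1e334c2b30323991d7bd4ce4c68e0c9567a50aef91b32f3bf4973becad40620c"}`,
            `... "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-bf72g" event={"ID":"05dbef3e-8d12-4431-922c-38ab64849182","Type":"ContainerStarted","Data":"9ddb5184bec1cf64aec656816bc39b6cb78268b27a45ad71961933538a071fce"}`,
        }
        events := map[string][]string{}
        for _, l := range lines {
            if m := plegRE.FindStringSubmatch(l); m != nil {
                events[m[1]] = append(events[m[1]], fmt.Sprintf("%s %s", m[2], m[3][:12]))
            }
        }
        for pod, evs := range events {
            fmt.Println(pod)
            for _, e := range evs {
                fmt.Println("  ", e)
            }
        }
    }

The first ContainerStarted Data for a pod is typically its sandbox ID (here it matches the crio-... cgroup in the 404 warning), with the application container following; that reading is an inference from the ordering in this log.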
podCreationTimestamp="2025-11-23 06:52:35 +0000 UTC" firstStartedPulling="2025-11-23 06:52:36.687190521 +0000 UTC m=+493.756699758" lastFinishedPulling="2025-11-23 06:52:39.453370485 +0000 UTC m=+496.522879712" observedRunningTime="2025-11-23 06:52:40.562791506 +0000 UTC m=+497.632300742" watchObservedRunningTime="2025-11-23 06:52:40.564977037 +0000 UTC m=+497.634486263" Nov 23 06:52:46 crc kubenswrapper[4681]: I1123 06:52:46.275635 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-b6n98" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.162343 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-l6bqb"] Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.162748 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="ovn-controller" containerID="cri-o://5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9" gracePeriod=30 Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.162798 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="nbdb" containerID="cri-o://9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d" gracePeriod=30 Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.162881 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="northd" containerID="cri-o://14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4" gracePeriod=30 Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.162932 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="sbdb" containerID="cri-o://8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7" gracePeriod=30 Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.162935 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="ovn-acl-logging" containerID="cri-o://3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13" gracePeriod=30 Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.163078 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="kube-rbac-proxy-node" containerID="cri-o://2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101" gracePeriod=30 Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.163105 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884" gracePeriod=30 Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.204520 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" 
containerName="ovnkube-controller" containerID="cri-o://d3ee7b1cd00bbc909ca76a6e898c08dea60471e186c3b7e31f59c07fb0b7bebf" gracePeriod=30 Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.464025 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l6bqb_1abfb530-b7ac-4724-8e43-d87ef92f1949/ovnkube-controller/3.log" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.466548 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l6bqb_1abfb530-b7ac-4724-8e43-d87ef92f1949/ovn-acl-logging/0.log" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.467099 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l6bqb_1abfb530-b7ac-4724-8e43-d87ef92f1949/ovn-controller/0.log" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.467558 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.512945 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-btj59"] Nov 23 06:52:47 crc kubenswrapper[4681]: E1123 06:52:47.513413 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="nbdb" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.513508 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="nbdb" Nov 23 06:52:47 crc kubenswrapper[4681]: E1123 06:52:47.513563 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="ovnkube-controller" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.513612 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="ovnkube-controller" Nov 23 06:52:47 crc kubenswrapper[4681]: E1123 06:52:47.513658 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="kube-rbac-proxy-ovn-metrics" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.513696 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="kube-rbac-proxy-ovn-metrics" Nov 23 06:52:47 crc kubenswrapper[4681]: E1123 06:52:47.513742 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="sbdb" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.513779 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="sbdb" Nov 23 06:52:47 crc kubenswrapper[4681]: E1123 06:52:47.513819 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="ovnkube-controller" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.513860 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="ovnkube-controller" Nov 23 06:52:47 crc kubenswrapper[4681]: E1123 06:52:47.513911 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="ovn-controller" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.513953 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" 
containerName="ovn-controller" Nov 23 06:52:47 crc kubenswrapper[4681]: E1123 06:52:47.513995 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="ovnkube-controller" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.514035 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="ovnkube-controller" Nov 23 06:52:47 crc kubenswrapper[4681]: E1123 06:52:47.514080 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="ovnkube-controller" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.514124 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="ovnkube-controller" Nov 23 06:52:47 crc kubenswrapper[4681]: E1123 06:52:47.514175 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="kube-rbac-proxy-node" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.514233 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="kube-rbac-proxy-node" Nov 23 06:52:47 crc kubenswrapper[4681]: E1123 06:52:47.514282 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="northd" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.514325 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="northd" Nov 23 06:52:47 crc kubenswrapper[4681]: E1123 06:52:47.514371 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="kubecfg-setup" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.514429 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="kubecfg-setup" Nov 23 06:52:47 crc kubenswrapper[4681]: E1123 06:52:47.514490 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="ovn-acl-logging" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.514539 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="ovn-acl-logging" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.514717 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="ovnkube-controller" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.514773 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="ovnkube-controller" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.514819 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="nbdb" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.514966 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="northd" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.515014 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="kube-rbac-proxy-node" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.515056 4681 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="kube-rbac-proxy-ovn-metrics" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.515098 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="sbdb" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.515141 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="ovn-controller" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.515182 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="ovn-acl-logging" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.515225 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="ovnkube-controller" Nov 23 06:52:47 crc kubenswrapper[4681]: E1123 06:52:47.515401 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="ovnkube-controller" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.515452 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="ovnkube-controller" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.515650 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="ovnkube-controller" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.515900 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerName="ovnkube-controller" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.517875 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.528684 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-log-socket\") pod \"1abfb530-b7ac-4724-8e43-d87ef92f1949\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.528728 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-run-ovn\") pod \"1abfb530-b7ac-4724-8e43-d87ef92f1949\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.528760 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1abfb530-b7ac-4724-8e43-d87ef92f1949-ovn-node-metrics-cert\") pod \"1abfb530-b7ac-4724-8e43-d87ef92f1949\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.528787 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1abfb530-b7ac-4724-8e43-d87ef92f1949-ovnkube-config\") pod \"1abfb530-b7ac-4724-8e43-d87ef92f1949\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.528807 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-run-systemd\") pod \"1abfb530-b7ac-4724-8e43-d87ef92f1949\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.528823 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1abfb530-b7ac-4724-8e43-d87ef92f1949-ovnkube-script-lib\") pod \"1abfb530-b7ac-4724-8e43-d87ef92f1949\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.528849 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-slash\") pod \"1abfb530-b7ac-4724-8e43-d87ef92f1949\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.528868 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1abfb530-b7ac-4724-8e43-d87ef92f1949-env-overrides\") pod \"1abfb530-b7ac-4724-8e43-d87ef92f1949\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.528956 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-host-run-netns\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.528989 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-run-openvswitch\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.529005 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-host-cni-netd\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.529043 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-node-log\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.529064 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-systemd-units\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.529084 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-host-slash\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.529106 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-var-lib-openvswitch\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.529124 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-log-socket\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.529150 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d584daff-bd10-470a-9f2c-00d09ecded42-ovnkube-script-lib\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.529171 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-host-cni-bin\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.529190 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-run-systemd\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.529210 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-run-ovn\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.529231 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d584daff-bd10-470a-9f2c-00d09ecded42-ovnkube-config\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.529252 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-host-kubelet\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.529284 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.529311 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d584daff-bd10-470a-9f2c-00d09ecded42-ovn-node-metrics-cert\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.529339 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh8xl\" (UniqueName: \"kubernetes.io/projected/d584daff-bd10-470a-9f2c-00d09ecded42-kube-api-access-dh8xl\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.529356 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-host-run-ovn-kubernetes\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.529392 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-etc-openvswitch\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.529435 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d584daff-bd10-470a-9f2c-00d09ecded42-env-overrides\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.528801 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-log-socket" (OuterVolumeSpecName: "log-socket") pod "1abfb530-b7ac-4724-8e43-d87ef92f1949" (UID: "1abfb530-b7ac-4724-8e43-d87ef92f1949"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.529297 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1abfb530-b7ac-4724-8e43-d87ef92f1949-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "1abfb530-b7ac-4724-8e43-d87ef92f1949" (UID: "1abfb530-b7ac-4724-8e43-d87ef92f1949"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.529321 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "1abfb530-b7ac-4724-8e43-d87ef92f1949" (UID: "1abfb530-b7ac-4724-8e43-d87ef92f1949"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.529652 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-slash" (OuterVolumeSpecName: "host-slash") pod "1abfb530-b7ac-4724-8e43-d87ef92f1949" (UID: "1abfb530-b7ac-4724-8e43-d87ef92f1949"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.529988 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1abfb530-b7ac-4724-8e43-d87ef92f1949-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "1abfb530-b7ac-4724-8e43-d87ef92f1949" (UID: "1abfb530-b7ac-4724-8e43-d87ef92f1949"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.530256 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1abfb530-b7ac-4724-8e43-d87ef92f1949-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "1abfb530-b7ac-4724-8e43-d87ef92f1949" (UID: "1abfb530-b7ac-4724-8e43-d87ef92f1949"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.539272 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1abfb530-b7ac-4724-8e43-d87ef92f1949-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "1abfb530-b7ac-4724-8e43-d87ef92f1949" (UID: "1abfb530-b7ac-4724-8e43-d87ef92f1949"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.543706 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "1abfb530-b7ac-4724-8e43-d87ef92f1949" (UID: "1abfb530-b7ac-4724-8e43-d87ef92f1949"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.575245 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l6bqb_1abfb530-b7ac-4724-8e43-d87ef92f1949/ovnkube-controller/3.log" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.577321 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l6bqb_1abfb530-b7ac-4724-8e43-d87ef92f1949/ovn-acl-logging/0.log" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.577816 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l6bqb_1abfb530-b7ac-4724-8e43-d87ef92f1949/ovn-controller/0.log" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578137 4681 generic.go:334] "Generic (PLEG): container finished" podID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerID="d3ee7b1cd00bbc909ca76a6e898c08dea60471e186c3b7e31f59c07fb0b7bebf" exitCode=0 Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578388 4681 generic.go:334] "Generic (PLEG): container finished" podID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerID="8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7" exitCode=0 Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578384 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" event={"ID":"1abfb530-b7ac-4724-8e43-d87ef92f1949","Type":"ContainerDied","Data":"d3ee7b1cd00bbc909ca76a6e898c08dea60471e186c3b7e31f59c07fb0b7bebf"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578399 4681 generic.go:334] "Generic (PLEG): container finished" podID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerID="9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d" exitCode=0 Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578407 4681 generic.go:334] "Generic (PLEG): container finished" podID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerID="14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4" exitCode=0 Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578414 4681 generic.go:334] "Generic (PLEG): container finished" podID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerID="edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884" exitCode=0 Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578441 4681 generic.go:334] "Generic (PLEG): container finished" podID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerID="2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101" exitCode=0 Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578444 4681 scope.go:117] "RemoveContainer" containerID="d3ee7b1cd00bbc909ca76a6e898c08dea60471e186c3b7e31f59c07fb0b7bebf" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578447 4681 generic.go:334] "Generic (PLEG): container finished" podID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerID="3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13" exitCode=143 Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578484 4681 generic.go:334] 
"Generic (PLEG): container finished" podID="1abfb530-b7ac-4724-8e43-d87ef92f1949" containerID="5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9" exitCode=143 Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578429 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" event={"ID":"1abfb530-b7ac-4724-8e43-d87ef92f1949","Type":"ContainerDied","Data":"8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578558 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" event={"ID":"1abfb530-b7ac-4724-8e43-d87ef92f1949","Type":"ContainerDied","Data":"9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578571 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" event={"ID":"1abfb530-b7ac-4724-8e43-d87ef92f1949","Type":"ContainerDied","Data":"14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578582 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" event={"ID":"1abfb530-b7ac-4724-8e43-d87ef92f1949","Type":"ContainerDied","Data":"edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578591 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" event={"ID":"1abfb530-b7ac-4724-8e43-d87ef92f1949","Type":"ContainerDied","Data":"2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578601 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578629 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578635 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578639 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578644 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578649 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578653 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 
06:52:47.578658 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578667 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578656 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578675 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" event={"ID":"1abfb530-b7ac-4724-8e43-d87ef92f1949","Type":"ContainerDied","Data":"3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578684 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d3ee7b1cd00bbc909ca76a6e898c08dea60471e186c3b7e31f59c07fb0b7bebf"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578705 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578710 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578715 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578720 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578725 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578730 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578734 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578739 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578932 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578944 4681 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" event={"ID":"1abfb530-b7ac-4724-8e43-d87ef92f1949","Type":"ContainerDied","Data":"5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578955 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d3ee7b1cd00bbc909ca76a6e898c08dea60471e186c3b7e31f59c07fb0b7bebf"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578962 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578967 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578974 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578979 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.578984 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.579006 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.579011 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.579016 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.579021 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.579029 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6bqb" event={"ID":"1abfb530-b7ac-4724-8e43-d87ef92f1949","Type":"ContainerDied","Data":"4f8e447722bd3f219f03be4cbc14a7478fe37b3257379cd2dadcc737c8283ec6"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.579038 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d3ee7b1cd00bbc909ca76a6e898c08dea60471e186c3b7e31f59c07fb0b7bebf"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.579045 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72"} Nov 23 06:52:47 crc 
kubenswrapper[4681]: I1123 06:52:47.579050 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.579056 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.579061 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.579067 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.579088 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.579096 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.579101 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.579106 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.580203 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2lhx5_4094b291-8b0b-43c0-96e9-f08a9ef53c8b/kube-multus/2.log" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.580803 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2lhx5_4094b291-8b0b-43c0-96e9-f08a9ef53c8b/kube-multus/1.log" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.580832 4681 generic.go:334] "Generic (PLEG): container finished" podID="4094b291-8b0b-43c0-96e9-f08a9ef53c8b" containerID="dcf9640496fa8d1e0179de62ae7b6c308f4bb9fc5abaeebd84239dba5e101a53" exitCode=2 Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.580850 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2lhx5" event={"ID":"4094b291-8b0b-43c0-96e9-f08a9ef53c8b","Type":"ContainerDied","Data":"dcf9640496fa8d1e0179de62ae7b6c308f4bb9fc5abaeebd84239dba5e101a53"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.580864 4681 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"85fe493c1777c5f063e67eac13f4c3417da679d1376c258907c8008b544bdbb4"} Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.581520 4681 scope.go:117] "RemoveContainer" containerID="dcf9640496fa8d1e0179de62ae7b6c308f4bb9fc5abaeebd84239dba5e101a53" Nov 23 06:52:47 crc kubenswrapper[4681]: E1123 06:52:47.581695 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-2lhx5_openshift-multus(4094b291-8b0b-43c0-96e9-f08a9ef53c8b)\"" pod="openshift-multus/multus-2lhx5" podUID="4094b291-8b0b-43c0-96e9-f08a9ef53c8b" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.593399 4681 scope.go:117] "RemoveContainer" containerID="1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.610517 4681 scope.go:117] "RemoveContainer" containerID="8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.622126 4681 scope.go:117] "RemoveContainer" containerID="9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.629940 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-etc-openvswitch\") pod \"1abfb530-b7ac-4724-8e43-d87ef92f1949\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.629985 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-var-lib-cni-networks-ovn-kubernetes\") pod \"1abfb530-b7ac-4724-8e43-d87ef92f1949\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.630007 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-var-lib-openvswitch\") pod \"1abfb530-b7ac-4724-8e43-d87ef92f1949\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.630020 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "1abfb530-b7ac-4724-8e43-d87ef92f1949" (UID: "1abfb530-b7ac-4724-8e43-d87ef92f1949"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.630042 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-kubelet\") pod \"1abfb530-b7ac-4724-8e43-d87ef92f1949\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.630048 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "1abfb530-b7ac-4724-8e43-d87ef92f1949" (UID: "1abfb530-b7ac-4724-8e43-d87ef92f1949"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.630057 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "1abfb530-b7ac-4724-8e43-d87ef92f1949" (UID: "1abfb530-b7ac-4724-8e43-d87ef92f1949"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.630069 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-node-log\") pod \"1abfb530-b7ac-4724-8e43-d87ef92f1949\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.630081 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "1abfb530-b7ac-4724-8e43-d87ef92f1949" (UID: "1abfb530-b7ac-4724-8e43-d87ef92f1949"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.630163 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-node-log" (OuterVolumeSpecName: "node-log") pod "1abfb530-b7ac-4724-8e43-d87ef92f1949" (UID: "1abfb530-b7ac-4724-8e43-d87ef92f1949"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.630250 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-systemd-units\") pod \"1abfb530-b7ac-4724-8e43-d87ef92f1949\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.630440 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "1abfb530-b7ac-4724-8e43-d87ef92f1949" (UID: "1abfb530-b7ac-4724-8e43-d87ef92f1949"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.630505 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-cni-bin\") pod \"1abfb530-b7ac-4724-8e43-d87ef92f1949\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.630661 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "1abfb530-b7ac-4724-8e43-d87ef92f1949" (UID: "1abfb530-b7ac-4724-8e43-d87ef92f1949"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.630761 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "1abfb530-b7ac-4724-8e43-d87ef92f1949" (UID: "1abfb530-b7ac-4724-8e43-d87ef92f1949"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.630773 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-cni-netd\") pod \"1abfb530-b7ac-4724-8e43-d87ef92f1949\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.630808 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-run-openvswitch\") pod \"1abfb530-b7ac-4724-8e43-d87ef92f1949\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.630856 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-run-ovn-kubernetes\") pod \"1abfb530-b7ac-4724-8e43-d87ef92f1949\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.630883 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "1abfb530-b7ac-4724-8e43-d87ef92f1949" (UID: "1abfb530-b7ac-4724-8e43-d87ef92f1949"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.630889 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcbfd\" (UniqueName: \"kubernetes.io/projected/1abfb530-b7ac-4724-8e43-d87ef92f1949-kube-api-access-vcbfd\") pod \"1abfb530-b7ac-4724-8e43-d87ef92f1949\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.630904 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "1abfb530-b7ac-4724-8e43-d87ef92f1949" (UID: "1abfb530-b7ac-4724-8e43-d87ef92f1949"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.631037 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-run-netns\") pod \"1abfb530-b7ac-4724-8e43-d87ef92f1949\" (UID: \"1abfb530-b7ac-4724-8e43-d87ef92f1949\") " Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.631095 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "1abfb530-b7ac-4724-8e43-d87ef92f1949" (UID: "1abfb530-b7ac-4724-8e43-d87ef92f1949"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.631296 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d584daff-bd10-470a-9f2c-00d09ecded42-ovnkube-config\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.632232 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-host-kubelet\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.632168 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d584daff-bd10-470a-9f2c-00d09ecded42-ovnkube-config\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.632411 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.632484 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.632517 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d584daff-bd10-470a-9f2c-00d09ecded42-ovn-node-metrics-cert\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.632548 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh8xl\" (UniqueName: \"kubernetes.io/projected/d584daff-bd10-470a-9f2c-00d09ecded42-kube-api-access-dh8xl\") pod \"ovnkube-node-btj59\" (UID: 
\"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.632412 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-host-kubelet\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.632892 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-host-run-ovn-kubernetes\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.632956 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-etc-openvswitch\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.633046 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d584daff-bd10-470a-9f2c-00d09ecded42-env-overrides\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.633051 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-host-run-ovn-kubernetes\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.633100 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-host-run-netns\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.633082 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-host-run-netns\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.633169 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-run-openvswitch\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.633190 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-host-cni-netd\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.633272 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-node-log\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.633310 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-etc-openvswitch\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.633328 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-systemd-units\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.633355 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-host-slash\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.633371 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-var-lib-openvswitch\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.633412 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-host-cni-netd\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.633423 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-log-socket\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.633439 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-run-openvswitch\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.633474 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-host-slash\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.633490 4681 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d584daff-bd10-470a-9f2c-00d09ecded42-env-overrides\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.633505 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d584daff-bd10-470a-9f2c-00d09ecded42-ovnkube-script-lib\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.633511 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-systemd-units\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.633495 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-node-log\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.633530 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-host-cni-bin\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.633556 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-var-lib-openvswitch\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.633556 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-log-socket\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.633575 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-host-cni-bin\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.633669 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-run-systemd\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.633692 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-run-systemd\") pod 
\"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.633741 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-run-ovn\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.633940 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1abfb530-b7ac-4724-8e43-d87ef92f1949-kube-api-access-vcbfd" (OuterVolumeSpecName: "kube-api-access-vcbfd") pod "1abfb530-b7ac-4724-8e43-d87ef92f1949" (UID: "1abfb530-b7ac-4724-8e43-d87ef92f1949"). InnerVolumeSpecName "kube-api-access-vcbfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.633964 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d584daff-bd10-470a-9f2c-00d09ecded42-ovnkube-script-lib\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.634390 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d584daff-bd10-470a-9f2c-00d09ecded42-run-ovn\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.634438 4681 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-slash\") on node \"crc\" DevicePath \"\"" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.634454 4681 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1abfb530-b7ac-4724-8e43-d87ef92f1949-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.634523 4681 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.634532 4681 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.634545 4681 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.634554 4681 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.634563 4681 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.634572 4681 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-node-log\") on node \"crc\" DevicePath \"\"" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.634605 4681 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.634626 4681 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.634639 4681 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.634648 4681 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.634658 4681 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.634668 4681 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-log-socket\") on node \"crc\" DevicePath \"\"" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.634677 4681 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.634686 4681 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1abfb530-b7ac-4724-8e43-d87ef92f1949-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.634695 4681 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1abfb530-b7ac-4724-8e43-d87ef92f1949-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.634704 4681 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1abfb530-b7ac-4724-8e43-d87ef92f1949-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.634712 4681 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1abfb530-b7ac-4724-8e43-d87ef92f1949-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.635413 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/d584daff-bd10-470a-9f2c-00d09ecded42-ovn-node-metrics-cert\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.639197 4681 scope.go:117] "RemoveContainer" containerID="14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.645819 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh8xl\" (UniqueName: \"kubernetes.io/projected/d584daff-bd10-470a-9f2c-00d09ecded42-kube-api-access-dh8xl\") pod \"ovnkube-node-btj59\" (UID: \"d584daff-bd10-470a-9f2c-00d09ecded42\") " pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.650152 4681 scope.go:117] "RemoveContainer" containerID="edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.663868 4681 scope.go:117] "RemoveContainer" containerID="2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.676085 4681 scope.go:117] "RemoveContainer" containerID="3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.688669 4681 scope.go:117] "RemoveContainer" containerID="5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.699348 4681 scope.go:117] "RemoveContainer" containerID="8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.716119 4681 scope.go:117] "RemoveContainer" containerID="d3ee7b1cd00bbc909ca76a6e898c08dea60471e186c3b7e31f59c07fb0b7bebf" Nov 23 06:52:47 crc kubenswrapper[4681]: E1123 06:52:47.716437 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3ee7b1cd00bbc909ca76a6e898c08dea60471e186c3b7e31f59c07fb0b7bebf\": container with ID starting with d3ee7b1cd00bbc909ca76a6e898c08dea60471e186c3b7e31f59c07fb0b7bebf not found: ID does not exist" containerID="d3ee7b1cd00bbc909ca76a6e898c08dea60471e186c3b7e31f59c07fb0b7bebf" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.716504 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3ee7b1cd00bbc909ca76a6e898c08dea60471e186c3b7e31f59c07fb0b7bebf"} err="failed to get container status \"d3ee7b1cd00bbc909ca76a6e898c08dea60471e186c3b7e31f59c07fb0b7bebf\": rpc error: code = NotFound desc = could not find container \"d3ee7b1cd00bbc909ca76a6e898c08dea60471e186c3b7e31f59c07fb0b7bebf\": container with ID starting with d3ee7b1cd00bbc909ca76a6e898c08dea60471e186c3b7e31f59c07fb0b7bebf not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.716531 4681 scope.go:117] "RemoveContainer" containerID="1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72" Nov 23 06:52:47 crc kubenswrapper[4681]: E1123 06:52:47.716824 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72\": container with ID starting with 1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72 not found: ID does not exist" 
containerID="1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.716854 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72"} err="failed to get container status \"1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72\": rpc error: code = NotFound desc = could not find container \"1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72\": container with ID starting with 1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72 not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.716875 4681 scope.go:117] "RemoveContainer" containerID="8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7" Nov 23 06:52:47 crc kubenswrapper[4681]: E1123 06:52:47.717120 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7\": container with ID starting with 8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7 not found: ID does not exist" containerID="8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.717151 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7"} err="failed to get container status \"8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7\": rpc error: code = NotFound desc = could not find container \"8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7\": container with ID starting with 8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7 not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.717190 4681 scope.go:117] "RemoveContainer" containerID="9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d" Nov 23 06:52:47 crc kubenswrapper[4681]: E1123 06:52:47.717525 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d\": container with ID starting with 9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d not found: ID does not exist" containerID="9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.717547 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d"} err="failed to get container status \"9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d\": rpc error: code = NotFound desc = could not find container \"9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d\": container with ID starting with 9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.717562 4681 scope.go:117] "RemoveContainer" containerID="14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4" Nov 23 06:52:47 crc kubenswrapper[4681]: E1123 06:52:47.717771 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4\": container with ID starting with 14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4 not found: ID does not exist" containerID="14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.717799 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4"} err="failed to get container status \"14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4\": rpc error: code = NotFound desc = could not find container \"14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4\": container with ID starting with 14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4 not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.717818 4681 scope.go:117] "RemoveContainer" containerID="edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884" Nov 23 06:52:47 crc kubenswrapper[4681]: E1123 06:52:47.718047 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884\": container with ID starting with edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884 not found: ID does not exist" containerID="edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.718069 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884"} err="failed to get container status \"edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884\": rpc error: code = NotFound desc = could not find container \"edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884\": container with ID starting with edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884 not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.718083 4681 scope.go:117] "RemoveContainer" containerID="2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101" Nov 23 06:52:47 crc kubenswrapper[4681]: E1123 06:52:47.718311 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101\": container with ID starting with 2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101 not found: ID does not exist" containerID="2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.718337 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101"} err="failed to get container status \"2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101\": rpc error: code = NotFound desc = could not find container \"2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101\": container with ID starting with 2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101 not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.718350 4681 scope.go:117] "RemoveContainer" containerID="3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13" Nov 23 06:52:47 crc 
kubenswrapper[4681]: E1123 06:52:47.718586 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13\": container with ID starting with 3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13 not found: ID does not exist" containerID="3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.718616 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13"} err="failed to get container status \"3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13\": rpc error: code = NotFound desc = could not find container \"3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13\": container with ID starting with 3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13 not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.718630 4681 scope.go:117] "RemoveContainer" containerID="5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9" Nov 23 06:52:47 crc kubenswrapper[4681]: E1123 06:52:47.718858 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9\": container with ID starting with 5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9 not found: ID does not exist" containerID="5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.718879 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9"} err="failed to get container status \"5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9\": rpc error: code = NotFound desc = could not find container \"5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9\": container with ID starting with 5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9 not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.718892 4681 scope.go:117] "RemoveContainer" containerID="8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7" Nov 23 06:52:47 crc kubenswrapper[4681]: E1123 06:52:47.719112 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\": container with ID starting with 8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7 not found: ID does not exist" containerID="8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.719140 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7"} err="failed to get container status \"8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\": rpc error: code = NotFound desc = could not find container \"8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\": container with ID starting with 8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7 not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: 
I1123 06:52:47.719154 4681 scope.go:117] "RemoveContainer" containerID="d3ee7b1cd00bbc909ca76a6e898c08dea60471e186c3b7e31f59c07fb0b7bebf" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.719342 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3ee7b1cd00bbc909ca76a6e898c08dea60471e186c3b7e31f59c07fb0b7bebf"} err="failed to get container status \"d3ee7b1cd00bbc909ca76a6e898c08dea60471e186c3b7e31f59c07fb0b7bebf\": rpc error: code = NotFound desc = could not find container \"d3ee7b1cd00bbc909ca76a6e898c08dea60471e186c3b7e31f59c07fb0b7bebf\": container with ID starting with d3ee7b1cd00bbc909ca76a6e898c08dea60471e186c3b7e31f59c07fb0b7bebf not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.719360 4681 scope.go:117] "RemoveContainer" containerID="1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.719650 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72"} err="failed to get container status \"1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72\": rpc error: code = NotFound desc = could not find container \"1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72\": container with ID starting with 1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72 not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.719671 4681 scope.go:117] "RemoveContainer" containerID="8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.719877 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7"} err="failed to get container status \"8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7\": rpc error: code = NotFound desc = could not find container \"8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7\": container with ID starting with 8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7 not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.719902 4681 scope.go:117] "RemoveContainer" containerID="9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.720087 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d"} err="failed to get container status \"9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d\": rpc error: code = NotFound desc = could not find container \"9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d\": container with ID starting with 9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.720108 4681 scope.go:117] "RemoveContainer" containerID="14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.720303 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4"} err="failed to get container status 
\"14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4\": rpc error: code = NotFound desc = could not find container \"14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4\": container with ID starting with 14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4 not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.720322 4681 scope.go:117] "RemoveContainer" containerID="edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.721838 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884"} err="failed to get container status \"edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884\": rpc error: code = NotFound desc = could not find container \"edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884\": container with ID starting with edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884 not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.721857 4681 scope.go:117] "RemoveContainer" containerID="2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.722946 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101"} err="failed to get container status \"2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101\": rpc error: code = NotFound desc = could not find container \"2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101\": container with ID starting with 2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101 not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.722968 4681 scope.go:117] "RemoveContainer" containerID="3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.723182 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13"} err="failed to get container status \"3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13\": rpc error: code = NotFound desc = could not find container \"3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13\": container with ID starting with 3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13 not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.723202 4681 scope.go:117] "RemoveContainer" containerID="5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.723441 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9"} err="failed to get container status \"5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9\": rpc error: code = NotFound desc = could not find container \"5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9\": container with ID starting with 5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9 not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.723474 4681 scope.go:117] "RemoveContainer" 
containerID="8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.723643 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7"} err="failed to get container status \"8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\": rpc error: code = NotFound desc = could not find container \"8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\": container with ID starting with 8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7 not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.723669 4681 scope.go:117] "RemoveContainer" containerID="d3ee7b1cd00bbc909ca76a6e898c08dea60471e186c3b7e31f59c07fb0b7bebf" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.723837 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3ee7b1cd00bbc909ca76a6e898c08dea60471e186c3b7e31f59c07fb0b7bebf"} err="failed to get container status \"d3ee7b1cd00bbc909ca76a6e898c08dea60471e186c3b7e31f59c07fb0b7bebf\": rpc error: code = NotFound desc = could not find container \"d3ee7b1cd00bbc909ca76a6e898c08dea60471e186c3b7e31f59c07fb0b7bebf\": container with ID starting with d3ee7b1cd00bbc909ca76a6e898c08dea60471e186c3b7e31f59c07fb0b7bebf not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.723855 4681 scope.go:117] "RemoveContainer" containerID="1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.724140 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72"} err="failed to get container status \"1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72\": rpc error: code = NotFound desc = could not find container \"1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72\": container with ID starting with 1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72 not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.724164 4681 scope.go:117] "RemoveContainer" containerID="8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.724424 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7"} err="failed to get container status \"8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7\": rpc error: code = NotFound desc = could not find container \"8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7\": container with ID starting with 8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7 not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.724447 4681 scope.go:117] "RemoveContainer" containerID="9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.724734 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d"} err="failed to get container status \"9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d\": rpc error: code = NotFound desc = could not find 
container \"9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d\": container with ID starting with 9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.724757 4681 scope.go:117] "RemoveContainer" containerID="14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.724964 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4"} err="failed to get container status \"14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4\": rpc error: code = NotFound desc = could not find container \"14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4\": container with ID starting with 14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4 not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.724984 4681 scope.go:117] "RemoveContainer" containerID="edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.725174 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884"} err="failed to get container status \"edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884\": rpc error: code = NotFound desc = could not find container \"edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884\": container with ID starting with edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884 not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.725195 4681 scope.go:117] "RemoveContainer" containerID="2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.725419 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101"} err="failed to get container status \"2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101\": rpc error: code = NotFound desc = could not find container \"2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101\": container with ID starting with 2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101 not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.725439 4681 scope.go:117] "RemoveContainer" containerID="3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.725654 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13"} err="failed to get container status \"3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13\": rpc error: code = NotFound desc = could not find container \"3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13\": container with ID starting with 3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13 not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.725673 4681 scope.go:117] "RemoveContainer" containerID="5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.726166 4681 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9"} err="failed to get container status \"5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9\": rpc error: code = NotFound desc = could not find container \"5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9\": container with ID starting with 5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9 not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.726199 4681 scope.go:117] "RemoveContainer" containerID="8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.726439 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7"} err="failed to get container status \"8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\": rpc error: code = NotFound desc = could not find container \"8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\": container with ID starting with 8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7 not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.726475 4681 scope.go:117] "RemoveContainer" containerID="d3ee7b1cd00bbc909ca76a6e898c08dea60471e186c3b7e31f59c07fb0b7bebf" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.726667 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3ee7b1cd00bbc909ca76a6e898c08dea60471e186c3b7e31f59c07fb0b7bebf"} err="failed to get container status \"d3ee7b1cd00bbc909ca76a6e898c08dea60471e186c3b7e31f59c07fb0b7bebf\": rpc error: code = NotFound desc = could not find container \"d3ee7b1cd00bbc909ca76a6e898c08dea60471e186c3b7e31f59c07fb0b7bebf\": container with ID starting with d3ee7b1cd00bbc909ca76a6e898c08dea60471e186c3b7e31f59c07fb0b7bebf not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.726687 4681 scope.go:117] "RemoveContainer" containerID="1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.726939 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72"} err="failed to get container status \"1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72\": rpc error: code = NotFound desc = could not find container \"1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72\": container with ID starting with 1e662c47e21ad4fc3f1091e8d53999578f1921dadfcbc980c09239a967fb1f72 not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.726957 4681 scope.go:117] "RemoveContainer" containerID="8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.727238 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7"} err="failed to get container status \"8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7\": rpc error: code = NotFound desc = could not find container \"8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7\": container with ID starting with 
8e144f6fcc3caf2665d063df23657f7b48ba28fe75e07674cc2ba13582d06da7 not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.727268 4681 scope.go:117] "RemoveContainer" containerID="9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.727552 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d"} err="failed to get container status \"9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d\": rpc error: code = NotFound desc = could not find container \"9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d\": container with ID starting with 9fb1098327a690ab40d4180e598919c94be498bbdafd3efa48d70de16aa3b57d not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.727570 4681 scope.go:117] "RemoveContainer" containerID="14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.727784 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4"} err="failed to get container status \"14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4\": rpc error: code = NotFound desc = could not find container \"14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4\": container with ID starting with 14c8d68f6ffe4e972b37d979e6fd1a6002de557e158f0d73e8a29963700b01a4 not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.727802 4681 scope.go:117] "RemoveContainer" containerID="edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.728659 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884"} err="failed to get container status \"edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884\": rpc error: code = NotFound desc = could not find container \"edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884\": container with ID starting with edd70e73d3050380ab4c0646964a0644c5fc40a55740743acf48a59cb7b4a884 not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.728726 4681 scope.go:117] "RemoveContainer" containerID="2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.729113 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101"} err="failed to get container status \"2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101\": rpc error: code = NotFound desc = could not find container \"2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101\": container with ID starting with 2cb058679bcfd68dcbd0f108e2ae9b8fe087b385c01bb73bcd2894b622354101 not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.729138 4681 scope.go:117] "RemoveContainer" containerID="3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.729623 4681 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13"} err="failed to get container status \"3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13\": rpc error: code = NotFound desc = could not find container \"3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13\": container with ID starting with 3c5940dd8efb65a27f2b74594a05fb8ac0ba51e787205c44ce4439847703bb13 not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.729652 4681 scope.go:117] "RemoveContainer" containerID="5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.729891 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9"} err="failed to get container status \"5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9\": rpc error: code = NotFound desc = could not find container \"5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9\": container with ID starting with 5822f5696ec7af7446f47739c676a446bc62f8d7e11b8cf8d9611379379300e9 not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.729915 4681 scope.go:117] "RemoveContainer" containerID="8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.730123 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7"} err="failed to get container status \"8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\": rpc error: code = NotFound desc = could not find container \"8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7\": container with ID starting with 8b3da6a9576f0d0efc20c541445218c5c9e4c6ec53004b6a59aeed760d0b04c7 not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.730143 4681 scope.go:117] "RemoveContainer" containerID="d3ee7b1cd00bbc909ca76a6e898c08dea60471e186c3b7e31f59c07fb0b7bebf" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.730393 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3ee7b1cd00bbc909ca76a6e898c08dea60471e186c3b7e31f59c07fb0b7bebf"} err="failed to get container status \"d3ee7b1cd00bbc909ca76a6e898c08dea60471e186c3b7e31f59c07fb0b7bebf\": rpc error: code = NotFound desc = could not find container \"d3ee7b1cd00bbc909ca76a6e898c08dea60471e186c3b7e31f59c07fb0b7bebf\": container with ID starting with d3ee7b1cd00bbc909ca76a6e898c08dea60471e186c3b7e31f59c07fb0b7bebf not found: ID does not exist" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.736243 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcbfd\" (UniqueName: \"kubernetes.io/projected/1abfb530-b7ac-4724-8e43-d87ef92f1949-kube-api-access-vcbfd\") on node \"crc\" DevicePath \"\"" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.829778 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.908755 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-l6bqb"] Nov 23 06:52:47 crc kubenswrapper[4681]: I1123 06:52:47.911089 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-l6bqb"] Nov 23 06:52:48 crc kubenswrapper[4681]: I1123 06:52:48.587038 4681 generic.go:334] "Generic (PLEG): container finished" podID="d584daff-bd10-470a-9f2c-00d09ecded42" containerID="194446864a98d5b79b21d2bd64c612da4392f052740224f7cfb4442825ffb07c" exitCode=0 Nov 23 06:52:48 crc kubenswrapper[4681]: I1123 06:52:48.587089 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-btj59" event={"ID":"d584daff-bd10-470a-9f2c-00d09ecded42","Type":"ContainerDied","Data":"194446864a98d5b79b21d2bd64c612da4392f052740224f7cfb4442825ffb07c"} Nov 23 06:52:48 crc kubenswrapper[4681]: I1123 06:52:48.587121 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-btj59" event={"ID":"d584daff-bd10-470a-9f2c-00d09ecded42","Type":"ContainerStarted","Data":"283d34193d2b3d65c08ef28fec8c8ec045ecb6f4ad44aeb6b86f6e0423f34d12"} Nov 23 06:52:49 crc kubenswrapper[4681]: I1123 06:52:49.260305 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1abfb530-b7ac-4724-8e43-d87ef92f1949" path="/var/lib/kubelet/pods/1abfb530-b7ac-4724-8e43-d87ef92f1949/volumes" Nov 23 06:52:49 crc kubenswrapper[4681]: I1123 06:52:49.595241 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-btj59" event={"ID":"d584daff-bd10-470a-9f2c-00d09ecded42","Type":"ContainerStarted","Data":"e188169230a70d74f38411f674b0b8a9a4ea77891f1e8d998899fcc6cf64b112"} Nov 23 06:52:49 crc kubenswrapper[4681]: I1123 06:52:49.595293 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-btj59" event={"ID":"d584daff-bd10-470a-9f2c-00d09ecded42","Type":"ContainerStarted","Data":"7782ba79f20663726dff65a6faa5ed6b91eb51ee10542b30c3de000b27240675"} Nov 23 06:52:49 crc kubenswrapper[4681]: I1123 06:52:49.595307 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-btj59" event={"ID":"d584daff-bd10-470a-9f2c-00d09ecded42","Type":"ContainerStarted","Data":"07dd07c5ea400d1b432a3b53ad48b086ff9c7e995b2d543e8be6c95816882a2d"} Nov 23 06:52:49 crc kubenswrapper[4681]: I1123 06:52:49.595318 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-btj59" event={"ID":"d584daff-bd10-470a-9f2c-00d09ecded42","Type":"ContainerStarted","Data":"94c47091973c8bbe1d574bc5ddc0a81ff3c88bb2eb28effc5c357f9d3958e1e2"} Nov 23 06:52:49 crc kubenswrapper[4681]: I1123 06:52:49.595328 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-btj59" event={"ID":"d584daff-bd10-470a-9f2c-00d09ecded42","Type":"ContainerStarted","Data":"14a4583a5fa50ee6f6376b278590cce526a856bafdabf37127669f0fe74797a4"} Nov 23 06:52:49 crc kubenswrapper[4681]: I1123 06:52:49.595338 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-btj59" event={"ID":"d584daff-bd10-470a-9f2c-00d09ecded42","Type":"ContainerStarted","Data":"ef731f4bc99c470c3236ac5b0275fa500eb9beb4321c69e842257daf17829654"} Nov 23 06:52:51 crc kubenswrapper[4681]: I1123 06:52:51.608672 4681 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-btj59" event={"ID":"d584daff-bd10-470a-9f2c-00d09ecded42","Type":"ContainerStarted","Data":"a6465a6727036384cfe3b2ede6278ff0880a7b5e8926c2d6d5c4c4fdeeef8e94"} Nov 23 06:52:53 crc kubenswrapper[4681]: I1123 06:52:53.622090 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-btj59" event={"ID":"d584daff-bd10-470a-9f2c-00d09ecded42","Type":"ContainerStarted","Data":"ff1b4fed8170fcefd33749041542a191adb19236677fbd4d4e0fdd5fa974d9b7"} Nov 23 06:52:53 crc kubenswrapper[4681]: I1123 06:52:53.622404 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:53 crc kubenswrapper[4681]: I1123 06:52:53.622416 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:53 crc kubenswrapper[4681]: I1123 06:52:53.622425 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:53 crc kubenswrapper[4681]: I1123 06:52:53.650748 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:52:53 crc kubenswrapper[4681]: I1123 06:52:53.653231 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-btj59" podStartSLOduration=6.653213281 podStartE2EDuration="6.653213281s" podCreationTimestamp="2025-11-23 06:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:53.652261567 +0000 UTC m=+510.721770804" watchObservedRunningTime="2025-11-23 06:52:53.653213281 +0000 UTC m=+510.722722518" Nov 23 06:52:53 crc kubenswrapper[4681]: I1123 06:52:53.674760 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:53:01 crc kubenswrapper[4681]: I1123 06:53:01.252190 4681 scope.go:117] "RemoveContainer" containerID="dcf9640496fa8d1e0179de62ae7b6c308f4bb9fc5abaeebd84239dba5e101a53" Nov 23 06:53:01 crc kubenswrapper[4681]: E1123 06:53:01.253042 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-2lhx5_openshift-multus(4094b291-8b0b-43c0-96e9-f08a9ef53c8b)\"" pod="openshift-multus/multus-2lhx5" podUID="4094b291-8b0b-43c0-96e9-f08a9ef53c8b" Nov 23 06:53:12 crc kubenswrapper[4681]: I1123 06:53:12.296218 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 06:53:12 crc kubenswrapper[4681]: I1123 06:53:12.296608 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 06:53:13 crc kubenswrapper[4681]: I1123 06:53:13.253257 4681 scope.go:117] "RemoveContainer" 
containerID="dcf9640496fa8d1e0179de62ae7b6c308f4bb9fc5abaeebd84239dba5e101a53" Nov 23 06:53:13 crc kubenswrapper[4681]: I1123 06:53:13.733985 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2lhx5_4094b291-8b0b-43c0-96e9-f08a9ef53c8b/kube-multus/2.log" Nov 23 06:53:13 crc kubenswrapper[4681]: I1123 06:53:13.735068 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2lhx5_4094b291-8b0b-43c0-96e9-f08a9ef53c8b/kube-multus/1.log" Nov 23 06:53:13 crc kubenswrapper[4681]: I1123 06:53:13.735168 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2lhx5" event={"ID":"4094b291-8b0b-43c0-96e9-f08a9ef53c8b","Type":"ContainerStarted","Data":"da02c27f3b1d013626a6a7e1a90ea083639232005a309cf4a960084c9543df19"} Nov 23 06:53:16 crc kubenswrapper[4681]: I1123 06:53:16.690440 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edntd2"] Nov 23 06:53:16 crc kubenswrapper[4681]: I1123 06:53:16.691839 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edntd2" Nov 23 06:53:16 crc kubenswrapper[4681]: I1123 06:53:16.693198 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 23 06:53:16 crc kubenswrapper[4681]: I1123 06:53:16.705866 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edntd2"] Nov 23 06:53:16 crc kubenswrapper[4681]: I1123 06:53:16.766783 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aef5cbcb-ad98-499a-99e6-d8d8ae08881c-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edntd2\" (UID: \"aef5cbcb-ad98-499a-99e6-d8d8ae08881c\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edntd2" Nov 23 06:53:16 crc kubenswrapper[4681]: I1123 06:53:16.766858 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lwkd\" (UniqueName: \"kubernetes.io/projected/aef5cbcb-ad98-499a-99e6-d8d8ae08881c-kube-api-access-9lwkd\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edntd2\" (UID: \"aef5cbcb-ad98-499a-99e6-d8d8ae08881c\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edntd2" Nov 23 06:53:16 crc kubenswrapper[4681]: I1123 06:53:16.766898 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aef5cbcb-ad98-499a-99e6-d8d8ae08881c-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edntd2\" (UID: \"aef5cbcb-ad98-499a-99e6-d8d8ae08881c\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edntd2" Nov 23 06:53:16 crc kubenswrapper[4681]: I1123 06:53:16.867703 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aef5cbcb-ad98-499a-99e6-d8d8ae08881c-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edntd2\" (UID: \"aef5cbcb-ad98-499a-99e6-d8d8ae08881c\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edntd2" Nov 23 
06:53:16 crc kubenswrapper[4681]: I1123 06:53:16.867765 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lwkd\" (UniqueName: \"kubernetes.io/projected/aef5cbcb-ad98-499a-99e6-d8d8ae08881c-kube-api-access-9lwkd\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edntd2\" (UID: \"aef5cbcb-ad98-499a-99e6-d8d8ae08881c\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edntd2" Nov 23 06:53:16 crc kubenswrapper[4681]: I1123 06:53:16.867805 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aef5cbcb-ad98-499a-99e6-d8d8ae08881c-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edntd2\" (UID: \"aef5cbcb-ad98-499a-99e6-d8d8ae08881c\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edntd2" Nov 23 06:53:16 crc kubenswrapper[4681]: I1123 06:53:16.868338 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aef5cbcb-ad98-499a-99e6-d8d8ae08881c-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edntd2\" (UID: \"aef5cbcb-ad98-499a-99e6-d8d8ae08881c\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edntd2" Nov 23 06:53:16 crc kubenswrapper[4681]: I1123 06:53:16.868352 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aef5cbcb-ad98-499a-99e6-d8d8ae08881c-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edntd2\" (UID: \"aef5cbcb-ad98-499a-99e6-d8d8ae08881c\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edntd2" Nov 23 06:53:16 crc kubenswrapper[4681]: I1123 06:53:16.887742 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lwkd\" (UniqueName: \"kubernetes.io/projected/aef5cbcb-ad98-499a-99e6-d8d8ae08881c-kube-api-access-9lwkd\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edntd2\" (UID: \"aef5cbcb-ad98-499a-99e6-d8d8ae08881c\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edntd2" Nov 23 06:53:17 crc kubenswrapper[4681]: I1123 06:53:17.004892 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edntd2" Nov 23 06:53:17 crc kubenswrapper[4681]: I1123 06:53:17.381870 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edntd2"] Nov 23 06:53:17 crc kubenswrapper[4681]: W1123 06:53:17.390354 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaef5cbcb_ad98_499a_99e6_d8d8ae08881c.slice/crio-821b5ec1510e565434ef060a904544245ccd310cc1c76eb489c032e62e3bf834 WatchSource:0}: Error finding container 821b5ec1510e565434ef060a904544245ccd310cc1c76eb489c032e62e3bf834: Status 404 returned error can't find the container with id 821b5ec1510e565434ef060a904544245ccd310cc1c76eb489c032e62e3bf834 Nov 23 06:53:17 crc kubenswrapper[4681]: I1123 06:53:17.753419 4681 generic.go:334] "Generic (PLEG): container finished" podID="aef5cbcb-ad98-499a-99e6-d8d8ae08881c" containerID="dbc5296df4692b922da2fe9ad67cf24647ccfea19c68bba7dd83e71f240ad8b4" exitCode=0 Nov 23 06:53:17 crc kubenswrapper[4681]: I1123 06:53:17.753477 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edntd2" event={"ID":"aef5cbcb-ad98-499a-99e6-d8d8ae08881c","Type":"ContainerDied","Data":"dbc5296df4692b922da2fe9ad67cf24647ccfea19c68bba7dd83e71f240ad8b4"} Nov 23 06:53:17 crc kubenswrapper[4681]: I1123 06:53:17.753503 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edntd2" event={"ID":"aef5cbcb-ad98-499a-99e6-d8d8ae08881c","Type":"ContainerStarted","Data":"821b5ec1510e565434ef060a904544245ccd310cc1c76eb489c032e62e3bf834"} Nov 23 06:53:17 crc kubenswrapper[4681]: I1123 06:53:17.848918 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-btj59" Nov 23 06:53:19 crc kubenswrapper[4681]: I1123 06:53:19.777668 4681 generic.go:334] "Generic (PLEG): container finished" podID="aef5cbcb-ad98-499a-99e6-d8d8ae08881c" containerID="f962234034a61587ecaed73e42fe6813ed77195a8a373fb7d25714e0c9073d30" exitCode=0 Nov 23 06:53:19 crc kubenswrapper[4681]: I1123 06:53:19.777786 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edntd2" event={"ID":"aef5cbcb-ad98-499a-99e6-d8d8ae08881c","Type":"ContainerDied","Data":"f962234034a61587ecaed73e42fe6813ed77195a8a373fb7d25714e0c9073d30"} Nov 23 06:53:20 crc kubenswrapper[4681]: I1123 06:53:20.786880 4681 generic.go:334] "Generic (PLEG): container finished" podID="aef5cbcb-ad98-499a-99e6-d8d8ae08881c" containerID="63ad9bab022f325aae5416e339f818c8e9fad5b3c88ea59e91e5bd3a7dfb1c3a" exitCode=0 Nov 23 06:53:20 crc kubenswrapper[4681]: I1123 06:53:20.786944 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edntd2" event={"ID":"aef5cbcb-ad98-499a-99e6-d8d8ae08881c","Type":"ContainerDied","Data":"63ad9bab022f325aae5416e339f818c8e9fad5b3c88ea59e91e5bd3a7dfb1c3a"} Nov 23 06:53:21 crc kubenswrapper[4681]: I1123 06:53:21.973166 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edntd2"
Nov 23 06:53:22 crc kubenswrapper[4681]: I1123 06:53:22.136675 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aef5cbcb-ad98-499a-99e6-d8d8ae08881c-util\") pod \"aef5cbcb-ad98-499a-99e6-d8d8ae08881c\" (UID: \"aef5cbcb-ad98-499a-99e6-d8d8ae08881c\") "
Nov 23 06:53:22 crc kubenswrapper[4681]: I1123 06:53:22.136766 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lwkd\" (UniqueName: \"kubernetes.io/projected/aef5cbcb-ad98-499a-99e6-d8d8ae08881c-kube-api-access-9lwkd\") pod \"aef5cbcb-ad98-499a-99e6-d8d8ae08881c\" (UID: \"aef5cbcb-ad98-499a-99e6-d8d8ae08881c\") "
Nov 23 06:53:22 crc kubenswrapper[4681]: I1123 06:53:22.136801 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aef5cbcb-ad98-499a-99e6-d8d8ae08881c-bundle\") pod \"aef5cbcb-ad98-499a-99e6-d8d8ae08881c\" (UID: \"aef5cbcb-ad98-499a-99e6-d8d8ae08881c\") "
Nov 23 06:53:22 crc kubenswrapper[4681]: I1123 06:53:22.137554 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aef5cbcb-ad98-499a-99e6-d8d8ae08881c-bundle" (OuterVolumeSpecName: "bundle") pod "aef5cbcb-ad98-499a-99e6-d8d8ae08881c" (UID: "aef5cbcb-ad98-499a-99e6-d8d8ae08881c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 06:53:22 crc kubenswrapper[4681]: I1123 06:53:22.143868 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aef5cbcb-ad98-499a-99e6-d8d8ae08881c-kube-api-access-9lwkd" (OuterVolumeSpecName: "kube-api-access-9lwkd") pod "aef5cbcb-ad98-499a-99e6-d8d8ae08881c" (UID: "aef5cbcb-ad98-499a-99e6-d8d8ae08881c"). InnerVolumeSpecName "kube-api-access-9lwkd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:53:22 crc kubenswrapper[4681]: I1123 06:53:22.148568 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aef5cbcb-ad98-499a-99e6-d8d8ae08881c-util" (OuterVolumeSpecName: "util") pod "aef5cbcb-ad98-499a-99e6-d8d8ae08881c" (UID: "aef5cbcb-ad98-499a-99e6-d8d8ae08881c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
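Taken together with the MountVolume entries at 06:53:16 above, each volume of this pod walks a fixed ladder: operationExecutor.VerifyControllerAttachedVolume started, MountVolume started, and MountVolume.SetUp succeeded at pod creation; then operationExecutor.UnmountVolume started, UnmountVolume.TearDown succeeded, and the "Volume detached" records that follow below. A log-analysis sketch in Go that keeps the last phase seen per volume UniqueName, so a teardown that never reaches detach stands out; the regexes are written against the messages in this journal, not anything the kubelet itself exposes:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Last reconciler phase seen per volume UniqueName. The phase strings are
// the operationExecutor/operation_generator messages visible in this log;
// this is a reading aid for the journal, not kubelet code.
var (
	phase = regexp.MustCompile(`operationExecutor\.(?:VerifyControllerAttachedVolume|MountVolume|UnmountVolume) started|MountVolume\.SetUp succeeded|UnmountVolume\.TearDown succeeded|Volume detached`)
	volID = regexp.MustCompile(`kubernetes\.io/[a-z-]+/[^"\\]+`)
)

func main() {
	last := map[string]string{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1<<20), 1<<20) // journal lines can be very long
	for sc.Scan() {
		if p := phase.FindString(sc.Text()); p != "" {
			if v := volID.FindString(sc.Text()); v != "" {
				last[v] = p
			}
		}
	}
	// A volume whose final phase is not "Volume detached" (for a deleted
	// pod) is worth a closer look.
	for v, p := range last {
		fmt.Printf("%s\t%s\n", v, p)
	}
}
```

For the three volumes of pod aef5cbcb-ad98-499a-99e6-d8d8ae08881c the final phase printed should be Volume detached, which the records immediately below confirm.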
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:53:22 crc kubenswrapper[4681]: I1123 06:53:22.238180 4681 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aef5cbcb-ad98-499a-99e6-d8d8ae08881c-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 06:53:22 crc kubenswrapper[4681]: I1123 06:53:22.238206 4681 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aef5cbcb-ad98-499a-99e6-d8d8ae08881c-util\") on node \"crc\" DevicePath \"\"" Nov 23 06:53:22 crc kubenswrapper[4681]: I1123 06:53:22.238217 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lwkd\" (UniqueName: \"kubernetes.io/projected/aef5cbcb-ad98-499a-99e6-d8d8ae08881c-kube-api-access-9lwkd\") on node \"crc\" DevicePath \"\"" Nov 23 06:53:22 crc kubenswrapper[4681]: I1123 06:53:22.799322 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edntd2" event={"ID":"aef5cbcb-ad98-499a-99e6-d8d8ae08881c","Type":"ContainerDied","Data":"821b5ec1510e565434ef060a904544245ccd310cc1c76eb489c032e62e3bf834"} Nov 23 06:53:22 crc kubenswrapper[4681]: I1123 06:53:22.799373 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="821b5ec1510e565434ef060a904544245ccd310cc1c76eb489c032e62e3bf834" Nov 23 06:53:22 crc kubenswrapper[4681]: I1123 06:53:22.799447 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edntd2" Nov 23 06:53:23 crc kubenswrapper[4681]: I1123 06:53:23.362752 4681 scope.go:117] "RemoveContainer" containerID="3ee984309fa8ce33e23cdf6fc6b644a32685973fac9472dd105a0d6e45df0b48" Nov 23 06:53:23 crc kubenswrapper[4681]: I1123 06:53:23.377939 4681 scope.go:117] "RemoveContainer" containerID="85fe493c1777c5f063e67eac13f4c3417da679d1376c258907c8008b544bdbb4" Nov 23 06:53:23 crc kubenswrapper[4681]: I1123 06:53:23.807419 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2lhx5_4094b291-8b0b-43c0-96e9-f08a9ef53c8b/kube-multus/2.log" Nov 23 06:53:24 crc kubenswrapper[4681]: I1123 06:53:24.172115 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-vqgb7"] Nov 23 06:53:24 crc kubenswrapper[4681]: E1123 06:53:24.172560 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aef5cbcb-ad98-499a-99e6-d8d8ae08881c" containerName="extract" Nov 23 06:53:24 crc kubenswrapper[4681]: I1123 06:53:24.172646 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="aef5cbcb-ad98-499a-99e6-d8d8ae08881c" containerName="extract" Nov 23 06:53:24 crc kubenswrapper[4681]: E1123 06:53:24.172705 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aef5cbcb-ad98-499a-99e6-d8d8ae08881c" containerName="pull" Nov 23 06:53:24 crc kubenswrapper[4681]: I1123 06:53:24.172755 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="aef5cbcb-ad98-499a-99e6-d8d8ae08881c" containerName="pull" Nov 23 06:53:24 crc kubenswrapper[4681]: E1123 06:53:24.172807 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aef5cbcb-ad98-499a-99e6-d8d8ae08881c" containerName="util" Nov 23 06:53:24 crc kubenswrapper[4681]: I1123 06:53:24.172852 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="aef5cbcb-ad98-499a-99e6-d8d8ae08881c" containerName="util" Nov 23 06:53:24 crc 
kubenswrapper[4681]: I1123 06:53:24.172992 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="aef5cbcb-ad98-499a-99e6-d8d8ae08881c" containerName="extract" Nov 23 06:53:24 crc kubenswrapper[4681]: I1123 06:53:24.173424 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-vqgb7" Nov 23 06:53:24 crc kubenswrapper[4681]: I1123 06:53:24.175569 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-v9xjx" Nov 23 06:53:24 crc kubenswrapper[4681]: I1123 06:53:24.175793 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 23 06:53:24 crc kubenswrapper[4681]: I1123 06:53:24.176235 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 23 06:53:24 crc kubenswrapper[4681]: I1123 06:53:24.189397 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-vqgb7"] Nov 23 06:53:24 crc kubenswrapper[4681]: I1123 06:53:24.260217 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9nfg\" (UniqueName: \"kubernetes.io/projected/a1b40a4b-a20d-4184-9f61-b99e58bf9645-kube-api-access-c9nfg\") pod \"nmstate-operator-557fdffb88-vqgb7\" (UID: \"a1b40a4b-a20d-4184-9f61-b99e58bf9645\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-vqgb7" Nov 23 06:53:24 crc kubenswrapper[4681]: I1123 06:53:24.361308 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9nfg\" (UniqueName: \"kubernetes.io/projected/a1b40a4b-a20d-4184-9f61-b99e58bf9645-kube-api-access-c9nfg\") pod \"nmstate-operator-557fdffb88-vqgb7\" (UID: \"a1b40a4b-a20d-4184-9f61-b99e58bf9645\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-vqgb7" Nov 23 06:53:24 crc kubenswrapper[4681]: I1123 06:53:24.377516 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9nfg\" (UniqueName: \"kubernetes.io/projected/a1b40a4b-a20d-4184-9f61-b99e58bf9645-kube-api-access-c9nfg\") pod \"nmstate-operator-557fdffb88-vqgb7\" (UID: \"a1b40a4b-a20d-4184-9f61-b99e58bf9645\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-vqgb7" Nov 23 06:53:24 crc kubenswrapper[4681]: I1123 06:53:24.486581 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-vqgb7" Nov 23 06:53:24 crc kubenswrapper[4681]: I1123 06:53:24.747177 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-vqgb7"] Nov 23 06:53:24 crc kubenswrapper[4681]: W1123 06:53:24.756150 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1b40a4b_a20d_4184_9f61_b99e58bf9645.slice/crio-893d48e56a55e41994ac017f8c3df7daab799d1ae4eee090cbc445cd9971a5ce WatchSource:0}: Error finding container 893d48e56a55e41994ac017f8c3df7daab799d1ae4eee090cbc445cd9971a5ce: Status 404 returned error can't find the container with id 893d48e56a55e41994ac017f8c3df7daab799d1ae4eee090cbc445cd9971a5ce Nov 23 06:53:24 crc kubenswrapper[4681]: I1123 06:53:24.816090 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-vqgb7" event={"ID":"a1b40a4b-a20d-4184-9f61-b99e58bf9645","Type":"ContainerStarted","Data":"893d48e56a55e41994ac017f8c3df7daab799d1ae4eee090cbc445cd9971a5ce"} Nov 23 06:53:27 crc kubenswrapper[4681]: I1123 06:53:27.838256 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-vqgb7" event={"ID":"a1b40a4b-a20d-4184-9f61-b99e58bf9645","Type":"ContainerStarted","Data":"2fd26dd784e0fd9504a0515e50c75b5a838bfd9e0a4858d08ba0dfbb2f5e3f1a"} Nov 23 06:53:27 crc kubenswrapper[4681]: I1123 06:53:27.855319 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-vqgb7" podStartSLOduration=1.664185166 podStartE2EDuration="3.855290777s" podCreationTimestamp="2025-11-23 06:53:24 +0000 UTC" firstStartedPulling="2025-11-23 06:53:24.759157079 +0000 UTC m=+541.828666316" lastFinishedPulling="2025-11-23 06:53:26.950262691 +0000 UTC m=+544.019771927" observedRunningTime="2025-11-23 06:53:27.853661576 +0000 UTC m=+544.923170813" watchObservedRunningTime="2025-11-23 06:53:27.855290777 +0000 UTC m=+544.924800014" Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.670269 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-pmnfc"] Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.671176 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-pmnfc" Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.673157 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-zj6sd" Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.700502 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-jdtbw"] Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.700951 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-jdtbw" Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.707926 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.724770 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-pmnfc"] Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.727053 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgml5\" (UniqueName: \"kubernetes.io/projected/a709e862-b0a1-4bb5-a9cc-b218af164981-kube-api-access-zgml5\") pod \"nmstate-webhook-6b89b748d8-jdtbw\" (UID: \"a709e862-b0a1-4bb5-a9cc-b218af164981\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-jdtbw" Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.727093 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a709e862-b0a1-4bb5-a9cc-b218af164981-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-jdtbw\" (UID: \"a709e862-b0a1-4bb5-a9cc-b218af164981\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-jdtbw" Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.727142 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h4cs\" (UniqueName: \"kubernetes.io/projected/2e211d82-8d38-434c-bf7c-40afe485e021-kube-api-access-2h4cs\") pod \"nmstate-metrics-5dcf9c57c5-pmnfc\" (UID: \"2e211d82-8d38-434c-bf7c-40afe485e021\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-pmnfc" Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.728432 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-jdtbw"] Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.731829 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-799ww"] Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.732839 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-799ww" Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.816424 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-99p49"] Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.817216 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-99p49" Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.818965 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-tftzg" Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.819013 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.819361 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.822779 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-99p49"] Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.828258 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/80010a7d-0bae-4fae-8e99-6ad41057e134-dbus-socket\") pod \"nmstate-handler-799ww\" (UID: \"80010a7d-0bae-4fae-8e99-6ad41057e134\") " pod="openshift-nmstate/nmstate-handler-799ww" Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.828470 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgml5\" (UniqueName: \"kubernetes.io/projected/a709e862-b0a1-4bb5-a9cc-b218af164981-kube-api-access-zgml5\") pod \"nmstate-webhook-6b89b748d8-jdtbw\" (UID: \"a709e862-b0a1-4bb5-a9cc-b218af164981\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-jdtbw" Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.828527 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a709e862-b0a1-4bb5-a9cc-b218af164981-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-jdtbw\" (UID: \"a709e862-b0a1-4bb5-a9cc-b218af164981\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-jdtbw" Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.828567 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h4cs\" (UniqueName: \"kubernetes.io/projected/2e211d82-8d38-434c-bf7c-40afe485e021-kube-api-access-2h4cs\") pod \"nmstate-metrics-5dcf9c57c5-pmnfc\" (UID: \"2e211d82-8d38-434c-bf7c-40afe485e021\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-pmnfc" Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.828632 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6tmb\" (UniqueName: \"kubernetes.io/projected/8dd948d5-dd7b-4060-8789-17c2ea1ed7f7-kube-api-access-p6tmb\") pod \"nmstate-console-plugin-5874bd7bc5-99p49\" (UID: \"8dd948d5-dd7b-4060-8789-17c2ea1ed7f7\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-99p49" Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.828672 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8dd948d5-dd7b-4060-8789-17c2ea1ed7f7-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-99p49\" (UID: \"8dd948d5-dd7b-4060-8789-17c2ea1ed7f7\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-99p49" Nov 23 06:53:28 crc kubenswrapper[4681]: E1123 06:53:28.828703 4681 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 
06:53:28.828709 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8dd948d5-dd7b-4060-8789-17c2ea1ed7f7-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-99p49\" (UID: \"8dd948d5-dd7b-4060-8789-17c2ea1ed7f7\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-99p49" Nov 23 06:53:28 crc kubenswrapper[4681]: E1123 06:53:28.828789 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a709e862-b0a1-4bb5-a9cc-b218af164981-tls-key-pair podName:a709e862-b0a1-4bb5-a9cc-b218af164981 nodeName:}" failed. No retries permitted until 2025-11-23 06:53:29.328755982 +0000 UTC m=+546.398265220 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/a709e862-b0a1-4bb5-a9cc-b218af164981-tls-key-pair") pod "nmstate-webhook-6b89b748d8-jdtbw" (UID: "a709e862-b0a1-4bb5-a9cc-b218af164981") : secret "openshift-nmstate-webhook" not found Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.828949 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/80010a7d-0bae-4fae-8e99-6ad41057e134-nmstate-lock\") pod \"nmstate-handler-799ww\" (UID: \"80010a7d-0bae-4fae-8e99-6ad41057e134\") " pod="openshift-nmstate/nmstate-handler-799ww" Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.829015 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jxgm\" (UniqueName: \"kubernetes.io/projected/80010a7d-0bae-4fae-8e99-6ad41057e134-kube-api-access-5jxgm\") pod \"nmstate-handler-799ww\" (UID: \"80010a7d-0bae-4fae-8e99-6ad41057e134\") " pod="openshift-nmstate/nmstate-handler-799ww" Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.829141 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/80010a7d-0bae-4fae-8e99-6ad41057e134-ovs-socket\") pod \"nmstate-handler-799ww\" (UID: \"80010a7d-0bae-4fae-8e99-6ad41057e134\") " pod="openshift-nmstate/nmstate-handler-799ww" Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.856875 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h4cs\" (UniqueName: \"kubernetes.io/projected/2e211d82-8d38-434c-bf7c-40afe485e021-kube-api-access-2h4cs\") pod \"nmstate-metrics-5dcf9c57c5-pmnfc\" (UID: \"2e211d82-8d38-434c-bf7c-40afe485e021\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-pmnfc" Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.859623 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgml5\" (UniqueName: \"kubernetes.io/projected/a709e862-b0a1-4bb5-a9cc-b218af164981-kube-api-access-zgml5\") pod \"nmstate-webhook-6b89b748d8-jdtbw\" (UID: \"a709e862-b0a1-4bb5-a9cc-b218af164981\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-jdtbw" Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.930568 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/80010a7d-0bae-4fae-8e99-6ad41057e134-dbus-socket\") pod \"nmstate-handler-799ww\" (UID: \"80010a7d-0bae-4fae-8e99-6ad41057e134\") " pod="openshift-nmstate/nmstate-handler-799ww" Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.930661 4681 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-p6tmb\" (UniqueName: \"kubernetes.io/projected/8dd948d5-dd7b-4060-8789-17c2ea1ed7f7-kube-api-access-p6tmb\") pod \"nmstate-console-plugin-5874bd7bc5-99p49\" (UID: \"8dd948d5-dd7b-4060-8789-17c2ea1ed7f7\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-99p49" Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.930687 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8dd948d5-dd7b-4060-8789-17c2ea1ed7f7-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-99p49\" (UID: \"8dd948d5-dd7b-4060-8789-17c2ea1ed7f7\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-99p49" Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.930720 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8dd948d5-dd7b-4060-8789-17c2ea1ed7f7-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-99p49\" (UID: \"8dd948d5-dd7b-4060-8789-17c2ea1ed7f7\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-99p49" Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.930745 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/80010a7d-0bae-4fae-8e99-6ad41057e134-nmstate-lock\") pod \"nmstate-handler-799ww\" (UID: \"80010a7d-0bae-4fae-8e99-6ad41057e134\") " pod="openshift-nmstate/nmstate-handler-799ww" Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.930764 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jxgm\" (UniqueName: \"kubernetes.io/projected/80010a7d-0bae-4fae-8e99-6ad41057e134-kube-api-access-5jxgm\") pod \"nmstate-handler-799ww\" (UID: \"80010a7d-0bae-4fae-8e99-6ad41057e134\") " pod="openshift-nmstate/nmstate-handler-799ww" Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.930800 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/80010a7d-0bae-4fae-8e99-6ad41057e134-ovs-socket\") pod \"nmstate-handler-799ww\" (UID: \"80010a7d-0bae-4fae-8e99-6ad41057e134\") " pod="openshift-nmstate/nmstate-handler-799ww" Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.930888 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/80010a7d-0bae-4fae-8e99-6ad41057e134-ovs-socket\") pod \"nmstate-handler-799ww\" (UID: \"80010a7d-0bae-4fae-8e99-6ad41057e134\") " pod="openshift-nmstate/nmstate-handler-799ww" Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.931532 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/80010a7d-0bae-4fae-8e99-6ad41057e134-dbus-socket\") pod \"nmstate-handler-799ww\" (UID: \"80010a7d-0bae-4fae-8e99-6ad41057e134\") " pod="openshift-nmstate/nmstate-handler-799ww" Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.931959 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/80010a7d-0bae-4fae-8e99-6ad41057e134-nmstate-lock\") pod \"nmstate-handler-799ww\" (UID: \"80010a7d-0bae-4fae-8e99-6ad41057e134\") " pod="openshift-nmstate/nmstate-handler-799ww" Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.932936 4681 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8dd948d5-dd7b-4060-8789-17c2ea1ed7f7-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-99p49\" (UID: \"8dd948d5-dd7b-4060-8789-17c2ea1ed7f7\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-99p49" Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.934892 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8dd948d5-dd7b-4060-8789-17c2ea1ed7f7-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-99p49\" (UID: \"8dd948d5-dd7b-4060-8789-17c2ea1ed7f7\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-99p49" Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.951345 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6tmb\" (UniqueName: \"kubernetes.io/projected/8dd948d5-dd7b-4060-8789-17c2ea1ed7f7-kube-api-access-p6tmb\") pod \"nmstate-console-plugin-5874bd7bc5-99p49\" (UID: \"8dd948d5-dd7b-4060-8789-17c2ea1ed7f7\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-99p49" Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.952498 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jxgm\" (UniqueName: \"kubernetes.io/projected/80010a7d-0bae-4fae-8e99-6ad41057e134-kube-api-access-5jxgm\") pod \"nmstate-handler-799ww\" (UID: \"80010a7d-0bae-4fae-8e99-6ad41057e134\") " pod="openshift-nmstate/nmstate-handler-799ww" Nov 23 06:53:28 crc kubenswrapper[4681]: I1123 06:53:28.985864 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-pmnfc" Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.032134 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-79b7878cc7-ldp8f"] Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.033223 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-79b7878cc7-ldp8f" Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.043716 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-799ww" Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.050298 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-79b7878cc7-ldp8f"] Nov 23 06:53:29 crc kubenswrapper[4681]: W1123 06:53:29.077872 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80010a7d_0bae_4fae_8e99_6ad41057e134.slice/crio-fbd15efaa3063fb4700c69c89baba856ceb25f48a38ad6c5d5e23f5174586979 WatchSource:0}: Error finding container fbd15efaa3063fb4700c69c89baba856ceb25f48a38ad6c5d5e23f5174586979: Status 404 returned error can't find the container with id fbd15efaa3063fb4700c69c89baba856ceb25f48a38ad6c5d5e23f5174586979 Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.130433 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-99p49" Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.133574 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e709b66d-c8da-453c-a144-347f9414025b-service-ca\") pod \"console-79b7878cc7-ldp8f\" (UID: \"e709b66d-c8da-453c-a144-347f9414025b\") " pod="openshift-console/console-79b7878cc7-ldp8f" Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.133659 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e709b66d-c8da-453c-a144-347f9414025b-trusted-ca-bundle\") pod \"console-79b7878cc7-ldp8f\" (UID: \"e709b66d-c8da-453c-a144-347f9414025b\") " pod="openshift-console/console-79b7878cc7-ldp8f" Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.133688 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e709b66d-c8da-453c-a144-347f9414025b-oauth-serving-cert\") pod \"console-79b7878cc7-ldp8f\" (UID: \"e709b66d-c8da-453c-a144-347f9414025b\") " pod="openshift-console/console-79b7878cc7-ldp8f" Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.133815 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6phd\" (UniqueName: \"kubernetes.io/projected/e709b66d-c8da-453c-a144-347f9414025b-kube-api-access-b6phd\") pod \"console-79b7878cc7-ldp8f\" (UID: \"e709b66d-c8da-453c-a144-347f9414025b\") " pod="openshift-console/console-79b7878cc7-ldp8f" Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.133889 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e709b66d-c8da-453c-a144-347f9414025b-console-oauth-config\") pod \"console-79b7878cc7-ldp8f\" (UID: \"e709b66d-c8da-453c-a144-347f9414025b\") " pod="openshift-console/console-79b7878cc7-ldp8f" Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.134073 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e709b66d-c8da-453c-a144-347f9414025b-console-serving-cert\") pod \"console-79b7878cc7-ldp8f\" (UID: \"e709b66d-c8da-453c-a144-347f9414025b\") " pod="openshift-console/console-79b7878cc7-ldp8f" Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.135354 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e709b66d-c8da-453c-a144-347f9414025b-console-config\") pod \"console-79b7878cc7-ldp8f\" (UID: \"e709b66d-c8da-453c-a144-347f9414025b\") " pod="openshift-console/console-79b7878cc7-ldp8f" Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.237083 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e709b66d-c8da-453c-a144-347f9414025b-console-serving-cert\") pod \"console-79b7878cc7-ldp8f\" (UID: \"e709b66d-c8da-453c-a144-347f9414025b\") " pod="openshift-console/console-79b7878cc7-ldp8f" Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.237165 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" 
(UniqueName: \"kubernetes.io/configmap/e709b66d-c8da-453c-a144-347f9414025b-console-config\") pod \"console-79b7878cc7-ldp8f\" (UID: \"e709b66d-c8da-453c-a144-347f9414025b\") " pod="openshift-console/console-79b7878cc7-ldp8f" Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.237194 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e709b66d-c8da-453c-a144-347f9414025b-service-ca\") pod \"console-79b7878cc7-ldp8f\" (UID: \"e709b66d-c8da-453c-a144-347f9414025b\") " pod="openshift-console/console-79b7878cc7-ldp8f" Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.237226 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e709b66d-c8da-453c-a144-347f9414025b-trusted-ca-bundle\") pod \"console-79b7878cc7-ldp8f\" (UID: \"e709b66d-c8da-453c-a144-347f9414025b\") " pod="openshift-console/console-79b7878cc7-ldp8f" Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.237456 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e709b66d-c8da-453c-a144-347f9414025b-oauth-serving-cert\") pod \"console-79b7878cc7-ldp8f\" (UID: \"e709b66d-c8da-453c-a144-347f9414025b\") " pod="openshift-console/console-79b7878cc7-ldp8f" Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.237564 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6phd\" (UniqueName: \"kubernetes.io/projected/e709b66d-c8da-453c-a144-347f9414025b-kube-api-access-b6phd\") pod \"console-79b7878cc7-ldp8f\" (UID: \"e709b66d-c8da-453c-a144-347f9414025b\") " pod="openshift-console/console-79b7878cc7-ldp8f" Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.237624 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e709b66d-c8da-453c-a144-347f9414025b-console-oauth-config\") pod \"console-79b7878cc7-ldp8f\" (UID: \"e709b66d-c8da-453c-a144-347f9414025b\") " pod="openshift-console/console-79b7878cc7-ldp8f" Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.238540 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e709b66d-c8da-453c-a144-347f9414025b-service-ca\") pod \"console-79b7878cc7-ldp8f\" (UID: \"e709b66d-c8da-453c-a144-347f9414025b\") " pod="openshift-console/console-79b7878cc7-ldp8f" Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.238710 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e709b66d-c8da-453c-a144-347f9414025b-console-config\") pod \"console-79b7878cc7-ldp8f\" (UID: \"e709b66d-c8da-453c-a144-347f9414025b\") " pod="openshift-console/console-79b7878cc7-ldp8f" Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.239088 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e709b66d-c8da-453c-a144-347f9414025b-trusted-ca-bundle\") pod \"console-79b7878cc7-ldp8f\" (UID: \"e709b66d-c8da-453c-a144-347f9414025b\") " pod="openshift-console/console-79b7878cc7-ldp8f" Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.239124 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/e709b66d-c8da-453c-a144-347f9414025b-oauth-serving-cert\") pod \"console-79b7878cc7-ldp8f\" (UID: \"e709b66d-c8da-453c-a144-347f9414025b\") " pod="openshift-console/console-79b7878cc7-ldp8f" Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.242910 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e709b66d-c8da-453c-a144-347f9414025b-console-serving-cert\") pod \"console-79b7878cc7-ldp8f\" (UID: \"e709b66d-c8da-453c-a144-347f9414025b\") " pod="openshift-console/console-79b7878cc7-ldp8f" Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.248568 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e709b66d-c8da-453c-a144-347f9414025b-console-oauth-config\") pod \"console-79b7878cc7-ldp8f\" (UID: \"e709b66d-c8da-453c-a144-347f9414025b\") " pod="openshift-console/console-79b7878cc7-ldp8f" Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.262158 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6phd\" (UniqueName: \"kubernetes.io/projected/e709b66d-c8da-453c-a144-347f9414025b-kube-api-access-b6phd\") pod \"console-79b7878cc7-ldp8f\" (UID: \"e709b66d-c8da-453c-a144-347f9414025b\") " pod="openshift-console/console-79b7878cc7-ldp8f" Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.315568 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-99p49"] Nov 23 06:53:29 crc kubenswrapper[4681]: W1123 06:53:29.321120 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8dd948d5_dd7b_4060_8789_17c2ea1ed7f7.slice/crio-ff56bcf4e20176c7b08e14c899e825a165a8b268a7ad640958b965d60ca24d1b WatchSource:0}: Error finding container ff56bcf4e20176c7b08e14c899e825a165a8b268a7ad640958b965d60ca24d1b: Status 404 returned error can't find the container with id ff56bcf4e20176c7b08e14c899e825a165a8b268a7ad640958b965d60ca24d1b Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.338400 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a709e862-b0a1-4bb5-a9cc-b218af164981-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-jdtbw\" (UID: \"a709e862-b0a1-4bb5-a9cc-b218af164981\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-jdtbw" Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.341830 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a709e862-b0a1-4bb5-a9cc-b218af164981-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-jdtbw\" (UID: \"a709e862-b0a1-4bb5-a9cc-b218af164981\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-jdtbw" Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.345599 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-79b7878cc7-ldp8f" Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.410982 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-pmnfc"] Nov 23 06:53:29 crc kubenswrapper[4681]: W1123 06:53:29.426979 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e211d82_8d38_434c_bf7c_40afe485e021.slice/crio-8bd6e7b0b367a705c7188a2437fb7ffd37a714de103e1b42c32275f551e97d22 WatchSource:0}: Error finding container 8bd6e7b0b367a705c7188a2437fb7ffd37a714de103e1b42c32275f551e97d22: Status 404 returned error can't find the container with id 8bd6e7b0b367a705c7188a2437fb7ffd37a714de103e1b42c32275f551e97d22 Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.610958 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-jdtbw" Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.711705 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-79b7878cc7-ldp8f"] Nov 23 06:53:29 crc kubenswrapper[4681]: W1123 06:53:29.717087 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode709b66d_c8da_453c_a144_347f9414025b.slice/crio-d81e8e2ab5acf74dc172fd71d7ff8189cab4cc418c01c0afc2ddcd7a4b60626f WatchSource:0}: Error finding container d81e8e2ab5acf74dc172fd71d7ff8189cab4cc418c01c0afc2ddcd7a4b60626f: Status 404 returned error can't find the container with id d81e8e2ab5acf74dc172fd71d7ff8189cab4cc418c01c0afc2ddcd7a4b60626f Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.853229 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-pmnfc" event={"ID":"2e211d82-8d38-434c-bf7c-40afe485e021","Type":"ContainerStarted","Data":"8bd6e7b0b367a705c7188a2437fb7ffd37a714de103e1b42c32275f551e97d22"} Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.854291 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-799ww" event={"ID":"80010a7d-0bae-4fae-8e99-6ad41057e134","Type":"ContainerStarted","Data":"fbd15efaa3063fb4700c69c89baba856ceb25f48a38ad6c5d5e23f5174586979"} Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.855574 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-79b7878cc7-ldp8f" event={"ID":"e709b66d-c8da-453c-a144-347f9414025b","Type":"ContainerStarted","Data":"b0e38fa81341e40483299746c92e877c7e4d94c136cac1ebc83b5ff49d1e7288"} Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.855604 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-79b7878cc7-ldp8f" event={"ID":"e709b66d-c8da-453c-a144-347f9414025b","Type":"ContainerStarted","Data":"d81e8e2ab5acf74dc172fd71d7ff8189cab4cc418c01c0afc2ddcd7a4b60626f"} Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.856872 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-99p49" event={"ID":"8dd948d5-dd7b-4060-8789-17c2ea1ed7f7","Type":"ContainerStarted","Data":"ff56bcf4e20176c7b08e14c899e825a165a8b268a7ad640958b965d60ca24d1b"} Nov 23 06:53:29 crc kubenswrapper[4681]: I1123 06:53:29.875353 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-79b7878cc7-ldp8f" podStartSLOduration=0.875325747 
podStartE2EDuration="875.325747ms" podCreationTimestamp="2025-11-23 06:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:53:29.871536423 +0000 UTC m=+546.941045660" watchObservedRunningTime="2025-11-23 06:53:29.875325747 +0000 UTC m=+546.944834984" Nov 23 06:53:30 crc kubenswrapper[4681]: I1123 06:53:30.009938 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-jdtbw"] Nov 23 06:53:30 crc kubenswrapper[4681]: W1123 06:53:30.017664 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda709e862_b0a1_4bb5_a9cc_b218af164981.slice/crio-ff1293a85c6a63c10f9c115fa40c3b99aa6c4afc5586a03a8fbee3a583f1a8bd WatchSource:0}: Error finding container ff1293a85c6a63c10f9c115fa40c3b99aa6c4afc5586a03a8fbee3a583f1a8bd: Status 404 returned error can't find the container with id ff1293a85c6a63c10f9c115fa40c3b99aa6c4afc5586a03a8fbee3a583f1a8bd Nov 23 06:53:30 crc kubenswrapper[4681]: I1123 06:53:30.865699 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-jdtbw" event={"ID":"a709e862-b0a1-4bb5-a9cc-b218af164981","Type":"ContainerStarted","Data":"ff1293a85c6a63c10f9c115fa40c3b99aa6c4afc5586a03a8fbee3a583f1a8bd"} Nov 23 06:53:32 crc kubenswrapper[4681]: I1123 06:53:32.877841 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-pmnfc" event={"ID":"2e211d82-8d38-434c-bf7c-40afe485e021","Type":"ContainerStarted","Data":"99f429e548a925334c0fced3730cb303006d921ee65ca4f23f09570bec10df4d"} Nov 23 06:53:32 crc kubenswrapper[4681]: I1123 06:53:32.879437 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-jdtbw" event={"ID":"a709e862-b0a1-4bb5-a9cc-b218af164981","Type":"ContainerStarted","Data":"315829aa29e71727d054b8b221f27e76b89072107a382ec50cf2dfe7568737f1"} Nov 23 06:53:32 crc kubenswrapper[4681]: I1123 06:53:32.879749 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-jdtbw" Nov 23 06:53:32 crc kubenswrapper[4681]: I1123 06:53:32.881766 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-799ww" event={"ID":"80010a7d-0bae-4fae-8e99-6ad41057e134","Type":"ContainerStarted","Data":"56bc46acc0307c745a9378cf2df3c52308b77906c88a299e22d1efc92b2e08c3"} Nov 23 06:53:32 crc kubenswrapper[4681]: I1123 06:53:32.881882 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-799ww" Nov 23 06:53:32 crc kubenswrapper[4681]: I1123 06:53:32.888000 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-99p49" event={"ID":"8dd948d5-dd7b-4060-8789-17c2ea1ed7f7","Type":"ContainerStarted","Data":"1b4ef0b11bfabc54681ac21d8d5c2ff7c04ecfb64cba419a330d3fef2dd6f79b"} Nov 23 06:53:32 crc kubenswrapper[4681]: I1123 06:53:32.895717 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-jdtbw" podStartSLOduration=2.82813509 podStartE2EDuration="4.89569933s" podCreationTimestamp="2025-11-23 06:53:28 +0000 UTC" firstStartedPulling="2025-11-23 06:53:30.022130229 +0000 UTC m=+547.091639466" lastFinishedPulling="2025-11-23 06:53:32.089694469 +0000 UTC m=+549.159203706" 
observedRunningTime="2025-11-23 06:53:32.894555153 +0000 UTC m=+549.964064390" watchObservedRunningTime="2025-11-23 06:53:32.89569933 +0000 UTC m=+549.965208567" Nov 23 06:53:32 crc kubenswrapper[4681]: I1123 06:53:32.924930 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-799ww" podStartSLOduration=1.9151958869999999 podStartE2EDuration="4.924900785s" podCreationTimestamp="2025-11-23 06:53:28 +0000 UTC" firstStartedPulling="2025-11-23 06:53:29.081239598 +0000 UTC m=+546.150748835" lastFinishedPulling="2025-11-23 06:53:32.090944496 +0000 UTC m=+549.160453733" observedRunningTime="2025-11-23 06:53:32.911744992 +0000 UTC m=+549.981254239" watchObservedRunningTime="2025-11-23 06:53:32.924900785 +0000 UTC m=+549.994410022" Nov 23 06:53:32 crc kubenswrapper[4681]: I1123 06:53:32.928376 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-99p49" podStartSLOduration=2.169022836 podStartE2EDuration="4.928364586s" podCreationTimestamp="2025-11-23 06:53:28 +0000 UTC" firstStartedPulling="2025-11-23 06:53:29.32336753 +0000 UTC m=+546.392876767" lastFinishedPulling="2025-11-23 06:53:32.08270928 +0000 UTC m=+549.152218517" observedRunningTime="2025-11-23 06:53:32.922753817 +0000 UTC m=+549.992263054" watchObservedRunningTime="2025-11-23 06:53:32.928364586 +0000 UTC m=+549.997873823" Nov 23 06:53:34 crc kubenswrapper[4681]: I1123 06:53:34.914450 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-pmnfc" event={"ID":"2e211d82-8d38-434c-bf7c-40afe485e021","Type":"ContainerStarted","Data":"c6a65d45de0cfde373f746b5770fddc1bfb521417bc96324fbf98365adbcb1ba"} Nov 23 06:53:34 crc kubenswrapper[4681]: I1123 06:53:34.932119 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-pmnfc" podStartSLOduration=2.160595699 podStartE2EDuration="6.932092623s" podCreationTimestamp="2025-11-23 06:53:28 +0000 UTC" firstStartedPulling="2025-11-23 06:53:29.432377936 +0000 UTC m=+546.501887162" lastFinishedPulling="2025-11-23 06:53:34.203874849 +0000 UTC m=+551.273384086" observedRunningTime="2025-11-23 06:53:34.92857546 +0000 UTC m=+551.998084698" watchObservedRunningTime="2025-11-23 06:53:34.932092623 +0000 UTC m=+552.001601859" Nov 23 06:53:39 crc kubenswrapper[4681]: I1123 06:53:39.068391 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-799ww" Nov 23 06:53:39 crc kubenswrapper[4681]: I1123 06:53:39.346613 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-79b7878cc7-ldp8f" Nov 23 06:53:39 crc kubenswrapper[4681]: I1123 06:53:39.346681 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-79b7878cc7-ldp8f" Nov 23 06:53:39 crc kubenswrapper[4681]: I1123 06:53:39.351522 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-79b7878cc7-ldp8f" Nov 23 06:53:39 crc kubenswrapper[4681]: I1123 06:53:39.946075 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-79b7878cc7-ldp8f" Nov 23 06:53:39 crc kubenswrapper[4681]: I1123 06:53:39.980878 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-59rqt"] Nov 23 06:53:42 crc kubenswrapper[4681]: I1123 06:53:42.295497 4681 
Nov 23 06:53:42 crc kubenswrapper[4681]: I1123 06:53:42.295497 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 06:53:42 crc kubenswrapper[4681]: I1123 06:53:42.295888 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
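The pair of entries above report one failure at two levels: patch_prober.go logs the raw HTTP exchange, prober.go the probe verdict. "connect: connection refused" means nothing was accepting on 127.0.0.1:8798 at probe time, i.e. the machine-config-daemon process was down or restarting, as opposed to a timeout (process wedged) or an unhealthy response body (process up but failing). Roughly the same check done by hand, assuming Go; the URL comes from the log line, the timeout value is a guess, and this is not the kubelet's actual prober implementation:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// Re-run the liveness check that failed above. A "connection refused"
// error here reproduces the logged probeResult="failure" output.
func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		fmt.Println("probe failure:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe result:", resp.Status)
}
```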
pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hlprh" Nov 23 06:54:00 crc kubenswrapper[4681]: I1123 06:54:00.717919 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7f32437c-4004-462c-8d15-3c024b54e773-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hlprh\" (UID: \"7f32437c-4004-462c-8d15-3c024b54e773\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hlprh" Nov 23 06:54:00 crc kubenswrapper[4681]: I1123 06:54:00.717964 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7f32437c-4004-462c-8d15-3c024b54e773-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hlprh\" (UID: \"7f32437c-4004-462c-8d15-3c024b54e773\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hlprh" Nov 23 06:54:00 crc kubenswrapper[4681]: I1123 06:54:00.718361 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7f32437c-4004-462c-8d15-3c024b54e773-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hlprh\" (UID: \"7f32437c-4004-462c-8d15-3c024b54e773\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hlprh" Nov 23 06:54:00 crc kubenswrapper[4681]: I1123 06:54:00.718598 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7f32437c-4004-462c-8d15-3c024b54e773-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hlprh\" (UID: \"7f32437c-4004-462c-8d15-3c024b54e773\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hlprh" Nov 23 06:54:00 crc kubenswrapper[4681]: I1123 06:54:00.734221 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rqfp\" (UniqueName: \"kubernetes.io/projected/7f32437c-4004-462c-8d15-3c024b54e773-kube-api-access-4rqfp\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hlprh\" (UID: \"7f32437c-4004-462c-8d15-3c024b54e773\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hlprh" Nov 23 06:54:00 crc kubenswrapper[4681]: I1123 06:54:00.758975 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hlprh" Nov 23 06:54:01 crc kubenswrapper[4681]: I1123 06:54:01.137882 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hlprh"] Nov 23 06:54:02 crc kubenswrapper[4681]: I1123 06:54:02.080844 4681 generic.go:334] "Generic (PLEG): container finished" podID="7f32437c-4004-462c-8d15-3c024b54e773" containerID="07319eea49ba99a16edf6ef9b9c7cf22f12ba372ca6f8fc7131c045e9c4b45cb" exitCode=0 Nov 23 06:54:02 crc kubenswrapper[4681]: I1123 06:54:02.080882 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hlprh" event={"ID":"7f32437c-4004-462c-8d15-3c024b54e773","Type":"ContainerDied","Data":"07319eea49ba99a16edf6ef9b9c7cf22f12ba372ca6f8fc7131c045e9c4b45cb"} Nov 23 06:54:02 crc kubenswrapper[4681]: I1123 06:54:02.081114 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hlprh" event={"ID":"7f32437c-4004-462c-8d15-3c024b54e773","Type":"ContainerStarted","Data":"7e1e1d8c48f3eb22fc7730d690cecacb515eea6b6ac9f24e1e999d2c19209af6"} Nov 23 06:54:04 crc kubenswrapper[4681]: I1123 06:54:04.093799 4681 generic.go:334] "Generic (PLEG): container finished" podID="7f32437c-4004-462c-8d15-3c024b54e773" containerID="0c1c9bc52654e50cd5dc9008ee9a62da7e3a7e1c2c572001373746c0a0074601" exitCode=0 Nov 23 06:54:04 crc kubenswrapper[4681]: I1123 06:54:04.093894 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hlprh" event={"ID":"7f32437c-4004-462c-8d15-3c024b54e773","Type":"ContainerDied","Data":"0c1c9bc52654e50cd5dc9008ee9a62da7e3a7e1c2c572001373746c0a0074601"} Nov 23 06:54:05 crc kubenswrapper[4681]: I1123 06:54:05.018212 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-59rqt" podUID="c0e3f5d0-037c-48b9-888f-375c10e5f269" containerName="console" containerID="cri-o://c222f1c74fdfb1547033c2fa0f48043d2402aaac915faeb14cdfe4281f2ea38f" gracePeriod=15 Nov 23 06:54:05 crc kubenswrapper[4681]: I1123 06:54:05.101923 4681 generic.go:334] "Generic (PLEG): container finished" podID="7f32437c-4004-462c-8d15-3c024b54e773" containerID="92bf2d299a20d7d4e3dbdaa74ad58c0da8ad5aed4ecb815f396b3d4b25a3326a" exitCode=0 Nov 23 06:54:05 crc kubenswrapper[4681]: I1123 06:54:05.101973 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hlprh" event={"ID":"7f32437c-4004-462c-8d15-3c024b54e773","Type":"ContainerDied","Data":"92bf2d299a20d7d4e3dbdaa74ad58c0da8ad5aed4ecb815f396b3d4b25a3326a"} Nov 23 06:54:05 crc kubenswrapper[4681]: I1123 06:54:05.411483 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-59rqt_c0e3f5d0-037c-48b9-888f-375c10e5f269/console/0.log" Nov 23 06:54:05 crc kubenswrapper[4681]: I1123 06:54:05.412086 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-59rqt" Nov 23 06:54:05 crc kubenswrapper[4681]: I1123 06:54:05.576361 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c0e3f5d0-037c-48b9-888f-375c10e5f269-oauth-serving-cert\") pod \"c0e3f5d0-037c-48b9-888f-375c10e5f269\" (UID: \"c0e3f5d0-037c-48b9-888f-375c10e5f269\") " Nov 23 06:54:05 crc kubenswrapper[4681]: I1123 06:54:05.576417 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c0e3f5d0-037c-48b9-888f-375c10e5f269-console-oauth-config\") pod \"c0e3f5d0-037c-48b9-888f-375c10e5f269\" (UID: \"c0e3f5d0-037c-48b9-888f-375c10e5f269\") " Nov 23 06:54:05 crc kubenswrapper[4681]: I1123 06:54:05.576499 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c0e3f5d0-037c-48b9-888f-375c10e5f269-console-serving-cert\") pod \"c0e3f5d0-037c-48b9-888f-375c10e5f269\" (UID: \"c0e3f5d0-037c-48b9-888f-375c10e5f269\") " Nov 23 06:54:05 crc kubenswrapper[4681]: I1123 06:54:05.576524 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmnt9\" (UniqueName: \"kubernetes.io/projected/c0e3f5d0-037c-48b9-888f-375c10e5f269-kube-api-access-hmnt9\") pod \"c0e3f5d0-037c-48b9-888f-375c10e5f269\" (UID: \"c0e3f5d0-037c-48b9-888f-375c10e5f269\") " Nov 23 06:54:05 crc kubenswrapper[4681]: I1123 06:54:05.576589 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c0e3f5d0-037c-48b9-888f-375c10e5f269-console-config\") pod \"c0e3f5d0-037c-48b9-888f-375c10e5f269\" (UID: \"c0e3f5d0-037c-48b9-888f-375c10e5f269\") " Nov 23 06:54:05 crc kubenswrapper[4681]: I1123 06:54:05.576617 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c0e3f5d0-037c-48b9-888f-375c10e5f269-service-ca\") pod \"c0e3f5d0-037c-48b9-888f-375c10e5f269\" (UID: \"c0e3f5d0-037c-48b9-888f-375c10e5f269\") " Nov 23 06:54:05 crc kubenswrapper[4681]: I1123 06:54:05.576645 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0e3f5d0-037c-48b9-888f-375c10e5f269-trusted-ca-bundle\") pod \"c0e3f5d0-037c-48b9-888f-375c10e5f269\" (UID: \"c0e3f5d0-037c-48b9-888f-375c10e5f269\") " Nov 23 06:54:05 crc kubenswrapper[4681]: I1123 06:54:05.577273 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0e3f5d0-037c-48b9-888f-375c10e5f269-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "c0e3f5d0-037c-48b9-888f-375c10e5f269" (UID: "c0e3f5d0-037c-48b9-888f-375c10e5f269"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:54:05 crc kubenswrapper[4681]: I1123 06:54:05.577334 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0e3f5d0-037c-48b9-888f-375c10e5f269-service-ca" (OuterVolumeSpecName: "service-ca") pod "c0e3f5d0-037c-48b9-888f-375c10e5f269" (UID: "c0e3f5d0-037c-48b9-888f-375c10e5f269"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:54:05 crc kubenswrapper[4681]: I1123 06:54:05.577412 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0e3f5d0-037c-48b9-888f-375c10e5f269-console-config" (OuterVolumeSpecName: "console-config") pod "c0e3f5d0-037c-48b9-888f-375c10e5f269" (UID: "c0e3f5d0-037c-48b9-888f-375c10e5f269"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:54:05 crc kubenswrapper[4681]: I1123 06:54:05.577771 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0e3f5d0-037c-48b9-888f-375c10e5f269-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "c0e3f5d0-037c-48b9-888f-375c10e5f269" (UID: "c0e3f5d0-037c-48b9-888f-375c10e5f269"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:54:05 crc kubenswrapper[4681]: I1123 06:54:05.584304 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0e3f5d0-037c-48b9-888f-375c10e5f269-kube-api-access-hmnt9" (OuterVolumeSpecName: "kube-api-access-hmnt9") pod "c0e3f5d0-037c-48b9-888f-375c10e5f269" (UID: "c0e3f5d0-037c-48b9-888f-375c10e5f269"). InnerVolumeSpecName "kube-api-access-hmnt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:54:05 crc kubenswrapper[4681]: I1123 06:54:05.584883 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0e3f5d0-037c-48b9-888f-375c10e5f269-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "c0e3f5d0-037c-48b9-888f-375c10e5f269" (UID: "c0e3f5d0-037c-48b9-888f-375c10e5f269"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:54:05 crc kubenswrapper[4681]: I1123 06:54:05.585198 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0e3f5d0-037c-48b9-888f-375c10e5f269-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "c0e3f5d0-037c-48b9-888f-375c10e5f269" (UID: "c0e3f5d0-037c-48b9-888f-375c10e5f269"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:54:05 crc kubenswrapper[4681]: I1123 06:54:05.678915 4681 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c0e3f5d0-037c-48b9-888f-375c10e5f269-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:05 crc kubenswrapper[4681]: I1123 06:54:05.678941 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmnt9\" (UniqueName: \"kubernetes.io/projected/c0e3f5d0-037c-48b9-888f-375c10e5f269-kube-api-access-hmnt9\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:05 crc kubenswrapper[4681]: I1123 06:54:05.678954 4681 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c0e3f5d0-037c-48b9-888f-375c10e5f269-console-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:05 crc kubenswrapper[4681]: I1123 06:54:05.678962 4681 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c0e3f5d0-037c-48b9-888f-375c10e5f269-service-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:05 crc kubenswrapper[4681]: I1123 06:54:05.678970 4681 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0e3f5d0-037c-48b9-888f-375c10e5f269-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:05 crc kubenswrapper[4681]: I1123 06:54:05.678976 4681 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c0e3f5d0-037c-48b9-888f-375c10e5f269-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:05 crc kubenswrapper[4681]: I1123 06:54:05.678983 4681 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c0e3f5d0-037c-48b9-888f-375c10e5f269-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:06 crc kubenswrapper[4681]: I1123 06:54:06.108485 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-59rqt_c0e3f5d0-037c-48b9-888f-375c10e5f269/console/0.log" Nov 23 06:54:06 crc kubenswrapper[4681]: I1123 06:54:06.108543 4681 generic.go:334] "Generic (PLEG): container finished" podID="c0e3f5d0-037c-48b9-888f-375c10e5f269" containerID="c222f1c74fdfb1547033c2fa0f48043d2402aaac915faeb14cdfe4281f2ea38f" exitCode=2 Nov 23 06:54:06 crc kubenswrapper[4681]: I1123 06:54:06.108669 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-59rqt" Nov 23 06:54:06 crc kubenswrapper[4681]: I1123 06:54:06.108715 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-59rqt" event={"ID":"c0e3f5d0-037c-48b9-888f-375c10e5f269","Type":"ContainerDied","Data":"c222f1c74fdfb1547033c2fa0f48043d2402aaac915faeb14cdfe4281f2ea38f"} Nov 23 06:54:06 crc kubenswrapper[4681]: I1123 06:54:06.108748 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-59rqt" event={"ID":"c0e3f5d0-037c-48b9-888f-375c10e5f269","Type":"ContainerDied","Data":"c0f6b797d3bc8af3b8450d5bedb0e98b8f33289dd08b3450a8eb4e293c5117c7"} Nov 23 06:54:06 crc kubenswrapper[4681]: I1123 06:54:06.108769 4681 scope.go:117] "RemoveContainer" containerID="c222f1c74fdfb1547033c2fa0f48043d2402aaac915faeb14cdfe4281f2ea38f" Nov 23 06:54:06 crc kubenswrapper[4681]: I1123 06:54:06.126699 4681 scope.go:117] "RemoveContainer" containerID="c222f1c74fdfb1547033c2fa0f48043d2402aaac915faeb14cdfe4281f2ea38f" Nov 23 06:54:06 crc kubenswrapper[4681]: E1123 06:54:06.127115 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c222f1c74fdfb1547033c2fa0f48043d2402aaac915faeb14cdfe4281f2ea38f\": container with ID starting with c222f1c74fdfb1547033c2fa0f48043d2402aaac915faeb14cdfe4281f2ea38f not found: ID does not exist" containerID="c222f1c74fdfb1547033c2fa0f48043d2402aaac915faeb14cdfe4281f2ea38f" Nov 23 06:54:06 crc kubenswrapper[4681]: I1123 06:54:06.127218 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c222f1c74fdfb1547033c2fa0f48043d2402aaac915faeb14cdfe4281f2ea38f"} err="failed to get container status \"c222f1c74fdfb1547033c2fa0f48043d2402aaac915faeb14cdfe4281f2ea38f\": rpc error: code = NotFound desc = could not find container \"c222f1c74fdfb1547033c2fa0f48043d2402aaac915faeb14cdfe4281f2ea38f\": container with ID starting with c222f1c74fdfb1547033c2fa0f48043d2402aaac915faeb14cdfe4281f2ea38f not found: ID does not exist" Nov 23 06:54:06 crc kubenswrapper[4681]: I1123 06:54:06.138523 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-59rqt"] Nov 23 06:54:06 crc kubenswrapper[4681]: I1123 06:54:06.141774 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-59rqt"] Nov 23 06:54:06 crc kubenswrapper[4681]: I1123 06:54:06.304196 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hlprh" Nov 23 06:54:06 crc kubenswrapper[4681]: I1123 06:54:06.487403 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7f32437c-4004-462c-8d15-3c024b54e773-bundle\") pod \"7f32437c-4004-462c-8d15-3c024b54e773\" (UID: \"7f32437c-4004-462c-8d15-3c024b54e773\") " Nov 23 06:54:06 crc kubenswrapper[4681]: I1123 06:54:06.487482 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rqfp\" (UniqueName: \"kubernetes.io/projected/7f32437c-4004-462c-8d15-3c024b54e773-kube-api-access-4rqfp\") pod \"7f32437c-4004-462c-8d15-3c024b54e773\" (UID: \"7f32437c-4004-462c-8d15-3c024b54e773\") " Nov 23 06:54:06 crc kubenswrapper[4681]: I1123 06:54:06.487519 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7f32437c-4004-462c-8d15-3c024b54e773-util\") pod \"7f32437c-4004-462c-8d15-3c024b54e773\" (UID: \"7f32437c-4004-462c-8d15-3c024b54e773\") " Nov 23 06:54:06 crc kubenswrapper[4681]: I1123 06:54:06.488729 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f32437c-4004-462c-8d15-3c024b54e773-bundle" (OuterVolumeSpecName: "bundle") pod "7f32437c-4004-462c-8d15-3c024b54e773" (UID: "7f32437c-4004-462c-8d15-3c024b54e773"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:54:06 crc kubenswrapper[4681]: I1123 06:54:06.498502 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f32437c-4004-462c-8d15-3c024b54e773-util" (OuterVolumeSpecName: "util") pod "7f32437c-4004-462c-8d15-3c024b54e773" (UID: "7f32437c-4004-462c-8d15-3c024b54e773"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:54:06 crc kubenswrapper[4681]: I1123 06:54:06.504116 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f32437c-4004-462c-8d15-3c024b54e773-kube-api-access-4rqfp" (OuterVolumeSpecName: "kube-api-access-4rqfp") pod "7f32437c-4004-462c-8d15-3c024b54e773" (UID: "7f32437c-4004-462c-8d15-3c024b54e773"). InnerVolumeSpecName "kube-api-access-4rqfp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:54:06 crc kubenswrapper[4681]: I1123 06:54:06.588819 4681 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7f32437c-4004-462c-8d15-3c024b54e773-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:06 crc kubenswrapper[4681]: I1123 06:54:06.588843 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rqfp\" (UniqueName: \"kubernetes.io/projected/7f32437c-4004-462c-8d15-3c024b54e773-kube-api-access-4rqfp\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:06 crc kubenswrapper[4681]: I1123 06:54:06.588854 4681 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7f32437c-4004-462c-8d15-3c024b54e773-util\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:07 crc kubenswrapper[4681]: I1123 06:54:07.115476 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hlprh" event={"ID":"7f32437c-4004-462c-8d15-3c024b54e773","Type":"ContainerDied","Data":"7e1e1d8c48f3eb22fc7730d690cecacb515eea6b6ac9f24e1e999d2c19209af6"} Nov 23 06:54:07 crc kubenswrapper[4681]: I1123 06:54:07.115846 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e1e1d8c48f3eb22fc7730d690cecacb515eea6b6ac9f24e1e999d2c19209af6" Nov 23 06:54:07 crc kubenswrapper[4681]: I1123 06:54:07.115553 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hlprh" Nov 23 06:54:07 crc kubenswrapper[4681]: I1123 06:54:07.257062 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0e3f5d0-037c-48b9-888f-375c10e5f269" path="/var/lib/kubelet/pods/c0e3f5d0-037c-48b9-888f-375c10e5f269/volumes" Nov 23 06:54:12 crc kubenswrapper[4681]: I1123 06:54:12.295914 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 06:54:12 crc kubenswrapper[4681]: I1123 06:54:12.296557 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 06:54:12 crc kubenswrapper[4681]: I1123 06:54:12.296610 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" Nov 23 06:54:12 crc kubenswrapper[4681]: I1123 06:54:12.297179 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9fa8fec50b296212aef5b2ad5824bdfb0e0ff8b77199951e5391ad3ba5cad98c"} pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 06:54:12 crc kubenswrapper[4681]: I1123 06:54:12.297225 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" 
containerName="machine-config-daemon" containerID="cri-o://9fa8fec50b296212aef5b2ad5824bdfb0e0ff8b77199951e5391ad3ba5cad98c" gracePeriod=600 Nov 23 06:54:13 crc kubenswrapper[4681]: I1123 06:54:13.146475 4681 generic.go:334] "Generic (PLEG): container finished" podID="539dc58c-e752-43c8-bdef-af87528b76f3" containerID="9fa8fec50b296212aef5b2ad5824bdfb0e0ff8b77199951e5391ad3ba5cad98c" exitCode=0 Nov 23 06:54:13 crc kubenswrapper[4681]: I1123 06:54:13.146543 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerDied","Data":"9fa8fec50b296212aef5b2ad5824bdfb0e0ff8b77199951e5391ad3ba5cad98c"} Nov 23 06:54:13 crc kubenswrapper[4681]: I1123 06:54:13.146802 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerStarted","Data":"2a5abade0c31450ea18cad45860310cd823c68e49534b39a64b21095b8821bf8"} Nov 23 06:54:13 crc kubenswrapper[4681]: I1123 06:54:13.146829 4681 scope.go:117] "RemoveContainer" containerID="2de53e8387551d77fba4dfb5cb5ce0f311e59b152a70840563ac4923aa86b283" Nov 23 06:54:15 crc kubenswrapper[4681]: I1123 06:54:15.890263 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-68ccf796d5-s8nfg"] Nov 23 06:54:15 crc kubenswrapper[4681]: E1123 06:54:15.892539 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0e3f5d0-037c-48b9-888f-375c10e5f269" containerName="console" Nov 23 06:54:15 crc kubenswrapper[4681]: I1123 06:54:15.892656 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0e3f5d0-037c-48b9-888f-375c10e5f269" containerName="console" Nov 23 06:54:15 crc kubenswrapper[4681]: E1123 06:54:15.892721 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f32437c-4004-462c-8d15-3c024b54e773" containerName="extract" Nov 23 06:54:15 crc kubenswrapper[4681]: I1123 06:54:15.892776 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f32437c-4004-462c-8d15-3c024b54e773" containerName="extract" Nov 23 06:54:15 crc kubenswrapper[4681]: E1123 06:54:15.892843 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f32437c-4004-462c-8d15-3c024b54e773" containerName="util" Nov 23 06:54:15 crc kubenswrapper[4681]: I1123 06:54:15.892886 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f32437c-4004-462c-8d15-3c024b54e773" containerName="util" Nov 23 06:54:15 crc kubenswrapper[4681]: E1123 06:54:15.892930 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f32437c-4004-462c-8d15-3c024b54e773" containerName="pull" Nov 23 06:54:15 crc kubenswrapper[4681]: I1123 06:54:15.892974 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f32437c-4004-462c-8d15-3c024b54e773" containerName="pull" Nov 23 06:54:15 crc kubenswrapper[4681]: I1123 06:54:15.893173 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f32437c-4004-462c-8d15-3c024b54e773" containerName="extract" Nov 23 06:54:15 crc kubenswrapper[4681]: I1123 06:54:15.893236 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0e3f5d0-037c-48b9-888f-375c10e5f269" containerName="console" Nov 23 06:54:15 crc kubenswrapper[4681]: I1123 06:54:15.893997 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-68ccf796d5-s8nfg" Nov 23 06:54:15 crc kubenswrapper[4681]: I1123 06:54:15.896649 4681 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 23 06:54:15 crc kubenswrapper[4681]: I1123 06:54:15.896862 4681 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 23 06:54:15 crc kubenswrapper[4681]: I1123 06:54:15.897081 4681 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-vm5jr" Nov 23 06:54:15 crc kubenswrapper[4681]: I1123 06:54:15.897362 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 23 06:54:15 crc kubenswrapper[4681]: I1123 06:54:15.900897 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 23 06:54:15 crc kubenswrapper[4681]: I1123 06:54:15.917989 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/06458189-9765-4546-b2d4-63d51861c11e-apiservice-cert\") pod \"metallb-operator-controller-manager-68ccf796d5-s8nfg\" (UID: \"06458189-9765-4546-b2d4-63d51861c11e\") " pod="metallb-system/metallb-operator-controller-manager-68ccf796d5-s8nfg" Nov 23 06:54:15 crc kubenswrapper[4681]: I1123 06:54:15.918046 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5rhn\" (UniqueName: \"kubernetes.io/projected/06458189-9765-4546-b2d4-63d51861c11e-kube-api-access-k5rhn\") pod \"metallb-operator-controller-manager-68ccf796d5-s8nfg\" (UID: \"06458189-9765-4546-b2d4-63d51861c11e\") " pod="metallb-system/metallb-operator-controller-manager-68ccf796d5-s8nfg" Nov 23 06:54:15 crc kubenswrapper[4681]: I1123 06:54:15.918089 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/06458189-9765-4546-b2d4-63d51861c11e-webhook-cert\") pod \"metallb-operator-controller-manager-68ccf796d5-s8nfg\" (UID: \"06458189-9765-4546-b2d4-63d51861c11e\") " pod="metallb-system/metallb-operator-controller-manager-68ccf796d5-s8nfg" Nov 23 06:54:15 crc kubenswrapper[4681]: I1123 06:54:15.925073 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-68ccf796d5-s8nfg"] Nov 23 06:54:16 crc kubenswrapper[4681]: I1123 06:54:16.019431 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/06458189-9765-4546-b2d4-63d51861c11e-apiservice-cert\") pod \"metallb-operator-controller-manager-68ccf796d5-s8nfg\" (UID: \"06458189-9765-4546-b2d4-63d51861c11e\") " pod="metallb-system/metallb-operator-controller-manager-68ccf796d5-s8nfg" Nov 23 06:54:16 crc kubenswrapper[4681]: I1123 06:54:16.019523 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5rhn\" (UniqueName: \"kubernetes.io/projected/06458189-9765-4546-b2d4-63d51861c11e-kube-api-access-k5rhn\") pod \"metallb-operator-controller-manager-68ccf796d5-s8nfg\" (UID: \"06458189-9765-4546-b2d4-63d51861c11e\") " pod="metallb-system/metallb-operator-controller-manager-68ccf796d5-s8nfg" Nov 23 06:54:16 crc kubenswrapper[4681]: I1123 06:54:16.019555 
4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/06458189-9765-4546-b2d4-63d51861c11e-webhook-cert\") pod \"metallb-operator-controller-manager-68ccf796d5-s8nfg\" (UID: \"06458189-9765-4546-b2d4-63d51861c11e\") " pod="metallb-system/metallb-operator-controller-manager-68ccf796d5-s8nfg" Nov 23 06:54:16 crc kubenswrapper[4681]: I1123 06:54:16.034079 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/06458189-9765-4546-b2d4-63d51861c11e-apiservice-cert\") pod \"metallb-operator-controller-manager-68ccf796d5-s8nfg\" (UID: \"06458189-9765-4546-b2d4-63d51861c11e\") " pod="metallb-system/metallb-operator-controller-manager-68ccf796d5-s8nfg" Nov 23 06:54:16 crc kubenswrapper[4681]: I1123 06:54:16.042960 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/06458189-9765-4546-b2d4-63d51861c11e-webhook-cert\") pod \"metallb-operator-controller-manager-68ccf796d5-s8nfg\" (UID: \"06458189-9765-4546-b2d4-63d51861c11e\") " pod="metallb-system/metallb-operator-controller-manager-68ccf796d5-s8nfg" Nov 23 06:54:16 crc kubenswrapper[4681]: I1123 06:54:16.047042 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5rhn\" (UniqueName: \"kubernetes.io/projected/06458189-9765-4546-b2d4-63d51861c11e-kube-api-access-k5rhn\") pod \"metallb-operator-controller-manager-68ccf796d5-s8nfg\" (UID: \"06458189-9765-4546-b2d4-63d51861c11e\") " pod="metallb-system/metallb-operator-controller-manager-68ccf796d5-s8nfg" Nov 23 06:54:16 crc kubenswrapper[4681]: I1123 06:54:16.183825 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-d9fdc5f89-ftv8k"] Nov 23 06:54:16 crc kubenswrapper[4681]: I1123 06:54:16.184506 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-d9fdc5f89-ftv8k" Nov 23 06:54:16 crc kubenswrapper[4681]: I1123 06:54:16.187377 4681 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 23 06:54:16 crc kubenswrapper[4681]: I1123 06:54:16.187667 4681 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-556z5" Nov 23 06:54:16 crc kubenswrapper[4681]: I1123 06:54:16.187821 4681 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 23 06:54:16 crc kubenswrapper[4681]: I1123 06:54:16.199192 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-d9fdc5f89-ftv8k"] Nov 23 06:54:16 crc kubenswrapper[4681]: I1123 06:54:16.208730 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-68ccf796d5-s8nfg" Nov 23 06:54:16 crc kubenswrapper[4681]: I1123 06:54:16.223302 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fda5ab3e-b04c-4930-be6d-6bcae4c94c84-apiservice-cert\") pod \"metallb-operator-webhook-server-d9fdc5f89-ftv8k\" (UID: \"fda5ab3e-b04c-4930-be6d-6bcae4c94c84\") " pod="metallb-system/metallb-operator-webhook-server-d9fdc5f89-ftv8k" Nov 23 06:54:16 crc kubenswrapper[4681]: I1123 06:54:16.223932 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fda5ab3e-b04c-4930-be6d-6bcae4c94c84-webhook-cert\") pod \"metallb-operator-webhook-server-d9fdc5f89-ftv8k\" (UID: \"fda5ab3e-b04c-4930-be6d-6bcae4c94c84\") " pod="metallb-system/metallb-operator-webhook-server-d9fdc5f89-ftv8k" Nov 23 06:54:16 crc kubenswrapper[4681]: I1123 06:54:16.223994 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzp8c\" (UniqueName: \"kubernetes.io/projected/fda5ab3e-b04c-4930-be6d-6bcae4c94c84-kube-api-access-zzp8c\") pod \"metallb-operator-webhook-server-d9fdc5f89-ftv8k\" (UID: \"fda5ab3e-b04c-4930-be6d-6bcae4c94c84\") " pod="metallb-system/metallb-operator-webhook-server-d9fdc5f89-ftv8k" Nov 23 06:54:16 crc kubenswrapper[4681]: I1123 06:54:16.327071 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fda5ab3e-b04c-4930-be6d-6bcae4c94c84-apiservice-cert\") pod \"metallb-operator-webhook-server-d9fdc5f89-ftv8k\" (UID: \"fda5ab3e-b04c-4930-be6d-6bcae4c94c84\") " pod="metallb-system/metallb-operator-webhook-server-d9fdc5f89-ftv8k" Nov 23 06:54:16 crc kubenswrapper[4681]: I1123 06:54:16.327317 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fda5ab3e-b04c-4930-be6d-6bcae4c94c84-webhook-cert\") pod \"metallb-operator-webhook-server-d9fdc5f89-ftv8k\" (UID: \"fda5ab3e-b04c-4930-be6d-6bcae4c94c84\") " pod="metallb-system/metallb-operator-webhook-server-d9fdc5f89-ftv8k" Nov 23 06:54:16 crc kubenswrapper[4681]: I1123 06:54:16.327401 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzp8c\" (UniqueName: \"kubernetes.io/projected/fda5ab3e-b04c-4930-be6d-6bcae4c94c84-kube-api-access-zzp8c\") pod \"metallb-operator-webhook-server-d9fdc5f89-ftv8k\" (UID: \"fda5ab3e-b04c-4930-be6d-6bcae4c94c84\") " pod="metallb-system/metallb-operator-webhook-server-d9fdc5f89-ftv8k" Nov 23 06:54:16 crc kubenswrapper[4681]: I1123 06:54:16.336040 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fda5ab3e-b04c-4930-be6d-6bcae4c94c84-apiservice-cert\") pod \"metallb-operator-webhook-server-d9fdc5f89-ftv8k\" (UID: \"fda5ab3e-b04c-4930-be6d-6bcae4c94c84\") " pod="metallb-system/metallb-operator-webhook-server-d9fdc5f89-ftv8k" Nov 23 06:54:16 crc kubenswrapper[4681]: I1123 06:54:16.336573 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fda5ab3e-b04c-4930-be6d-6bcae4c94c84-webhook-cert\") pod \"metallb-operator-webhook-server-d9fdc5f89-ftv8k\" (UID: \"fda5ab3e-b04c-4930-be6d-6bcae4c94c84\") " 
pod="metallb-system/metallb-operator-webhook-server-d9fdc5f89-ftv8k" Nov 23 06:54:16 crc kubenswrapper[4681]: I1123 06:54:16.340404 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzp8c\" (UniqueName: \"kubernetes.io/projected/fda5ab3e-b04c-4930-be6d-6bcae4c94c84-kube-api-access-zzp8c\") pod \"metallb-operator-webhook-server-d9fdc5f89-ftv8k\" (UID: \"fda5ab3e-b04c-4930-be6d-6bcae4c94c84\") " pod="metallb-system/metallb-operator-webhook-server-d9fdc5f89-ftv8k" Nov 23 06:54:16 crc kubenswrapper[4681]: I1123 06:54:16.496955 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-d9fdc5f89-ftv8k" Nov 23 06:54:16 crc kubenswrapper[4681]: I1123 06:54:16.648516 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-68ccf796d5-s8nfg"] Nov 23 06:54:16 crc kubenswrapper[4681]: I1123 06:54:16.912768 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-d9fdc5f89-ftv8k"] Nov 23 06:54:16 crc kubenswrapper[4681]: W1123 06:54:16.921429 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfda5ab3e_b04c_4930_be6d_6bcae4c94c84.slice/crio-b5e8775ae2b0142376d5cd5dbfc924568549682967ee65190846b3e0244ab157 WatchSource:0}: Error finding container b5e8775ae2b0142376d5cd5dbfc924568549682967ee65190846b3e0244ab157: Status 404 returned error can't find the container with id b5e8775ae2b0142376d5cd5dbfc924568549682967ee65190846b3e0244ab157 Nov 23 06:54:17 crc kubenswrapper[4681]: I1123 06:54:17.171308 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-68ccf796d5-s8nfg" event={"ID":"06458189-9765-4546-b2d4-63d51861c11e","Type":"ContainerStarted","Data":"c9d61cd8c4893b0fb19bb72da2ea18d183d302f08940b9bf22832b116136a9e1"} Nov 23 06:54:17 crc kubenswrapper[4681]: I1123 06:54:17.172793 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-d9fdc5f89-ftv8k" event={"ID":"fda5ab3e-b04c-4930-be6d-6bcae4c94c84","Type":"ContainerStarted","Data":"b5e8775ae2b0142376d5cd5dbfc924568549682967ee65190846b3e0244ab157"} Nov 23 06:54:20 crc kubenswrapper[4681]: I1123 06:54:20.195204 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-68ccf796d5-s8nfg" event={"ID":"06458189-9765-4546-b2d4-63d51861c11e","Type":"ContainerStarted","Data":"a03c719a6b58f4db80791d7c148a0c968ca41ea36b213cc19c8e9a98e30524be"} Nov 23 06:54:20 crc kubenswrapper[4681]: I1123 06:54:20.195772 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-68ccf796d5-s8nfg" Nov 23 06:54:22 crc kubenswrapper[4681]: I1123 06:54:22.207570 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-d9fdc5f89-ftv8k" event={"ID":"fda5ab3e-b04c-4930-be6d-6bcae4c94c84","Type":"ContainerStarted","Data":"d978fb8513c8bc9a5d7599ef70a916bce447a2db6eb06108a46496cdac87a1a5"} Nov 23 06:54:22 crc kubenswrapper[4681]: I1123 06:54:22.208274 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-d9fdc5f89-ftv8k" Nov 23 06:54:22 crc kubenswrapper[4681]: I1123 06:54:22.236295 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/metallb-operator-controller-manager-68ccf796d5-s8nfg" podStartSLOduration=4.241226199 podStartE2EDuration="7.236280965s" podCreationTimestamp="2025-11-23 06:54:15 +0000 UTC" firstStartedPulling="2025-11-23 06:54:16.657991089 +0000 UTC m=+593.727500327" lastFinishedPulling="2025-11-23 06:54:19.653045856 +0000 UTC m=+596.722555093" observedRunningTime="2025-11-23 06:54:20.216444439 +0000 UTC m=+597.285953676" watchObservedRunningTime="2025-11-23 06:54:22.236280965 +0000 UTC m=+599.305790201" Nov 23 06:54:22 crc kubenswrapper[4681]: I1123 06:54:22.237897 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-d9fdc5f89-ftv8k" podStartSLOduration=1.953591634 podStartE2EDuration="6.237890078s" podCreationTimestamp="2025-11-23 06:54:16 +0000 UTC" firstStartedPulling="2025-11-23 06:54:16.923760668 +0000 UTC m=+593.993269906" lastFinishedPulling="2025-11-23 06:54:21.208059113 +0000 UTC m=+598.277568350" observedRunningTime="2025-11-23 06:54:22.23029577 +0000 UTC m=+599.299805007" watchObservedRunningTime="2025-11-23 06:54:22.237890078 +0000 UTC m=+599.307399314" Nov 23 06:54:36 crc kubenswrapper[4681]: I1123 06:54:36.501003 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-d9fdc5f89-ftv8k" Nov 23 06:54:56 crc kubenswrapper[4681]: I1123 06:54:56.212269 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-68ccf796d5-s8nfg" Nov 23 06:54:56 crc kubenswrapper[4681]: I1123 06:54:56.766412 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-gpxj5"] Nov 23 06:54:56 crc kubenswrapper[4681]: I1123 06:54:56.769641 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-gpxj5" Nov 23 06:54:56 crc kubenswrapper[4681]: I1123 06:54:56.774667 4681 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 23 06:54:56 crc kubenswrapper[4681]: I1123 06:54:56.774973 4681 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-pztpw" Nov 23 06:54:56 crc kubenswrapper[4681]: I1123 06:54:56.785087 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-fngwt"] Nov 23 06:54:56 crc kubenswrapper[4681]: I1123 06:54:56.785792 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-fngwt" Nov 23 06:54:56 crc kubenswrapper[4681]: I1123 06:54:56.790035 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 23 06:54:56 crc kubenswrapper[4681]: I1123 06:54:56.790319 4681 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 23 06:54:56 crc kubenswrapper[4681]: I1123 06:54:56.805220 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-fngwt"] Nov 23 06:54:56 crc kubenswrapper[4681]: I1123 06:54:56.869365 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-np2rx"] Nov 23 06:54:56 crc kubenswrapper[4681]: I1123 06:54:56.870289 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-np2rx" Nov 23 06:54:56 crc kubenswrapper[4681]: I1123 06:54:56.872344 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 23 06:54:56 crc kubenswrapper[4681]: I1123 06:54:56.872345 4681 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 23 06:54:56 crc kubenswrapper[4681]: I1123 06:54:56.872489 4681 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 23 06:54:56 crc kubenswrapper[4681]: I1123 06:54:56.873430 4681 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-gflm7" Nov 23 06:54:56 crc kubenswrapper[4681]: I1123 06:54:56.889500 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-8fdmk"] Nov 23 06:54:56 crc kubenswrapper[4681]: I1123 06:54:56.890575 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-8fdmk" Nov 23 06:54:56 crc kubenswrapper[4681]: I1123 06:54:56.895199 4681 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 23 06:54:56 crc kubenswrapper[4681]: I1123 06:54:56.901036 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-8fdmk"] Nov 23 06:54:56 crc kubenswrapper[4681]: I1123 06:54:56.901987 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j9j7\" (UniqueName: \"kubernetes.io/projected/130f7428-4773-40f6-bb7c-1ea171ee3c1a-kube-api-access-6j9j7\") pod \"frr-k8s-gpxj5\" (UID: \"130f7428-4773-40f6-bb7c-1ea171ee3c1a\") " pod="metallb-system/frr-k8s-gpxj5" Nov 23 06:54:56 crc kubenswrapper[4681]: I1123 06:54:56.902040 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/130f7428-4773-40f6-bb7c-1ea171ee3c1a-reloader\") pod \"frr-k8s-gpxj5\" (UID: \"130f7428-4773-40f6-bb7c-1ea171ee3c1a\") " pod="metallb-system/frr-k8s-gpxj5" Nov 23 06:54:56 crc kubenswrapper[4681]: I1123 06:54:56.902167 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/130f7428-4773-40f6-bb7c-1ea171ee3c1a-frr-conf\") pod \"frr-k8s-gpxj5\" (UID: \"130f7428-4773-40f6-bb7c-1ea171ee3c1a\") " pod="metallb-system/frr-k8s-gpxj5" Nov 23 06:54:56 crc kubenswrapper[4681]: I1123 06:54:56.902226 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6mgj\" (UniqueName: \"kubernetes.io/projected/25aeb7e3-74cf-4e18-8922-1a6fcb370858-kube-api-access-m6mgj\") pod \"frr-k8s-webhook-server-6998585d5-fngwt\" (UID: \"25aeb7e3-74cf-4e18-8922-1a6fcb370858\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-fngwt" Nov 23 06:54:56 crc kubenswrapper[4681]: I1123 06:54:56.902255 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/130f7428-4773-40f6-bb7c-1ea171ee3c1a-frr-sockets\") pod \"frr-k8s-gpxj5\" (UID: \"130f7428-4773-40f6-bb7c-1ea171ee3c1a\") " pod="metallb-system/frr-k8s-gpxj5" Nov 23 06:54:56 crc kubenswrapper[4681]: I1123 06:54:56.902308 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cert\" (UniqueName: \"kubernetes.io/secret/25aeb7e3-74cf-4e18-8922-1a6fcb370858-cert\") pod \"frr-k8s-webhook-server-6998585d5-fngwt\" (UID: \"25aeb7e3-74cf-4e18-8922-1a6fcb370858\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-fngwt" Nov 23 06:54:56 crc kubenswrapper[4681]: I1123 06:54:56.902370 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/130f7428-4773-40f6-bb7c-1ea171ee3c1a-frr-startup\") pod \"frr-k8s-gpxj5\" (UID: \"130f7428-4773-40f6-bb7c-1ea171ee3c1a\") " pod="metallb-system/frr-k8s-gpxj5" Nov 23 06:54:56 crc kubenswrapper[4681]: I1123 06:54:56.902769 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/130f7428-4773-40f6-bb7c-1ea171ee3c1a-metrics\") pod \"frr-k8s-gpxj5\" (UID: \"130f7428-4773-40f6-bb7c-1ea171ee3c1a\") " pod="metallb-system/frr-k8s-gpxj5" Nov 23 06:54:56 crc kubenswrapper[4681]: I1123 06:54:56.902807 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/130f7428-4773-40f6-bb7c-1ea171ee3c1a-metrics-certs\") pod \"frr-k8s-gpxj5\" (UID: \"130f7428-4773-40f6-bb7c-1ea171ee3c1a\") " pod="metallb-system/frr-k8s-gpxj5" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.003821 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j9j7\" (UniqueName: \"kubernetes.io/projected/130f7428-4773-40f6-bb7c-1ea171ee3c1a-kube-api-access-6j9j7\") pod \"frr-k8s-gpxj5\" (UID: \"130f7428-4773-40f6-bb7c-1ea171ee3c1a\") " pod="metallb-system/frr-k8s-gpxj5" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.003863 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/130f7428-4773-40f6-bb7c-1ea171ee3c1a-reloader\") pod \"frr-k8s-gpxj5\" (UID: \"130f7428-4773-40f6-bb7c-1ea171ee3c1a\") " pod="metallb-system/frr-k8s-gpxj5" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.003917 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkpb5\" (UniqueName: \"kubernetes.io/projected/68453a26-a89c-4911-bceb-6daceb37c320-kube-api-access-xkpb5\") pod \"controller-6c7b4b5f48-8fdmk\" (UID: \"68453a26-a89c-4911-bceb-6daceb37c320\") " pod="metallb-system/controller-6c7b4b5f48-8fdmk" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.003944 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a740d8dc-1173-4dce-aee9-a27d619cbf9e-metrics-certs\") pod \"speaker-np2rx\" (UID: \"a740d8dc-1173-4dce-aee9-a27d619cbf9e\") " pod="metallb-system/speaker-np2rx" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.004362 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/130f7428-4773-40f6-bb7c-1ea171ee3c1a-frr-conf\") pod \"frr-k8s-gpxj5\" (UID: \"130f7428-4773-40f6-bb7c-1ea171ee3c1a\") " pod="metallb-system/frr-k8s-gpxj5" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.004308 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/130f7428-4773-40f6-bb7c-1ea171ee3c1a-reloader\") pod \"frr-k8s-gpxj5\" (UID: 
\"130f7428-4773-40f6-bb7c-1ea171ee3c1a\") " pod="metallb-system/frr-k8s-gpxj5" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.004453 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a740d8dc-1173-4dce-aee9-a27d619cbf9e-metallb-excludel2\") pod \"speaker-np2rx\" (UID: \"a740d8dc-1173-4dce-aee9-a27d619cbf9e\") " pod="metallb-system/speaker-np2rx" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.004512 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6mgj\" (UniqueName: \"kubernetes.io/projected/25aeb7e3-74cf-4e18-8922-1a6fcb370858-kube-api-access-m6mgj\") pod \"frr-k8s-webhook-server-6998585d5-fngwt\" (UID: \"25aeb7e3-74cf-4e18-8922-1a6fcb370858\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-fngwt" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.004577 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a740d8dc-1173-4dce-aee9-a27d619cbf9e-memberlist\") pod \"speaker-np2rx\" (UID: \"a740d8dc-1173-4dce-aee9-a27d619cbf9e\") " pod="metallb-system/speaker-np2rx" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.004597 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/130f7428-4773-40f6-bb7c-1ea171ee3c1a-frr-sockets\") pod \"frr-k8s-gpxj5\" (UID: \"130f7428-4773-40f6-bb7c-1ea171ee3c1a\") " pod="metallb-system/frr-k8s-gpxj5" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.004823 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/130f7428-4773-40f6-bb7c-1ea171ee3c1a-frr-conf\") pod \"frr-k8s-gpxj5\" (UID: \"130f7428-4773-40f6-bb7c-1ea171ee3c1a\") " pod="metallb-system/frr-k8s-gpxj5" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.004982 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/25aeb7e3-74cf-4e18-8922-1a6fcb370858-cert\") pod \"frr-k8s-webhook-server-6998585d5-fngwt\" (UID: \"25aeb7e3-74cf-4e18-8922-1a6fcb370858\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-fngwt" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.005077 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/130f7428-4773-40f6-bb7c-1ea171ee3c1a-frr-sockets\") pod \"frr-k8s-gpxj5\" (UID: \"130f7428-4773-40f6-bb7c-1ea171ee3c1a\") " pod="metallb-system/frr-k8s-gpxj5" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.005911 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/130f7428-4773-40f6-bb7c-1ea171ee3c1a-frr-startup\") pod \"frr-k8s-gpxj5\" (UID: \"130f7428-4773-40f6-bb7c-1ea171ee3c1a\") " pod="metallb-system/frr-k8s-gpxj5" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.005943 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/130f7428-4773-40f6-bb7c-1ea171ee3c1a-metrics\") pod \"frr-k8s-gpxj5\" (UID: \"130f7428-4773-40f6-bb7c-1ea171ee3c1a\") " pod="metallb-system/frr-k8s-gpxj5" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.005971 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/68453a26-a89c-4911-bceb-6daceb37c320-cert\") pod \"controller-6c7b4b5f48-8fdmk\" (UID: \"68453a26-a89c-4911-bceb-6daceb37c320\") " pod="metallb-system/controller-6c7b4b5f48-8fdmk" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.005993 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68453a26-a89c-4911-bceb-6daceb37c320-metrics-certs\") pod \"controller-6c7b4b5f48-8fdmk\" (UID: \"68453a26-a89c-4911-bceb-6daceb37c320\") " pod="metallb-system/controller-6c7b4b5f48-8fdmk" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.006018 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szk6j\" (UniqueName: \"kubernetes.io/projected/a740d8dc-1173-4dce-aee9-a27d619cbf9e-kube-api-access-szk6j\") pod \"speaker-np2rx\" (UID: \"a740d8dc-1173-4dce-aee9-a27d619cbf9e\") " pod="metallb-system/speaker-np2rx" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.006064 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/130f7428-4773-40f6-bb7c-1ea171ee3c1a-metrics-certs\") pod \"frr-k8s-gpxj5\" (UID: \"130f7428-4773-40f6-bb7c-1ea171ee3c1a\") " pod="metallb-system/frr-k8s-gpxj5" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.006243 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/130f7428-4773-40f6-bb7c-1ea171ee3c1a-metrics\") pod \"frr-k8s-gpxj5\" (UID: \"130f7428-4773-40f6-bb7c-1ea171ee3c1a\") " pod="metallb-system/frr-k8s-gpxj5" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.006889 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/130f7428-4773-40f6-bb7c-1ea171ee3c1a-frr-startup\") pod \"frr-k8s-gpxj5\" (UID: \"130f7428-4773-40f6-bb7c-1ea171ee3c1a\") " pod="metallb-system/frr-k8s-gpxj5" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.019393 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/25aeb7e3-74cf-4e18-8922-1a6fcb370858-cert\") pod \"frr-k8s-webhook-server-6998585d5-fngwt\" (UID: \"25aeb7e3-74cf-4e18-8922-1a6fcb370858\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-fngwt" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.021929 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/130f7428-4773-40f6-bb7c-1ea171ee3c1a-metrics-certs\") pod \"frr-k8s-gpxj5\" (UID: \"130f7428-4773-40f6-bb7c-1ea171ee3c1a\") " pod="metallb-system/frr-k8s-gpxj5" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.022515 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j9j7\" (UniqueName: \"kubernetes.io/projected/130f7428-4773-40f6-bb7c-1ea171ee3c1a-kube-api-access-6j9j7\") pod \"frr-k8s-gpxj5\" (UID: \"130f7428-4773-40f6-bb7c-1ea171ee3c1a\") " pod="metallb-system/frr-k8s-gpxj5" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.028039 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6mgj\" (UniqueName: \"kubernetes.io/projected/25aeb7e3-74cf-4e18-8922-1a6fcb370858-kube-api-access-m6mgj\") pod 
\"frr-k8s-webhook-server-6998585d5-fngwt\" (UID: \"25aeb7e3-74cf-4e18-8922-1a6fcb370858\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-fngwt" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.094575 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-gpxj5" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.107693 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-fngwt" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.107802 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/68453a26-a89c-4911-bceb-6daceb37c320-cert\") pod \"controller-6c7b4b5f48-8fdmk\" (UID: \"68453a26-a89c-4911-bceb-6daceb37c320\") " pod="metallb-system/controller-6c7b4b5f48-8fdmk" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.108965 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68453a26-a89c-4911-bceb-6daceb37c320-metrics-certs\") pod \"controller-6c7b4b5f48-8fdmk\" (UID: \"68453a26-a89c-4911-bceb-6daceb37c320\") " pod="metallb-system/controller-6c7b4b5f48-8fdmk" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.109920 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szk6j\" (UniqueName: \"kubernetes.io/projected/a740d8dc-1173-4dce-aee9-a27d619cbf9e-kube-api-access-szk6j\") pod \"speaker-np2rx\" (UID: \"a740d8dc-1173-4dce-aee9-a27d619cbf9e\") " pod="metallb-system/speaker-np2rx" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.110994 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkpb5\" (UniqueName: \"kubernetes.io/projected/68453a26-a89c-4911-bceb-6daceb37c320-kube-api-access-xkpb5\") pod \"controller-6c7b4b5f48-8fdmk\" (UID: \"68453a26-a89c-4911-bceb-6daceb37c320\") " pod="metallb-system/controller-6c7b4b5f48-8fdmk" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.111085 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a740d8dc-1173-4dce-aee9-a27d619cbf9e-metrics-certs\") pod \"speaker-np2rx\" (UID: \"a740d8dc-1173-4dce-aee9-a27d619cbf9e\") " pod="metallb-system/speaker-np2rx" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.111191 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a740d8dc-1173-4dce-aee9-a27d619cbf9e-metallb-excludel2\") pod \"speaker-np2rx\" (UID: \"a740d8dc-1173-4dce-aee9-a27d619cbf9e\") " pod="metallb-system/speaker-np2rx" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.111268 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a740d8dc-1173-4dce-aee9-a27d619cbf9e-memberlist\") pod \"speaker-np2rx\" (UID: \"a740d8dc-1173-4dce-aee9-a27d619cbf9e\") " pod="metallb-system/speaker-np2rx" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.109948 4681 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 23 06:54:57 crc kubenswrapper[4681]: E1123 06:54:57.111452 4681 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 23 06:54:57 crc kubenswrapper[4681]: E1123 
06:54:57.112020 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a740d8dc-1173-4dce-aee9-a27d619cbf9e-memberlist podName:a740d8dc-1173-4dce-aee9-a27d619cbf9e nodeName:}" failed. No retries permitted until 2025-11-23 06:54:57.61199064 +0000 UTC m=+634.681499877 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a740d8dc-1173-4dce-aee9-a27d619cbf9e-memberlist") pod "speaker-np2rx" (UID: "a740d8dc-1173-4dce-aee9-a27d619cbf9e") : secret "metallb-memberlist" not found Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.112326 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a740d8dc-1173-4dce-aee9-a27d619cbf9e-metallb-excludel2\") pod \"speaker-np2rx\" (UID: \"a740d8dc-1173-4dce-aee9-a27d619cbf9e\") " pod="metallb-system/speaker-np2rx" Nov 23 06:54:57 crc kubenswrapper[4681]: E1123 06:54:57.112414 4681 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Nov 23 06:54:57 crc kubenswrapper[4681]: E1123 06:54:57.112483 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a740d8dc-1173-4dce-aee9-a27d619cbf9e-metrics-certs podName:a740d8dc-1173-4dce-aee9-a27d619cbf9e nodeName:}" failed. No retries permitted until 2025-11-23 06:54:57.612451519 +0000 UTC m=+634.681960757 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a740d8dc-1173-4dce-aee9-a27d619cbf9e-metrics-certs") pod "speaker-np2rx" (UID: "a740d8dc-1173-4dce-aee9-a27d619cbf9e") : secret "speaker-certs-secret" not found Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.123898 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68453a26-a89c-4911-bceb-6daceb37c320-metrics-certs\") pod \"controller-6c7b4b5f48-8fdmk\" (UID: \"68453a26-a89c-4911-bceb-6daceb37c320\") " pod="metallb-system/controller-6c7b4b5f48-8fdmk" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.132396 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/68453a26-a89c-4911-bceb-6daceb37c320-cert\") pod \"controller-6c7b4b5f48-8fdmk\" (UID: \"68453a26-a89c-4911-bceb-6daceb37c320\") " pod="metallb-system/controller-6c7b4b5f48-8fdmk" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.134485 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szk6j\" (UniqueName: \"kubernetes.io/projected/a740d8dc-1173-4dce-aee9-a27d619cbf9e-kube-api-access-szk6j\") pod \"speaker-np2rx\" (UID: \"a740d8dc-1173-4dce-aee9-a27d619cbf9e\") " pod="metallb-system/speaker-np2rx" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.147722 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkpb5\" (UniqueName: \"kubernetes.io/projected/68453a26-a89c-4911-bceb-6daceb37c320-kube-api-access-xkpb5\") pod \"controller-6c7b4b5f48-8fdmk\" (UID: \"68453a26-a89c-4911-bceb-6daceb37c320\") " pod="metallb-system/controller-6c7b4b5f48-8fdmk" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.209237 4681 util.go:30] "No sandbox for pod can be found. 
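The secret "metallb-memberlist" / "speaker-certs-secret" not-found errors above are a startup ordering race: the speaker pod's volumes reference secrets the operator has not yet created, so MountVolume.SetUp fails and nestedpendingoperations blocks retries for durationBeforeRetry (500ms here, growing on repeated failures); the successful SetUp entries that follow show both mounts going through once the secrets exist. A toy sketch of that retry-with-backoff shape, assuming a stub mount function rather than kubelet's volume code:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

var errNotFound = errors.New(`secret "metallb-memberlist" not found`)

// Stub for MountVolume.SetUp: fails until the referenced secret exists.
func setUp(secretExists bool) error {
	if !secretExists {
		return errNotFound
	}
	return nil
}

func main() {
	delay := 500 * time.Millisecond
	for attempt := 1; ; attempt++ {
		// Pretend the operator has created the secret by the second attempt.
		if err := setUp(attempt >= 2); err != nil {
			fmt.Printf("attempt %d: %v; no retries permitted for %v\n", attempt, err, delay)
			time.Sleep(delay)
			delay *= 2 // grows on repeated failures (kubelet caps this backoff)
			continue
		}
		fmt.Printf("attempt %d: MountVolume.SetUp succeeded\n", attempt)
		return
	}
}
```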
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-8fdmk" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.406063 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gpxj5" event={"ID":"130f7428-4773-40f6-bb7c-1ea171ee3c1a","Type":"ContainerStarted","Data":"8a501938727b335a082d9d7d84622c9755ad7f6e1379976cb90117aaa46800c6"} Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.411968 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-8fdmk"] Nov 23 06:54:57 crc kubenswrapper[4681]: W1123 06:54:57.423423 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68453a26_a89c_4911_bceb_6daceb37c320.slice/crio-9d5ffc7dddf1a33789f5e2a92723255c80800b949d319dbd25ea7bb2d08e7720 WatchSource:0}: Error finding container 9d5ffc7dddf1a33789f5e2a92723255c80800b949d319dbd25ea7bb2d08e7720: Status 404 returned error can't find the container with id 9d5ffc7dddf1a33789f5e2a92723255c80800b949d319dbd25ea7bb2d08e7720 Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.528665 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-fngwt"] Nov 23 06:54:57 crc kubenswrapper[4681]: W1123 06:54:57.534895 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25aeb7e3_74cf_4e18_8922_1a6fcb370858.slice/crio-8f26cfecf26d45fa77e00dddafbac8cbe4e89eeda3c9d8fada01ce967c9ee01e WatchSource:0}: Error finding container 8f26cfecf26d45fa77e00dddafbac8cbe4e89eeda3c9d8fada01ce967c9ee01e: Status 404 returned error can't find the container with id 8f26cfecf26d45fa77e00dddafbac8cbe4e89eeda3c9d8fada01ce967c9ee01e Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.624540 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a740d8dc-1173-4dce-aee9-a27d619cbf9e-metrics-certs\") pod \"speaker-np2rx\" (UID: \"a740d8dc-1173-4dce-aee9-a27d619cbf9e\") " pod="metallb-system/speaker-np2rx" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.624715 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a740d8dc-1173-4dce-aee9-a27d619cbf9e-memberlist\") pod \"speaker-np2rx\" (UID: \"a740d8dc-1173-4dce-aee9-a27d619cbf9e\") " pod="metallb-system/speaker-np2rx" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.631184 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a740d8dc-1173-4dce-aee9-a27d619cbf9e-metrics-certs\") pod \"speaker-np2rx\" (UID: \"a740d8dc-1173-4dce-aee9-a27d619cbf9e\") " pod="metallb-system/speaker-np2rx" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.631451 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a740d8dc-1173-4dce-aee9-a27d619cbf9e-memberlist\") pod \"speaker-np2rx\" (UID: \"a740d8dc-1173-4dce-aee9-a27d619cbf9e\") " pod="metallb-system/speaker-np2rx" Nov 23 06:54:57 crc kubenswrapper[4681]: I1123 06:54:57.781842 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-np2rx" Nov 23 06:54:57 crc kubenswrapper[4681]: W1123 06:54:57.805815 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda740d8dc_1173_4dce_aee9_a27d619cbf9e.slice/crio-a78f3576e509775bf88a6cfd8cebd74d9676195a03796af3b346897544572158 WatchSource:0}: Error finding container a78f3576e509775bf88a6cfd8cebd74d9676195a03796af3b346897544572158: Status 404 returned error can't find the container with id a78f3576e509775bf88a6cfd8cebd74d9676195a03796af3b346897544572158 Nov 23 06:54:58 crc kubenswrapper[4681]: I1123 06:54:58.413602 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-np2rx" event={"ID":"a740d8dc-1173-4dce-aee9-a27d619cbf9e","Type":"ContainerStarted","Data":"aadb441be9a30435569e5737dc36135a984de40a1a701965b47202c15bdc0d8f"} Nov 23 06:54:58 crc kubenswrapper[4681]: I1123 06:54:58.413892 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-np2rx" event={"ID":"a740d8dc-1173-4dce-aee9-a27d619cbf9e","Type":"ContainerStarted","Data":"0de36777fe8502fe9507a86e94f7896e91474a55510d7b70e3e7a166e0a3608a"} Nov 23 06:54:58 crc kubenswrapper[4681]: I1123 06:54:58.413912 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-np2rx" event={"ID":"a740d8dc-1173-4dce-aee9-a27d619cbf9e","Type":"ContainerStarted","Data":"a78f3576e509775bf88a6cfd8cebd74d9676195a03796af3b346897544572158"} Nov 23 06:54:58 crc kubenswrapper[4681]: I1123 06:54:58.414059 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-np2rx" Nov 23 06:54:58 crc kubenswrapper[4681]: I1123 06:54:58.414873 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-fngwt" event={"ID":"25aeb7e3-74cf-4e18-8922-1a6fcb370858","Type":"ContainerStarted","Data":"8f26cfecf26d45fa77e00dddafbac8cbe4e89eeda3c9d8fada01ce967c9ee01e"} Nov 23 06:54:58 crc kubenswrapper[4681]: I1123 06:54:58.417399 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-8fdmk" event={"ID":"68453a26-a89c-4911-bceb-6daceb37c320","Type":"ContainerStarted","Data":"34f8eec2c49ce053139429afb0d86a4e275365fa5095831f2f74997731271331"} Nov 23 06:54:58 crc kubenswrapper[4681]: I1123 06:54:58.417427 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-8fdmk" event={"ID":"68453a26-a89c-4911-bceb-6daceb37c320","Type":"ContainerStarted","Data":"94807c0c9b1ac54e07be77ca46e22d1a3ff0d1bc2cc3c90c63012625a6ea5995"} Nov 23 06:54:58 crc kubenswrapper[4681]: I1123 06:54:58.417440 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-8fdmk" event={"ID":"68453a26-a89c-4911-bceb-6daceb37c320","Type":"ContainerStarted","Data":"9d5ffc7dddf1a33789f5e2a92723255c80800b949d319dbd25ea7bb2d08e7720"} Nov 23 06:54:58 crc kubenswrapper[4681]: I1123 06:54:58.436743 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-np2rx" podStartSLOduration=2.436726583 podStartE2EDuration="2.436726583s" podCreationTimestamp="2025-11-23 06:54:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:54:58.434162939 +0000 UTC m=+635.503672176" watchObservedRunningTime="2025-11-23 06:54:58.436726583 +0000 UTC m=+635.506235819" Nov 23 06:54:58 crc 
kubenswrapper[4681]: I1123 06:54:58.458183 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-8fdmk" podStartSLOduration=2.458174408 podStartE2EDuration="2.458174408s" podCreationTimestamp="2025-11-23 06:54:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:54:58.454985827 +0000 UTC m=+635.524495064" watchObservedRunningTime="2025-11-23 06:54:58.458174408 +0000 UTC m=+635.527683645" Nov 23 06:54:59 crc kubenswrapper[4681]: I1123 06:54:59.430447 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-8fdmk" Nov 23 06:55:05 crc kubenswrapper[4681]: I1123 06:55:05.479994 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-fngwt" event={"ID":"25aeb7e3-74cf-4e18-8922-1a6fcb370858","Type":"ContainerStarted","Data":"8c6ed660e595199b505e9d51b89d8ccb22255bb1a274d12efc4af82a2471117f"} Nov 23 06:55:05 crc kubenswrapper[4681]: I1123 06:55:05.480706 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-fngwt" Nov 23 06:55:05 crc kubenswrapper[4681]: I1123 06:55:05.483385 4681 generic.go:334] "Generic (PLEG): container finished" podID="130f7428-4773-40f6-bb7c-1ea171ee3c1a" containerID="c2d4cbcbfc54778a8fdc7945845a4b2b4c7f5e306193f1856ceac3911aed99fb" exitCode=0 Nov 23 06:55:05 crc kubenswrapper[4681]: I1123 06:55:05.483440 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gpxj5" event={"ID":"130f7428-4773-40f6-bb7c-1ea171ee3c1a","Type":"ContainerDied","Data":"c2d4cbcbfc54778a8fdc7945845a4b2b4c7f5e306193f1856ceac3911aed99fb"} Nov 23 06:55:05 crc kubenswrapper[4681]: I1123 06:55:05.501289 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-fngwt" podStartSLOduration=2.019965356 podStartE2EDuration="9.50125914s" podCreationTimestamp="2025-11-23 06:54:56 +0000 UTC" firstStartedPulling="2025-11-23 06:54:57.537499908 +0000 UTC m=+634.607009145" lastFinishedPulling="2025-11-23 06:55:05.018793691 +0000 UTC m=+642.088302929" observedRunningTime="2025-11-23 06:55:05.496518873 +0000 UTC m=+642.566028110" watchObservedRunningTime="2025-11-23 06:55:05.50125914 +0000 UTC m=+642.570768377" Nov 23 06:55:06 crc kubenswrapper[4681]: I1123 06:55:06.492998 4681 generic.go:334] "Generic (PLEG): container finished" podID="130f7428-4773-40f6-bb7c-1ea171ee3c1a" containerID="f09c8ccf40f983f823e273c55fa4746be89e304827877b2af1c099106e2a4fca" exitCode=0 Nov 23 06:55:06 crc kubenswrapper[4681]: I1123 06:55:06.493121 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gpxj5" event={"ID":"130f7428-4773-40f6-bb7c-1ea171ee3c1a","Type":"ContainerDied","Data":"f09c8ccf40f983f823e273c55fa4746be89e304827877b2af1c099106e2a4fca"} Nov 23 06:55:07 crc kubenswrapper[4681]: I1123 06:55:07.213150 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-8fdmk" Nov 23 06:55:07 crc kubenswrapper[4681]: I1123 06:55:07.502232 4681 generic.go:334] "Generic (PLEG): container finished" podID="130f7428-4773-40f6-bb7c-1ea171ee3c1a" containerID="4f931c4e8c04d5ea72f43fb0115e4819519fde66c9b0a8c323343a3a78f79564" exitCode=0 Nov 23 06:55:07 crc kubenswrapper[4681]: I1123 06:55:07.502281 4681 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="metallb-system/frr-k8s-gpxj5" event={"ID":"130f7428-4773-40f6-bb7c-1ea171ee3c1a","Type":"ContainerDied","Data":"4f931c4e8c04d5ea72f43fb0115e4819519fde66c9b0a8c323343a3a78f79564"} Nov 23 06:55:07 crc kubenswrapper[4681]: I1123 06:55:07.787029 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-np2rx" Nov 23 06:55:08 crc kubenswrapper[4681]: I1123 06:55:08.518550 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gpxj5" event={"ID":"130f7428-4773-40f6-bb7c-1ea171ee3c1a","Type":"ContainerStarted","Data":"aa4e0a40facfc7662819573858c9743a12359d87911ad23db6f943686c220f14"} Nov 23 06:55:08 crc kubenswrapper[4681]: I1123 06:55:08.518978 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gpxj5" event={"ID":"130f7428-4773-40f6-bb7c-1ea171ee3c1a","Type":"ContainerStarted","Data":"866d40a24d65e44c117c03231856b45597803da937be3c4ba7f1858838ea0dcc"} Nov 23 06:55:08 crc kubenswrapper[4681]: I1123 06:55:08.518993 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gpxj5" event={"ID":"130f7428-4773-40f6-bb7c-1ea171ee3c1a","Type":"ContainerStarted","Data":"c951007370c8bd9a9d508d662f2d550cd0fe9cf466df4f578a8adc550b7cd99d"} Nov 23 06:55:08 crc kubenswrapper[4681]: I1123 06:55:08.519014 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-gpxj5" Nov 23 06:55:08 crc kubenswrapper[4681]: I1123 06:55:08.519025 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gpxj5" event={"ID":"130f7428-4773-40f6-bb7c-1ea171ee3c1a","Type":"ContainerStarted","Data":"1695f35d407b4425efd47138c128b8637fb5cac9f366768f206f1086fc013b59"} Nov 23 06:55:08 crc kubenswrapper[4681]: I1123 06:55:08.519038 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gpxj5" event={"ID":"130f7428-4773-40f6-bb7c-1ea171ee3c1a","Type":"ContainerStarted","Data":"1ba4bfd28d863ac831aa13db8d665da81fdda38206ed676e86a523e76ce06fd2"} Nov 23 06:55:08 crc kubenswrapper[4681]: I1123 06:55:08.519047 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gpxj5" event={"ID":"130f7428-4773-40f6-bb7c-1ea171ee3c1a","Type":"ContainerStarted","Data":"5117fb712d9d6916fcd205fa0554e4ae23ae10cb264b5858b623dbe10fa22fe5"} Nov 23 06:55:08 crc kubenswrapper[4681]: I1123 06:55:08.543761 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-gpxj5" podStartSLOduration=4.801765692 podStartE2EDuration="12.543740931s" podCreationTimestamp="2025-11-23 06:54:56 +0000 UTC" firstStartedPulling="2025-11-23 06:54:57.265262726 +0000 UTC m=+634.334771963" lastFinishedPulling="2025-11-23 06:55:05.007237965 +0000 UTC m=+642.076747202" observedRunningTime="2025-11-23 06:55:08.541867288 +0000 UTC m=+645.611376525" watchObservedRunningTime="2025-11-23 06:55:08.543740931 +0000 UTC m=+645.613250168" Nov 23 06:55:09 crc kubenswrapper[4681]: I1123 06:55:09.886944 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-2nffs"] Nov 23 06:55:09 crc kubenswrapper[4681]: I1123 06:55:09.888276 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-2nffs" Nov 23 06:55:09 crc kubenswrapper[4681]: I1123 06:55:09.893556 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-5xsb9" Nov 23 06:55:09 crc kubenswrapper[4681]: I1123 06:55:09.893867 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 23 06:55:09 crc kubenswrapper[4681]: I1123 06:55:09.893930 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 23 06:55:09 crc kubenswrapper[4681]: I1123 06:55:09.918024 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2nffs"] Nov 23 06:55:10 crc kubenswrapper[4681]: I1123 06:55:10.038047 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njghm\" (UniqueName: \"kubernetes.io/projected/55dca289-8aaa-45da-86d3-656b327cec12-kube-api-access-njghm\") pod \"openstack-operator-index-2nffs\" (UID: \"55dca289-8aaa-45da-86d3-656b327cec12\") " pod="openstack-operators/openstack-operator-index-2nffs" Nov 23 06:55:10 crc kubenswrapper[4681]: I1123 06:55:10.139901 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njghm\" (UniqueName: \"kubernetes.io/projected/55dca289-8aaa-45da-86d3-656b327cec12-kube-api-access-njghm\") pod \"openstack-operator-index-2nffs\" (UID: \"55dca289-8aaa-45da-86d3-656b327cec12\") " pod="openstack-operators/openstack-operator-index-2nffs" Nov 23 06:55:10 crc kubenswrapper[4681]: I1123 06:55:10.159773 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njghm\" (UniqueName: \"kubernetes.io/projected/55dca289-8aaa-45da-86d3-656b327cec12-kube-api-access-njghm\") pod \"openstack-operator-index-2nffs\" (UID: \"55dca289-8aaa-45da-86d3-656b327cec12\") " pod="openstack-operators/openstack-operator-index-2nffs" Nov 23 06:55:10 crc kubenswrapper[4681]: I1123 06:55:10.217650 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-2nffs" Nov 23 06:55:10 crc kubenswrapper[4681]: I1123 06:55:10.605182 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2nffs"] Nov 23 06:55:11 crc kubenswrapper[4681]: I1123 06:55:11.546673 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2nffs" event={"ID":"55dca289-8aaa-45da-86d3-656b327cec12","Type":"ContainerStarted","Data":"b55bebd42ae47e643f75025d963f288ef55417aa315065cd1030ed0eedd9f2e6"} Nov 23 06:55:12 crc kubenswrapper[4681]: I1123 06:55:12.096312 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-gpxj5" Nov 23 06:55:12 crc kubenswrapper[4681]: I1123 06:55:12.129572 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-gpxj5" Nov 23 06:55:12 crc kubenswrapper[4681]: I1123 06:55:12.553149 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2nffs" event={"ID":"55dca289-8aaa-45da-86d3-656b327cec12","Type":"ContainerStarted","Data":"84b2a5d8ac284b772ce86021df808a766c0d2dcc6398ca29a4799cd13cd32a4c"} Nov 23 06:55:12 crc kubenswrapper[4681]: I1123 06:55:12.568311 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-2nffs" podStartSLOduration=2.627506374 podStartE2EDuration="3.568293202s" podCreationTimestamp="2025-11-23 06:55:09 +0000 UTC" firstStartedPulling="2025-11-23 06:55:10.611064043 +0000 UTC m=+647.680573280" lastFinishedPulling="2025-11-23 06:55:11.55185087 +0000 UTC m=+648.621360108" observedRunningTime="2025-11-23 06:55:12.565541065 +0000 UTC m=+649.635050301" watchObservedRunningTime="2025-11-23 06:55:12.568293202 +0000 UTC m=+649.637802440" Nov 23 06:55:13 crc kubenswrapper[4681]: I1123 06:55:13.262647 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-2nffs"] Nov 23 06:55:13 crc kubenswrapper[4681]: I1123 06:55:13.868879 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-wf6rf"] Nov 23 06:55:13 crc kubenswrapper[4681]: I1123 06:55:13.870253 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-wf6rf" Nov 23 06:55:13 crc kubenswrapper[4681]: I1123 06:55:13.879494 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-wf6rf"] Nov 23 06:55:14 crc kubenswrapper[4681]: I1123 06:55:14.002805 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqpk9\" (UniqueName: \"kubernetes.io/projected/4a7da315-36a9-4287-92a3-3d34384e6805-kube-api-access-bqpk9\") pod \"openstack-operator-index-wf6rf\" (UID: \"4a7da315-36a9-4287-92a3-3d34384e6805\") " pod="openstack-operators/openstack-operator-index-wf6rf" Nov 23 06:55:14 crc kubenswrapper[4681]: I1123 06:55:14.104379 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqpk9\" (UniqueName: \"kubernetes.io/projected/4a7da315-36a9-4287-92a3-3d34384e6805-kube-api-access-bqpk9\") pod \"openstack-operator-index-wf6rf\" (UID: \"4a7da315-36a9-4287-92a3-3d34384e6805\") " pod="openstack-operators/openstack-operator-index-wf6rf" Nov 23 06:55:14 crc kubenswrapper[4681]: I1123 06:55:14.122105 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqpk9\" (UniqueName: \"kubernetes.io/projected/4a7da315-36a9-4287-92a3-3d34384e6805-kube-api-access-bqpk9\") pod \"openstack-operator-index-wf6rf\" (UID: \"4a7da315-36a9-4287-92a3-3d34384e6805\") " pod="openstack-operators/openstack-operator-index-wf6rf" Nov 23 06:55:14 crc kubenswrapper[4681]: I1123 06:55:14.183346 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-wf6rf" Nov 23 06:55:14 crc kubenswrapper[4681]: I1123 06:55:14.354540 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-wf6rf"] Nov 23 06:55:14 crc kubenswrapper[4681]: W1123 06:55:14.372172 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a7da315_36a9_4287_92a3_3d34384e6805.slice/crio-c59c22a60dcc0652e14255e73b44df5ba68a69c33e11c95917039b9748de00a5 WatchSource:0}: Error finding container c59c22a60dcc0652e14255e73b44df5ba68a69c33e11c95917039b9748de00a5: Status 404 returned error can't find the container with id c59c22a60dcc0652e14255e73b44df5ba68a69c33e11c95917039b9748de00a5 Nov 23 06:55:14 crc kubenswrapper[4681]: I1123 06:55:14.600414 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wf6rf" event={"ID":"4a7da315-36a9-4287-92a3-3d34384e6805","Type":"ContainerStarted","Data":"c59c22a60dcc0652e14255e73b44df5ba68a69c33e11c95917039b9748de00a5"} Nov 23 06:55:14 crc kubenswrapper[4681]: I1123 06:55:14.600635 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-2nffs" podUID="55dca289-8aaa-45da-86d3-656b327cec12" containerName="registry-server" containerID="cri-o://84b2a5d8ac284b772ce86021df808a766c0d2dcc6398ca29a4799cd13cd32a4c" gracePeriod=2 Nov 23 06:55:14 crc kubenswrapper[4681]: I1123 06:55:14.867063 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-2nffs" Nov 23 06:55:15 crc kubenswrapper[4681]: I1123 06:55:15.021117 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njghm\" (UniqueName: \"kubernetes.io/projected/55dca289-8aaa-45da-86d3-656b327cec12-kube-api-access-njghm\") pod \"55dca289-8aaa-45da-86d3-656b327cec12\" (UID: \"55dca289-8aaa-45da-86d3-656b327cec12\") " Nov 23 06:55:15 crc kubenswrapper[4681]: I1123 06:55:15.026396 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55dca289-8aaa-45da-86d3-656b327cec12-kube-api-access-njghm" (OuterVolumeSpecName: "kube-api-access-njghm") pod "55dca289-8aaa-45da-86d3-656b327cec12" (UID: "55dca289-8aaa-45da-86d3-656b327cec12"). InnerVolumeSpecName "kube-api-access-njghm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:55:15 crc kubenswrapper[4681]: I1123 06:55:15.122721 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njghm\" (UniqueName: \"kubernetes.io/projected/55dca289-8aaa-45da-86d3-656b327cec12-kube-api-access-njghm\") on node \"crc\" DevicePath \"\"" Nov 23 06:55:15 crc kubenswrapper[4681]: I1123 06:55:15.609814 4681 generic.go:334] "Generic (PLEG): container finished" podID="55dca289-8aaa-45da-86d3-656b327cec12" containerID="84b2a5d8ac284b772ce86021df808a766c0d2dcc6398ca29a4799cd13cd32a4c" exitCode=0 Nov 23 06:55:15 crc kubenswrapper[4681]: I1123 06:55:15.609895 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2nffs" Nov 23 06:55:15 crc kubenswrapper[4681]: I1123 06:55:15.609943 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2nffs" event={"ID":"55dca289-8aaa-45da-86d3-656b327cec12","Type":"ContainerDied","Data":"84b2a5d8ac284b772ce86021df808a766c0d2dcc6398ca29a4799cd13cd32a4c"} Nov 23 06:55:15 crc kubenswrapper[4681]: I1123 06:55:15.610655 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2nffs" event={"ID":"55dca289-8aaa-45da-86d3-656b327cec12","Type":"ContainerDied","Data":"b55bebd42ae47e643f75025d963f288ef55417aa315065cd1030ed0eedd9f2e6"} Nov 23 06:55:15 crc kubenswrapper[4681]: I1123 06:55:15.610715 4681 scope.go:117] "RemoveContainer" containerID="84b2a5d8ac284b772ce86021df808a766c0d2dcc6398ca29a4799cd13cd32a4c" Nov 23 06:55:15 crc kubenswrapper[4681]: I1123 06:55:15.616013 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wf6rf" event={"ID":"4a7da315-36a9-4287-92a3-3d34384e6805","Type":"ContainerStarted","Data":"1038c7401f98c062331313e5623ea532e59e75167d75da9e92e482ea45f09e86"} Nov 23 06:55:15 crc kubenswrapper[4681]: I1123 06:55:15.630091 4681 scope.go:117] "RemoveContainer" containerID="84b2a5d8ac284b772ce86021df808a766c0d2dcc6398ca29a4799cd13cd32a4c" Nov 23 06:55:15 crc kubenswrapper[4681]: I1123 06:55:15.630482 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-2nffs"] Nov 23 06:55:15 crc kubenswrapper[4681]: E1123 06:55:15.630700 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84b2a5d8ac284b772ce86021df808a766c0d2dcc6398ca29a4799cd13cd32a4c\": container with ID starting with 84b2a5d8ac284b772ce86021df808a766c0d2dcc6398ca29a4799cd13cd32a4c not found: ID does not exist" 
containerID="84b2a5d8ac284b772ce86021df808a766c0d2dcc6398ca29a4799cd13cd32a4c" Nov 23 06:55:15 crc kubenswrapper[4681]: I1123 06:55:15.630787 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84b2a5d8ac284b772ce86021df808a766c0d2dcc6398ca29a4799cd13cd32a4c"} err="failed to get container status \"84b2a5d8ac284b772ce86021df808a766c0d2dcc6398ca29a4799cd13cd32a4c\": rpc error: code = NotFound desc = could not find container \"84b2a5d8ac284b772ce86021df808a766c0d2dcc6398ca29a4799cd13cd32a4c\": container with ID starting with 84b2a5d8ac284b772ce86021df808a766c0d2dcc6398ca29a4799cd13cd32a4c not found: ID does not exist" Nov 23 06:55:15 crc kubenswrapper[4681]: I1123 06:55:15.638941 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-2nffs"] Nov 23 06:55:15 crc kubenswrapper[4681]: I1123 06:55:15.646955 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-wf6rf" podStartSLOduration=2.083286831 podStartE2EDuration="2.646928007s" podCreationTimestamp="2025-11-23 06:55:13 +0000 UTC" firstStartedPulling="2025-11-23 06:55:14.376923052 +0000 UTC m=+651.446432289" lastFinishedPulling="2025-11-23 06:55:14.940564227 +0000 UTC m=+652.010073465" observedRunningTime="2025-11-23 06:55:15.643427192 +0000 UTC m=+652.712936429" watchObservedRunningTime="2025-11-23 06:55:15.646928007 +0000 UTC m=+652.716437245" Nov 23 06:55:17 crc kubenswrapper[4681]: I1123 06:55:17.098993 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-gpxj5" Nov 23 06:55:17 crc kubenswrapper[4681]: I1123 06:55:17.111329 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-fngwt" Nov 23 06:55:17 crc kubenswrapper[4681]: I1123 06:55:17.259805 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55dca289-8aaa-45da-86d3-656b327cec12" path="/var/lib/kubelet/pods/55dca289-8aaa-45da-86d3-656b327cec12/volumes" Nov 23 06:55:24 crc kubenswrapper[4681]: I1123 06:55:24.183852 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-wf6rf" Nov 23 06:55:24 crc kubenswrapper[4681]: I1123 06:55:24.184597 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-wf6rf" Nov 23 06:55:24 crc kubenswrapper[4681]: I1123 06:55:24.209608 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-wf6rf" Nov 23 06:55:24 crc kubenswrapper[4681]: I1123 06:55:24.720121 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-wf6rf" Nov 23 06:55:25 crc kubenswrapper[4681]: I1123 06:55:25.706147 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b72876qsl6"] Nov 23 06:55:25 crc kubenswrapper[4681]: E1123 06:55:25.706813 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55dca289-8aaa-45da-86d3-656b327cec12" containerName="registry-server" Nov 23 06:55:25 crc kubenswrapper[4681]: I1123 06:55:25.706831 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="55dca289-8aaa-45da-86d3-656b327cec12" containerName="registry-server" Nov 23 06:55:25 crc kubenswrapper[4681]: I1123 06:55:25.706956 4681 
memory_manager.go:354] "RemoveStaleState removing state" podUID="55dca289-8aaa-45da-86d3-656b327cec12" containerName="registry-server" Nov 23 06:55:25 crc kubenswrapper[4681]: I1123 06:55:25.707906 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b72876qsl6" Nov 23 06:55:25 crc kubenswrapper[4681]: I1123 06:55:25.710176 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-7pvp6" Nov 23 06:55:25 crc kubenswrapper[4681]: I1123 06:55:25.728545 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b72876qsl6"] Nov 23 06:55:25 crc kubenswrapper[4681]: I1123 06:55:25.771647 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/97560a71-57c3-40b5-bd78-54c2ce60a002-util\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b72876qsl6\" (UID: \"97560a71-57c3-40b5-bd78-54c2ce60a002\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b72876qsl6" Nov 23 06:55:25 crc kubenswrapper[4681]: I1123 06:55:25.771699 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/97560a71-57c3-40b5-bd78-54c2ce60a002-bundle\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b72876qsl6\" (UID: \"97560a71-57c3-40b5-bd78-54c2ce60a002\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b72876qsl6" Nov 23 06:55:25 crc kubenswrapper[4681]: I1123 06:55:25.771750 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82vwk\" (UniqueName: \"kubernetes.io/projected/97560a71-57c3-40b5-bd78-54c2ce60a002-kube-api-access-82vwk\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b72876qsl6\" (UID: \"97560a71-57c3-40b5-bd78-54c2ce60a002\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b72876qsl6" Nov 23 06:55:25 crc kubenswrapper[4681]: I1123 06:55:25.873535 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/97560a71-57c3-40b5-bd78-54c2ce60a002-util\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b72876qsl6\" (UID: \"97560a71-57c3-40b5-bd78-54c2ce60a002\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b72876qsl6" Nov 23 06:55:25 crc kubenswrapper[4681]: I1123 06:55:25.873664 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/97560a71-57c3-40b5-bd78-54c2ce60a002-bundle\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b72876qsl6\" (UID: \"97560a71-57c3-40b5-bd78-54c2ce60a002\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b72876qsl6" Nov 23 06:55:25 crc kubenswrapper[4681]: I1123 06:55:25.873807 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82vwk\" (UniqueName: \"kubernetes.io/projected/97560a71-57c3-40b5-bd78-54c2ce60a002-kube-api-access-82vwk\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b72876qsl6\" (UID: \"97560a71-57c3-40b5-bd78-54c2ce60a002\") " 
pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b72876qsl6" Nov 23 06:55:25 crc kubenswrapper[4681]: I1123 06:55:25.874036 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/97560a71-57c3-40b5-bd78-54c2ce60a002-util\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b72876qsl6\" (UID: \"97560a71-57c3-40b5-bd78-54c2ce60a002\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b72876qsl6" Nov 23 06:55:25 crc kubenswrapper[4681]: I1123 06:55:25.874127 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/97560a71-57c3-40b5-bd78-54c2ce60a002-bundle\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b72876qsl6\" (UID: \"97560a71-57c3-40b5-bd78-54c2ce60a002\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b72876qsl6" Nov 23 06:55:25 crc kubenswrapper[4681]: I1123 06:55:25.892312 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82vwk\" (UniqueName: \"kubernetes.io/projected/97560a71-57c3-40b5-bd78-54c2ce60a002-kube-api-access-82vwk\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b72876qsl6\" (UID: \"97560a71-57c3-40b5-bd78-54c2ce60a002\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b72876qsl6" Nov 23 06:55:26 crc kubenswrapper[4681]: I1123 06:55:26.023584 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b72876qsl6" Nov 23 06:55:26 crc kubenswrapper[4681]: I1123 06:55:26.420431 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b72876qsl6"] Nov 23 06:55:26 crc kubenswrapper[4681]: I1123 06:55:26.709635 4681 generic.go:334] "Generic (PLEG): container finished" podID="97560a71-57c3-40b5-bd78-54c2ce60a002" containerID="0484fb1e5d87a4fe3b83ebb3e5406070fe988f9d50744cc9e5bfca8a0092e593" exitCode=0 Nov 23 06:55:26 crc kubenswrapper[4681]: I1123 06:55:26.709710 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b72876qsl6" event={"ID":"97560a71-57c3-40b5-bd78-54c2ce60a002","Type":"ContainerDied","Data":"0484fb1e5d87a4fe3b83ebb3e5406070fe988f9d50744cc9e5bfca8a0092e593"} Nov 23 06:55:26 crc kubenswrapper[4681]: I1123 06:55:26.709757 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b72876qsl6" event={"ID":"97560a71-57c3-40b5-bd78-54c2ce60a002","Type":"ContainerStarted","Data":"649b9cc8b39f859aa10449ff9dd845edeabfcdb1acfe5a9a5649df4e263c1f0c"} Nov 23 06:55:27 crc kubenswrapper[4681]: I1123 06:55:27.719686 4681 generic.go:334] "Generic (PLEG): container finished" podID="97560a71-57c3-40b5-bd78-54c2ce60a002" containerID="f03598a3d6975e27513d44d374e408758321ca7fa573810305e6a03d5c837147" exitCode=0 Nov 23 06:55:27 crc kubenswrapper[4681]: I1123 06:55:27.719768 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b72876qsl6" event={"ID":"97560a71-57c3-40b5-bd78-54c2ce60a002","Type":"ContainerDied","Data":"f03598a3d6975e27513d44d374e408758321ca7fa573810305e6a03d5c837147"} Nov 23 06:55:28 crc kubenswrapper[4681]: I1123 06:55:28.731544 4681 
generic.go:334] "Generic (PLEG): container finished" podID="97560a71-57c3-40b5-bd78-54c2ce60a002" containerID="6076411e8c5abc0f8d2f5b94d33df7ae270d793b55925a3a7031314317df85a9" exitCode=0 Nov 23 06:55:28 crc kubenswrapper[4681]: I1123 06:55:28.731595 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b72876qsl6" event={"ID":"97560a71-57c3-40b5-bd78-54c2ce60a002","Type":"ContainerDied","Data":"6076411e8c5abc0f8d2f5b94d33df7ae270d793b55925a3a7031314317df85a9"} Nov 23 06:55:30 crc kubenswrapper[4681]: I1123 06:55:30.109324 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b72876qsl6" Nov 23 06:55:30 crc kubenswrapper[4681]: I1123 06:55:30.150018 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/97560a71-57c3-40b5-bd78-54c2ce60a002-util\") pod \"97560a71-57c3-40b5-bd78-54c2ce60a002\" (UID: \"97560a71-57c3-40b5-bd78-54c2ce60a002\") " Nov 23 06:55:30 crc kubenswrapper[4681]: I1123 06:55:30.150085 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/97560a71-57c3-40b5-bd78-54c2ce60a002-bundle\") pod \"97560a71-57c3-40b5-bd78-54c2ce60a002\" (UID: \"97560a71-57c3-40b5-bd78-54c2ce60a002\") " Nov 23 06:55:30 crc kubenswrapper[4681]: I1123 06:55:30.150124 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82vwk\" (UniqueName: \"kubernetes.io/projected/97560a71-57c3-40b5-bd78-54c2ce60a002-kube-api-access-82vwk\") pod \"97560a71-57c3-40b5-bd78-54c2ce60a002\" (UID: \"97560a71-57c3-40b5-bd78-54c2ce60a002\") " Nov 23 06:55:30 crc kubenswrapper[4681]: I1123 06:55:30.151409 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97560a71-57c3-40b5-bd78-54c2ce60a002-bundle" (OuterVolumeSpecName: "bundle") pod "97560a71-57c3-40b5-bd78-54c2ce60a002" (UID: "97560a71-57c3-40b5-bd78-54c2ce60a002"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:55:30 crc kubenswrapper[4681]: I1123 06:55:30.157208 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97560a71-57c3-40b5-bd78-54c2ce60a002-kube-api-access-82vwk" (OuterVolumeSpecName: "kube-api-access-82vwk") pod "97560a71-57c3-40b5-bd78-54c2ce60a002" (UID: "97560a71-57c3-40b5-bd78-54c2ce60a002"). InnerVolumeSpecName "kube-api-access-82vwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:55:30 crc kubenswrapper[4681]: I1123 06:55:30.162157 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97560a71-57c3-40b5-bd78-54c2ce60a002-util" (OuterVolumeSpecName: "util") pod "97560a71-57c3-40b5-bd78-54c2ce60a002" (UID: "97560a71-57c3-40b5-bd78-54c2ce60a002"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:55:30 crc kubenswrapper[4681]: I1123 06:55:30.257629 4681 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/97560a71-57c3-40b5-bd78-54c2ce60a002-util\") on node \"crc\" DevicePath \"\"" Nov 23 06:55:30 crc kubenswrapper[4681]: I1123 06:55:30.257992 4681 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/97560a71-57c3-40b5-bd78-54c2ce60a002-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 06:55:30 crc kubenswrapper[4681]: I1123 06:55:30.258131 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82vwk\" (UniqueName: \"kubernetes.io/projected/97560a71-57c3-40b5-bd78-54c2ce60a002-kube-api-access-82vwk\") on node \"crc\" DevicePath \"\"" Nov 23 06:55:30 crc kubenswrapper[4681]: I1123 06:55:30.749300 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b72876qsl6" event={"ID":"97560a71-57c3-40b5-bd78-54c2ce60a002","Type":"ContainerDied","Data":"649b9cc8b39f859aa10449ff9dd845edeabfcdb1acfe5a9a5649df4e263c1f0c"} Nov 23 06:55:30 crc kubenswrapper[4681]: I1123 06:55:30.749659 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="649b9cc8b39f859aa10449ff9dd845edeabfcdb1acfe5a9a5649df4e263c1f0c" Nov 23 06:55:30 crc kubenswrapper[4681]: I1123 06:55:30.749409 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b72876qsl6" Nov 23 06:55:38 crc kubenswrapper[4681]: I1123 06:55:38.509110 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-8486c7f98b-rz99b"] Nov 23 06:55:38 crc kubenswrapper[4681]: E1123 06:55:38.509837 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97560a71-57c3-40b5-bd78-54c2ce60a002" containerName="extract" Nov 23 06:55:38 crc kubenswrapper[4681]: I1123 06:55:38.509852 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="97560a71-57c3-40b5-bd78-54c2ce60a002" containerName="extract" Nov 23 06:55:38 crc kubenswrapper[4681]: E1123 06:55:38.509864 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97560a71-57c3-40b5-bd78-54c2ce60a002" containerName="pull" Nov 23 06:55:38 crc kubenswrapper[4681]: I1123 06:55:38.509870 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="97560a71-57c3-40b5-bd78-54c2ce60a002" containerName="pull" Nov 23 06:55:38 crc kubenswrapper[4681]: E1123 06:55:38.509880 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97560a71-57c3-40b5-bd78-54c2ce60a002" containerName="util" Nov 23 06:55:38 crc kubenswrapper[4681]: I1123 06:55:38.509885 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="97560a71-57c3-40b5-bd78-54c2ce60a002" containerName="util" Nov 23 06:55:38 crc kubenswrapper[4681]: I1123 06:55:38.510003 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="97560a71-57c3-40b5-bd78-54c2ce60a002" containerName="extract" Nov 23 06:55:38 crc kubenswrapper[4681]: I1123 06:55:38.510649 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-rz99b" Nov 23 06:55:38 crc kubenswrapper[4681]: I1123 06:55:38.515027 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-gzr6v" Nov 23 06:55:38 crc kubenswrapper[4681]: I1123 06:55:38.544259 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-8486c7f98b-rz99b"] Nov 23 06:55:38 crc kubenswrapper[4681]: I1123 06:55:38.570789 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxl8t\" (UniqueName: \"kubernetes.io/projected/045d17a0-a4b9-489c-9b1e-9c30832357af-kube-api-access-cxl8t\") pod \"openstack-operator-controller-operator-8486c7f98b-rz99b\" (UID: \"045d17a0-a4b9-489c-9b1e-9c30832357af\") " pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-rz99b" Nov 23 06:55:38 crc kubenswrapper[4681]: I1123 06:55:38.671974 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxl8t\" (UniqueName: \"kubernetes.io/projected/045d17a0-a4b9-489c-9b1e-9c30832357af-kube-api-access-cxl8t\") pod \"openstack-operator-controller-operator-8486c7f98b-rz99b\" (UID: \"045d17a0-a4b9-489c-9b1e-9c30832357af\") " pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-rz99b" Nov 23 06:55:38 crc kubenswrapper[4681]: I1123 06:55:38.699029 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxl8t\" (UniqueName: \"kubernetes.io/projected/045d17a0-a4b9-489c-9b1e-9c30832357af-kube-api-access-cxl8t\") pod \"openstack-operator-controller-operator-8486c7f98b-rz99b\" (UID: \"045d17a0-a4b9-489c-9b1e-9c30832357af\") " pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-rz99b" Nov 23 06:55:38 crc kubenswrapper[4681]: I1123 06:55:38.825315 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-rz99b" Nov 23 06:55:39 crc kubenswrapper[4681]: I1123 06:55:39.235573 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-8486c7f98b-rz99b"] Nov 23 06:55:39 crc kubenswrapper[4681]: W1123 06:55:39.240513 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod045d17a0_a4b9_489c_9b1e_9c30832357af.slice/crio-b16ca8794c2b2e5f8a3c64b7230b9b452c567e1bdabcdc473b6ffd55bc7cc1eb WatchSource:0}: Error finding container b16ca8794c2b2e5f8a3c64b7230b9b452c567e1bdabcdc473b6ffd55bc7cc1eb: Status 404 returned error can't find the container with id b16ca8794c2b2e5f8a3c64b7230b9b452c567e1bdabcdc473b6ffd55bc7cc1eb Nov 23 06:55:39 crc kubenswrapper[4681]: I1123 06:55:39.805576 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-rz99b" event={"ID":"045d17a0-a4b9-489c-9b1e-9c30832357af","Type":"ContainerStarted","Data":"b16ca8794c2b2e5f8a3c64b7230b9b452c567e1bdabcdc473b6ffd55bc7cc1eb"} Nov 23 06:55:43 crc kubenswrapper[4681]: I1123 06:55:43.838088 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-rz99b" event={"ID":"045d17a0-a4b9-489c-9b1e-9c30832357af","Type":"ContainerStarted","Data":"8f7ad06dc6949abfb06e19ce002510dd60f270e9ef93616fc2ca0eb9921a2925"} Nov 23 06:55:45 crc kubenswrapper[4681]: I1123 06:55:45.855253 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-rz99b" event={"ID":"045d17a0-a4b9-489c-9b1e-9c30832357af","Type":"ContainerStarted","Data":"fd6479b08e6d7d01e4ff64d9416ebbec4fd3910a429f878b032c102fdd7fdb7a"} Nov 23 06:55:45 crc kubenswrapper[4681]: I1123 06:55:45.855965 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-rz99b" Nov 23 06:55:45 crc kubenswrapper[4681]: I1123 06:55:45.885354 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-rz99b" podStartSLOduration=1.402162382 podStartE2EDuration="7.88533982s" podCreationTimestamp="2025-11-23 06:55:38 +0000 UTC" firstStartedPulling="2025-11-23 06:55:39.242581784 +0000 UTC m=+676.312091021" lastFinishedPulling="2025-11-23 06:55:45.725759223 +0000 UTC m=+682.795268459" observedRunningTime="2025-11-23 06:55:45.88254462 +0000 UTC m=+682.952053858" watchObservedRunningTime="2025-11-23 06:55:45.88533982 +0000 UTC m=+682.954849056" Nov 23 06:55:48 crc kubenswrapper[4681]: I1123 06:55:48.828320 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-rz99b" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.371180 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7768f8c84f-w5ggm"] Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.373178 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-w5ggm" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.375419 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-52zxm" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.384305 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-d7xkk"] Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.385751 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-d7xkk" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.389810 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-cg8g7" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.408276 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-56dfb6b67f-46t6j"] Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.409501 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-46t6j" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.411927 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-d7xkk"] Nov 23 06:56:05 crc kubenswrapper[4681]: W1123 06:56:05.412630 4681 reflector.go:561] object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-9mrs4": failed to list *v1.Secret: secrets "designate-operator-controller-manager-dockercfg-9mrs4" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack-operators": no relationship found between node 'crc' and this object Nov 23 06:56:05 crc kubenswrapper[4681]: E1123 06:56:05.412670 4681 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"designate-operator-controller-manager-dockercfg-9mrs4\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"designate-operator-controller-manager-dockercfg-9mrs4\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack-operators\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.434947 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-56dfb6b67f-46t6j"] Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.439920 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8667fbf6f6-lrqst"] Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.441295 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-lrqst" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.450526 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-2bm7g" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.465326 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-bf4c6585d-lrs4z"] Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.467099 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-lrs4z" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.476177 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-chnjl" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.478942 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-bf4c6585d-lrs4z"] Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.489512 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7768f8c84f-w5ggm"] Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.491148 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d86b44686-4w9ff"] Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.492433 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-4w9ff" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.495127 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-xp7dd" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.508207 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8667fbf6f6-lrqst"] Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.523007 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d86b44686-4w9ff"] Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.525536 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-769d9c7585-fwp7j"] Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.526524 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-fwp7j" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.537225 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-vb9v6" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.537406 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.539884 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfzzr\" (UniqueName: \"kubernetes.io/projected/1ca66090-ac50-407c-baaa-f7cb3caa82f1-kube-api-access-bfzzr\") pod \"glance-operator-controller-manager-8667fbf6f6-lrqst\" (UID: \"1ca66090-ac50-407c-baaa-f7cb3caa82f1\") " pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-lrqst" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.539954 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzbvb\" (UniqueName: \"kubernetes.io/projected/a005c11d-acdf-48a8-8221-8fa148272da7-kube-api-access-rzbvb\") pod \"cinder-operator-controller-manager-6d8fd67bf7-d7xkk\" (UID: \"a005c11d-acdf-48a8-8221-8fa148272da7\") " pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-d7xkk" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.539981 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shj9l\" (UniqueName: \"kubernetes.io/projected/af32e286-ab9d-4a19-98ae-3ad944d30031-kube-api-access-shj9l\") pod \"designate-operator-controller-manager-56dfb6b67f-46t6j\" (UID: \"af32e286-ab9d-4a19-98ae-3ad944d30031\") " pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-46t6j" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.540094 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6vnk\" (UniqueName: \"kubernetes.io/projected/34ce0225-164a-45e4-b5c5-a8c5e9aa5c1a-kube-api-access-s6vnk\") pod \"barbican-operator-controller-manager-7768f8c84f-w5ggm\" (UID: \"34ce0225-164a-45e4-b5c5-a8c5e9aa5c1a\") " pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-w5ggm" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.564510 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-769d9c7585-fwp7j"] Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.567026 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5c75d7c94b-t2qsx"] Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.567958 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-t2qsx" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.570438 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-bzczb" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.579950 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5c75d7c94b-t2qsx"] Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.587887 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7879fb76fd-8dqqb"] Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.606894 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7879fb76fd-8dqqb"] Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.606991 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-8dqqb" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.610448 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7bb88cb858-njlbw"] Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.611433 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-njlbw" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.614249 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-plxf5" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.615056 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-88rch" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.644238 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7bb88cb858-njlbw"] Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.646330 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6vnk\" (UniqueName: \"kubernetes.io/projected/34ce0225-164a-45e4-b5c5-a8c5e9aa5c1a-kube-api-access-s6vnk\") pod \"barbican-operator-controller-manager-7768f8c84f-w5ggm\" (UID: \"34ce0225-164a-45e4-b5c5-a8c5e9aa5c1a\") " pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-w5ggm" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.646384 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfzzr\" (UniqueName: \"kubernetes.io/projected/1ca66090-ac50-407c-baaa-f7cb3caa82f1-kube-api-access-bfzzr\") pod \"glance-operator-controller-manager-8667fbf6f6-lrqst\" (UID: \"1ca66090-ac50-407c-baaa-f7cb3caa82f1\") " pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-lrqst" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.646505 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr5dg\" (UniqueName: \"kubernetes.io/projected/22041f9c-9d77-4e36-ad68-c08f5fb4dd1a-kube-api-access-rr5dg\") pod \"heat-operator-controller-manager-bf4c6585d-lrs4z\" (UID: \"22041f9c-9d77-4e36-ad68-c08f5fb4dd1a\") " pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-lrs4z" Nov 23 06:56:05 crc 
kubenswrapper[4681]: I1123 06:56:05.646625 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjjvj\" (UniqueName: \"kubernetes.io/projected/23363330-2571-416c-b67a-2f6c40a32f25-kube-api-access-fjjvj\") pod \"infra-operator-controller-manager-769d9c7585-fwp7j\" (UID: \"23363330-2571-416c-b67a-2f6c40a32f25\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-fwp7j" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.646705 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzbvb\" (UniqueName: \"kubernetes.io/projected/a005c11d-acdf-48a8-8221-8fa148272da7-kube-api-access-rzbvb\") pod \"cinder-operator-controller-manager-6d8fd67bf7-d7xkk\" (UID: \"a005c11d-acdf-48a8-8221-8fa148272da7\") " pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-d7xkk" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.646772 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shj9l\" (UniqueName: \"kubernetes.io/projected/af32e286-ab9d-4a19-98ae-3ad944d30031-kube-api-access-shj9l\") pod \"designate-operator-controller-manager-56dfb6b67f-46t6j\" (UID: \"af32e286-ab9d-4a19-98ae-3ad944d30031\") " pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-46t6j" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.646845 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/23363330-2571-416c-b67a-2f6c40a32f25-cert\") pod \"infra-operator-controller-manager-769d9c7585-fwp7j\" (UID: \"23363330-2571-416c-b67a-2f6c40a32f25\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-fwp7j" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.646955 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp8vc\" (UniqueName: \"kubernetes.io/projected/502c86ef-be84-47f4-af12-fc3cff24f444-kube-api-access-rp8vc\") pod \"horizon-operator-controller-manager-5d86b44686-4w9ff\" (UID: \"502c86ef-be84-47f4-af12-fc3cff24f444\") " pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-4w9ff" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.680670 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-h72sb"] Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.682350 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-h72sb" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.685555 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6vnk\" (UniqueName: \"kubernetes.io/projected/34ce0225-164a-45e4-b5c5-a8c5e9aa5c1a-kube-api-access-s6vnk\") pod \"barbican-operator-controller-manager-7768f8c84f-w5ggm\" (UID: \"34ce0225-164a-45e4-b5c5-a8c5e9aa5c1a\") " pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-w5ggm" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.719678 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfzzr\" (UniqueName: \"kubernetes.io/projected/1ca66090-ac50-407c-baaa-f7cb3caa82f1-kube-api-access-bfzzr\") pod \"glance-operator-controller-manager-8667fbf6f6-lrqst\" (UID: \"1ca66090-ac50-407c-baaa-f7cb3caa82f1\") " pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-lrqst" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.719722 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzbvb\" (UniqueName: \"kubernetes.io/projected/a005c11d-acdf-48a8-8221-8fa148272da7-kube-api-access-rzbvb\") pod \"cinder-operator-controller-manager-6d8fd67bf7-d7xkk\" (UID: \"a005c11d-acdf-48a8-8221-8fa148272da7\") " pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-d7xkk" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.720838 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-w5ggm" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.710761 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shj9l\" (UniqueName: \"kubernetes.io/projected/af32e286-ab9d-4a19-98ae-3ad944d30031-kube-api-access-shj9l\") pod \"designate-operator-controller-manager-56dfb6b67f-46t6j\" (UID: \"af32e286-ab9d-4a19-98ae-3ad944d30031\") " pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-46t6j" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.721996 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-t8stv" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.727979 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-66b7d6f598-6f42v"] Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.729370 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-6f42v" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.729570 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-h72sb"] Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.733844 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-66b7d6f598-6f42v"] Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.736857 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-dsf5v" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.741160 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-86d796d84d-52p6s"] Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.742213 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-52p6s" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.754772 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6fdc856c5d-tjgp6"] Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.756286 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-tjgp6" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.756735 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr5dg\" (UniqueName: \"kubernetes.io/projected/22041f9c-9d77-4e36-ad68-c08f5fb4dd1a-kube-api-access-rr5dg\") pod \"heat-operator-controller-manager-bf4c6585d-lrs4z\" (UID: \"22041f9c-9d77-4e36-ad68-c08f5fb4dd1a\") " pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-lrs4z" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.756782 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjjvj\" (UniqueName: \"kubernetes.io/projected/23363330-2571-416c-b67a-2f6c40a32f25-kube-api-access-fjjvj\") pod \"infra-operator-controller-manager-769d9c7585-fwp7j\" (UID: \"23363330-2571-416c-b67a-2f6c40a32f25\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-fwp7j" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.756837 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/23363330-2571-416c-b67a-2f6c40a32f25-cert\") pod \"infra-operator-controller-manager-769d9c7585-fwp7j\" (UID: \"23363330-2571-416c-b67a-2f6c40a32f25\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-fwp7j" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.756879 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrwpb\" (UniqueName: \"kubernetes.io/projected/5f1ff057-2960-4375-b710-e7db2790d618-kube-api-access-nrwpb\") pod \"keystone-operator-controller-manager-7879fb76fd-8dqqb\" (UID: \"5f1ff057-2960-4375-b710-e7db2790d618\") " pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-8dqqb" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.756928 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp8vc\" (UniqueName: 
\"kubernetes.io/projected/502c86ef-be84-47f4-af12-fc3cff24f444-kube-api-access-rp8vc\") pod \"horizon-operator-controller-manager-5d86b44686-4w9ff\" (UID: \"502c86ef-be84-47f4-af12-fc3cff24f444\") " pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-4w9ff" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.756948 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zng2r\" (UniqueName: \"kubernetes.io/projected/f6c70cca-725e-4556-812a-98993453e495-kube-api-access-zng2r\") pod \"ironic-operator-controller-manager-5c75d7c94b-t2qsx\" (UID: \"f6c70cca-725e-4556-812a-98993453e495\") " pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-t2qsx" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.756973 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rw6t\" (UniqueName: \"kubernetes.io/projected/def12e0d-381b-4c20-a31b-080c4a886b41-kube-api-access-4rw6t\") pod \"manila-operator-controller-manager-7bb88cb858-njlbw\" (UID: \"def12e0d-381b-4c20-a31b-080c4a886b41\") " pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-njlbw" Nov 23 06:56:05 crc kubenswrapper[4681]: E1123 06:56:05.759691 4681 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 23 06:56:05 crc kubenswrapper[4681]: E1123 06:56:05.759745 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23363330-2571-416c-b67a-2f6c40a32f25-cert podName:23363330-2571-416c-b67a-2f6c40a32f25 nodeName:}" failed. No retries permitted until 2025-11-23 06:56:06.259728601 +0000 UTC m=+703.329237839 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/23363330-2571-416c-b67a-2f6c40a32f25-cert") pod "infra-operator-controller-manager-769d9c7585-fwp7j" (UID: "23363330-2571-416c-b67a-2f6c40a32f25") : secret "infra-operator-webhook-server-cert" not found Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.761130 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-lrqst" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.768670 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-twbm6" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.769252 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-q2ln8" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.769373 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-86d796d84d-52p6s"] Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.780754 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6fdc856c5d-tjgp6"] Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.794543 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44zrbfr"] Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.799062 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44zrbfr" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.810566 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr5dg\" (UniqueName: \"kubernetes.io/projected/22041f9c-9d77-4e36-ad68-c08f5fb4dd1a-kube-api-access-rr5dg\") pod \"heat-operator-controller-manager-bf4c6585d-lrs4z\" (UID: \"22041f9c-9d77-4e36-ad68-c08f5fb4dd1a\") " pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-lrs4z" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.810621 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-98nsx" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.810781 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.813916 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjjvj\" (UniqueName: \"kubernetes.io/projected/23363330-2571-416c-b67a-2f6c40a32f25-kube-api-access-fjjvj\") pod \"infra-operator-controller-manager-769d9c7585-fwp7j\" (UID: \"23363330-2571-416c-b67a-2f6c40a32f25\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-fwp7j" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.816998 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp8vc\" (UniqueName: \"kubernetes.io/projected/502c86ef-be84-47f4-af12-fc3cff24f444-kube-api-access-rp8vc\") pod \"horizon-operator-controller-manager-5d86b44686-4w9ff\" (UID: \"502c86ef-be84-47f4-af12-fc3cff24f444\") " pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-4w9ff" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.828800 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-zmhkz"] Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.843115 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44zrbfr"] Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.843222 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-zmhkz" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.845099 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-zmhkz"] Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.858370 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zg76\" (UniqueName: \"kubernetes.io/projected/1fda6923-a93f-48b4-bc98-72a16ec81d76-kube-api-access-2zg76\") pod \"octavia-operator-controller-manager-6fdc856c5d-tjgp6\" (UID: \"1fda6923-a93f-48b4-bc98-72a16ec81d76\") " pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-tjgp6" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.858410 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvwrf\" (UniqueName: \"kubernetes.io/projected/02d56c34-5751-4289-92d8-e1884b6783a1-kube-api-access-fvwrf\") pod \"nova-operator-controller-manager-86d796d84d-52p6s\" (UID: \"02d56c34-5751-4289-92d8-e1884b6783a1\") " pod="openstack-operators/nova-operator-controller-manager-86d796d84d-52p6s" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.858481 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gsh6\" (UniqueName: \"kubernetes.io/projected/c73ba2c6-865e-4812-b05a-445b52643ca4-kube-api-access-6gsh6\") pod \"mariadb-operator-controller-manager-6f8c5b86cb-h72sb\" (UID: \"c73ba2c6-865e-4812-b05a-445b52643ca4\") " pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-h72sb" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.858522 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkbc4\" (UniqueName: \"kubernetes.io/projected/92fd8223-0f35-4473-bcfc-9ca87c9b7a23-kube-api-access-gkbc4\") pod \"neutron-operator-controller-manager-66b7d6f598-6f42v\" (UID: \"92fd8223-0f35-4473-bcfc-9ca87c9b7a23\") " pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-6f42v" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.858569 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrwpb\" (UniqueName: \"kubernetes.io/projected/5f1ff057-2960-4375-b710-e7db2790d618-kube-api-access-nrwpb\") pod \"keystone-operator-controller-manager-7879fb76fd-8dqqb\" (UID: \"5f1ff057-2960-4375-b710-e7db2790d618\") " pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-8dqqb" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.858602 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zng2r\" (UniqueName: \"kubernetes.io/projected/f6c70cca-725e-4556-812a-98993453e495-kube-api-access-zng2r\") pod \"ironic-operator-controller-manager-5c75d7c94b-t2qsx\" (UID: \"f6c70cca-725e-4556-812a-98993453e495\") " pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-t2qsx" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.858625 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rw6t\" (UniqueName: \"kubernetes.io/projected/def12e0d-381b-4c20-a31b-080c4a886b41-kube-api-access-4rw6t\") pod \"manila-operator-controller-manager-7bb88cb858-njlbw\" (UID: \"def12e0d-381b-4c20-a31b-080c4a886b41\") " 
pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-njlbw" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.863915 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-4h899" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.869586 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-799cb6ffd6-2b9r5"] Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.877140 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-2b9r5" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.899407 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-j6kdb" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.904992 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-6dc664666c-552m2"] Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.906089 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-552m2" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.910937 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-dthgz" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.919408 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rw6t\" (UniqueName: \"kubernetes.io/projected/def12e0d-381b-4c20-a31b-080c4a886b41-kube-api-access-4rw6t\") pod \"manila-operator-controller-manager-7bb88cb858-njlbw\" (UID: \"def12e0d-381b-4c20-a31b-080c4a886b41\") " pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-njlbw" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.920737 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7798859c74-fb7rv"] Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.921750 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-fb7rv" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.930047 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-rhfkq" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.932342 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrwpb\" (UniqueName: \"kubernetes.io/projected/5f1ff057-2960-4375-b710-e7db2790d618-kube-api-access-nrwpb\") pod \"keystone-operator-controller-manager-7879fb76fd-8dqqb\" (UID: \"5f1ff057-2960-4375-b710-e7db2790d618\") " pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-8dqqb" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.934523 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-799cb6ffd6-2b9r5"] Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.937233 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zng2r\" (UniqueName: \"kubernetes.io/projected/f6c70cca-725e-4556-812a-98993453e495-kube-api-access-zng2r\") pod \"ironic-operator-controller-manager-5c75d7c94b-t2qsx\" (UID: \"f6c70cca-725e-4556-812a-98993453e495\") " pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-t2qsx" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.944515 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-njlbw" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.951169 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-8464cf66df-4xdkl"] Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.952266 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8464cf66df-4xdkl" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.960001 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-r4c5k" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.960705 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zg76\" (UniqueName: \"kubernetes.io/projected/1fda6923-a93f-48b4-bc98-72a16ec81d76-kube-api-access-2zg76\") pod \"octavia-operator-controller-manager-6fdc856c5d-tjgp6\" (UID: \"1fda6923-a93f-48b4-bc98-72a16ec81d76\") " pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-tjgp6" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.960734 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvwrf\" (UniqueName: \"kubernetes.io/projected/02d56c34-5751-4289-92d8-e1884b6783a1-kube-api-access-fvwrf\") pod \"nova-operator-controller-manager-86d796d84d-52p6s\" (UID: \"02d56c34-5751-4289-92d8-e1884b6783a1\") " pod="openstack-operators/nova-operator-controller-manager-86d796d84d-52p6s" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.960765 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmx4t\" (UniqueName: \"kubernetes.io/projected/1aa2ff67-0121-4828-a7ab-96f69e7cb81c-kube-api-access-wmx4t\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd44zrbfr\" (UID: \"1aa2ff67-0121-4828-a7ab-96f69e7cb81c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44zrbfr" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.960797 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gsh6\" (UniqueName: \"kubernetes.io/projected/c73ba2c6-865e-4812-b05a-445b52643ca4-kube-api-access-6gsh6\") pod \"mariadb-operator-controller-manager-6f8c5b86cb-h72sb\" (UID: \"c73ba2c6-865e-4812-b05a-445b52643ca4\") " pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-h72sb" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.960825 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fv9b\" (UniqueName: \"kubernetes.io/projected/d5bb9b2e-1aa7-4970-847a-c36a687a9a46-kube-api-access-2fv9b\") pod \"ovn-operator-controller-manager-5bdf4f7f7f-zmhkz\" (UID: \"d5bb9b2e-1aa7-4970-847a-c36a687a9a46\") " pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-zmhkz" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.960847 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkbc4\" (UniqueName: \"kubernetes.io/projected/92fd8223-0f35-4473-bcfc-9ca87c9b7a23-kube-api-access-gkbc4\") pod \"neutron-operator-controller-manager-66b7d6f598-6f42v\" (UID: \"92fd8223-0f35-4473-bcfc-9ca87c9b7a23\") " pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-6f42v" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.960890 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1aa2ff67-0121-4828-a7ab-96f69e7cb81c-cert\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd44zrbfr\" (UID: \"1aa2ff67-0121-4828-a7ab-96f69e7cb81c\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44zrbfr" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.970671 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7798859c74-fb7rv"] Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.987186 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gsh6\" (UniqueName: \"kubernetes.io/projected/c73ba2c6-865e-4812-b05a-445b52643ca4-kube-api-access-6gsh6\") pod \"mariadb-operator-controller-manager-6f8c5b86cb-h72sb\" (UID: \"c73ba2c6-865e-4812-b05a-445b52643ca4\") " pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-h72sb" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.993710 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvwrf\" (UniqueName: \"kubernetes.io/projected/02d56c34-5751-4289-92d8-e1884b6783a1-kube-api-access-fvwrf\") pod \"nova-operator-controller-manager-86d796d84d-52p6s\" (UID: \"02d56c34-5751-4289-92d8-e1884b6783a1\") " pod="openstack-operators/nova-operator-controller-manager-86d796d84d-52p6s" Nov 23 06:56:05 crc kubenswrapper[4681]: I1123 06:56:05.994317 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zg76\" (UniqueName: \"kubernetes.io/projected/1fda6923-a93f-48b4-bc98-72a16ec81d76-kube-api-access-2zg76\") pod \"octavia-operator-controller-manager-6fdc856c5d-tjgp6\" (UID: \"1fda6923-a93f-48b4-bc98-72a16ec81d76\") " pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-tjgp6" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.003487 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-d7xkk" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.014029 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-6dc664666c-552m2"] Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.018993 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkbc4\" (UniqueName: \"kubernetes.io/projected/92fd8223-0f35-4473-bcfc-9ca87c9b7a23-kube-api-access-gkbc4\") pod \"neutron-operator-controller-manager-66b7d6f598-6f42v\" (UID: \"92fd8223-0f35-4473-bcfc-9ca87c9b7a23\") " pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-6f42v" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.028202 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8464cf66df-4xdkl"] Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.038330 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-h72sb" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.048111 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-b4wh6"] Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.053881 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-b4wh6" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.062552 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fv9b\" (UniqueName: \"kubernetes.io/projected/d5bb9b2e-1aa7-4970-847a-c36a687a9a46-kube-api-access-2fv9b\") pod \"ovn-operator-controller-manager-5bdf4f7f7f-zmhkz\" (UID: \"d5bb9b2e-1aa7-4970-847a-c36a687a9a46\") " pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-zmhkz" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.062777 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmwz2\" (UniqueName: \"kubernetes.io/projected/adbff020-4ba4-4712-855e-32addf53a9de-kube-api-access-vmwz2\") pod \"placement-operator-controller-manager-6dc664666c-552m2\" (UID: \"adbff020-4ba4-4712-855e-32addf53a9de\") " pod="openstack-operators/placement-operator-controller-manager-6dc664666c-552m2" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.062818 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlx4d\" (UniqueName: \"kubernetes.io/projected/e7284ad1-abd9-4775-8160-682b71f642fd-kube-api-access-zlx4d\") pod \"swift-operator-controller-manager-799cb6ffd6-2b9r5\" (UID: \"e7284ad1-abd9-4775-8160-682b71f642fd\") " pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-2b9r5" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.062841 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1aa2ff67-0121-4828-a7ab-96f69e7cb81c-cert\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd44zrbfr\" (UID: \"1aa2ff67-0121-4828-a7ab-96f69e7cb81c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44zrbfr" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.062887 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srxgj\" (UniqueName: \"kubernetes.io/projected/89fe9c5e-c007-47e1-aceb-b0e99e22c33b-kube-api-access-srxgj\") pod \"test-operator-controller-manager-8464cf66df-4xdkl\" (UID: \"89fe9c5e-c007-47e1-aceb-b0e99e22c33b\") " pod="openstack-operators/test-operator-controller-manager-8464cf66df-4xdkl" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.062907 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmx4t\" (UniqueName: \"kubernetes.io/projected/1aa2ff67-0121-4828-a7ab-96f69e7cb81c-kube-api-access-wmx4t\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd44zrbfr\" (UID: \"1aa2ff67-0121-4828-a7ab-96f69e7cb81c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44zrbfr" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.062930 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xvlb\" (UniqueName: \"kubernetes.io/projected/2fcf9694-3ad5-4da3-8ce1-330ef77b9b5a-kube-api-access-9xvlb\") pod \"telemetry-operator-controller-manager-7798859c74-fb7rv\" (UID: \"2fcf9694-3ad5-4da3-8ce1-330ef77b9b5a\") " pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-fb7rv" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.062751 4681 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-tkr5z" Nov 23 06:56:06 crc kubenswrapper[4681]: E1123 06:56:06.063270 4681 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 23 06:56:06 crc kubenswrapper[4681]: E1123 06:56:06.063328 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1aa2ff67-0121-4828-a7ab-96f69e7cb81c-cert podName:1aa2ff67-0121-4828-a7ab-96f69e7cb81c nodeName:}" failed. No retries permitted until 2025-11-23 06:56:06.563310625 +0000 UTC m=+703.632819852 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1aa2ff67-0121-4828-a7ab-96f69e7cb81c-cert") pod "openstack-baremetal-operator-controller-manager-79d88dcd44zrbfr" (UID: "1aa2ff67-0121-4828-a7ab-96f69e7cb81c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.072874 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-b4wh6"] Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.085350 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-lrs4z" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.092012 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fv9b\" (UniqueName: \"kubernetes.io/projected/d5bb9b2e-1aa7-4970-847a-c36a687a9a46-kube-api-access-2fv9b\") pod \"ovn-operator-controller-manager-5bdf4f7f7f-zmhkz\" (UID: \"d5bb9b2e-1aa7-4970-847a-c36a687a9a46\") " pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-zmhkz" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.098630 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmx4t\" (UniqueName: \"kubernetes.io/projected/1aa2ff67-0121-4828-a7ab-96f69e7cb81c-kube-api-access-wmx4t\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd44zrbfr\" (UID: \"1aa2ff67-0121-4828-a7ab-96f69e7cb81c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44zrbfr" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.120073 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-4w9ff" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.130810 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-6f42v" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.149907 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-52p6s" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.168997 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xvlb\" (UniqueName: \"kubernetes.io/projected/2fcf9694-3ad5-4da3-8ce1-330ef77b9b5a-kube-api-access-9xvlb\") pod \"telemetry-operator-controller-manager-7798859c74-fb7rv\" (UID: \"2fcf9694-3ad5-4da3-8ce1-330ef77b9b5a\") " pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-fb7rv" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.169084 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmwz2\" (UniqueName: \"kubernetes.io/projected/adbff020-4ba4-4712-855e-32addf53a9de-kube-api-access-vmwz2\") pod \"placement-operator-controller-manager-6dc664666c-552m2\" (UID: \"adbff020-4ba4-4712-855e-32addf53a9de\") " pod="openstack-operators/placement-operator-controller-manager-6dc664666c-552m2" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.169153 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7nbb\" (UniqueName: \"kubernetes.io/projected/4c5081a4-24b2-4510-af78-f4db91213b65-kube-api-access-m7nbb\") pod \"watcher-operator-controller-manager-7cd4fb6f79-b4wh6\" (UID: \"4c5081a4-24b2-4510-af78-f4db91213b65\") " pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-b4wh6" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.169195 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlx4d\" (UniqueName: \"kubernetes.io/projected/e7284ad1-abd9-4775-8160-682b71f642fd-kube-api-access-zlx4d\") pod \"swift-operator-controller-manager-799cb6ffd6-2b9r5\" (UID: \"e7284ad1-abd9-4775-8160-682b71f642fd\") " pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-2b9r5" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.169342 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srxgj\" (UniqueName: \"kubernetes.io/projected/89fe9c5e-c007-47e1-aceb-b0e99e22c33b-kube-api-access-srxgj\") pod \"test-operator-controller-manager-8464cf66df-4xdkl\" (UID: \"89fe9c5e-c007-47e1-aceb-b0e99e22c33b\") " pod="openstack-operators/test-operator-controller-manager-8464cf66df-4xdkl" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.170438 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-tjgp6" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.191043 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-t2qsx" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.226678 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-zmhkz" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.231864 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-8dqqb" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.253865 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-7rq82"] Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.261101 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-7rq82"] Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.262376 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srxgj\" (UniqueName: \"kubernetes.io/projected/89fe9c5e-c007-47e1-aceb-b0e99e22c33b-kube-api-access-srxgj\") pod \"test-operator-controller-manager-8464cf66df-4xdkl\" (UID: \"89fe9c5e-c007-47e1-aceb-b0e99e22c33b\") " pod="openstack-operators/test-operator-controller-manager-8464cf66df-4xdkl" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.262532 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-7rq82" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.264106 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlx4d\" (UniqueName: \"kubernetes.io/projected/e7284ad1-abd9-4775-8160-682b71f642fd-kube-api-access-zlx4d\") pod \"swift-operator-controller-manager-799cb6ffd6-2b9r5\" (UID: \"e7284ad1-abd9-4775-8160-682b71f642fd\") " pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-2b9r5" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.267069 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-5fkdf" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.267249 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.272702 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/23363330-2571-416c-b67a-2f6c40a32f25-cert\") pod \"infra-operator-controller-manager-769d9c7585-fwp7j\" (UID: \"23363330-2571-416c-b67a-2f6c40a32f25\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-fwp7j" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.272723 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-fgshf"] Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.272732 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7nbb\" (UniqueName: \"kubernetes.io/projected/4c5081a4-24b2-4510-af78-f4db91213b65-kube-api-access-m7nbb\") pod \"watcher-operator-controller-manager-7cd4fb6f79-b4wh6\" (UID: \"4c5081a4-24b2-4510-af78-f4db91213b65\") " pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-b4wh6" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.273523 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xvlb\" (UniqueName: \"kubernetes.io/projected/2fcf9694-3ad5-4da3-8ce1-330ef77b9b5a-kube-api-access-9xvlb\") pod \"telemetry-operator-controller-manager-7798859c74-fb7rv\" (UID: \"2fcf9694-3ad5-4da3-8ce1-330ef77b9b5a\") " pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-fb7rv" Nov 23 06:56:06 crc 
kubenswrapper[4681]: I1123 06:56:06.273760 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-fgshf" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.274787 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8464cf66df-4xdkl" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.275624 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/23363330-2571-416c-b67a-2f6c40a32f25-cert\") pod \"infra-operator-controller-manager-769d9c7585-fwp7j\" (UID: \"23363330-2571-416c-b67a-2f6c40a32f25\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-fwp7j" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.276658 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmwz2\" (UniqueName: \"kubernetes.io/projected/adbff020-4ba4-4712-855e-32addf53a9de-kube-api-access-vmwz2\") pod \"placement-operator-controller-manager-6dc664666c-552m2\" (UID: \"adbff020-4ba4-4712-855e-32addf53a9de\") " pod="openstack-operators/placement-operator-controller-manager-6dc664666c-552m2" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.279575 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-xk9lx" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.288359 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-fgshf"] Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.303578 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7nbb\" (UniqueName: \"kubernetes.io/projected/4c5081a4-24b2-4510-af78-f4db91213b65-kube-api-access-m7nbb\") pod \"watcher-operator-controller-manager-7cd4fb6f79-b4wh6\" (UID: \"4c5081a4-24b2-4510-af78-f4db91213b65\") " pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-b4wh6" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.373903 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhg6m\" (UniqueName: \"kubernetes.io/projected/6d124d17-822f-4a02-830d-1274146f2ae0-kube-api-access-xhg6m\") pod \"openstack-operator-controller-manager-6cb9dc54f8-7rq82\" (UID: \"6d124d17-822f-4a02-830d-1274146f2ae0\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-7rq82" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.373982 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6d124d17-822f-4a02-830d-1274146f2ae0-cert\") pod \"openstack-operator-controller-manager-6cb9dc54f8-7rq82\" (UID: \"6d124d17-822f-4a02-830d-1274146f2ae0\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-7rq82" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.374110 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86bj7\" (UniqueName: \"kubernetes.io/projected/19931d5d-8219-4f8e-91a2-9b5815bef583-kube-api-access-86bj7\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-fgshf\" (UID: \"19931d5d-8219-4f8e-91a2-9b5815bef583\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-fgshf" Nov 
23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.453017 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-fwp7j" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.476590 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86bj7\" (UniqueName: \"kubernetes.io/projected/19931d5d-8219-4f8e-91a2-9b5815bef583-kube-api-access-86bj7\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-fgshf\" (UID: \"19931d5d-8219-4f8e-91a2-9b5815bef583\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-fgshf" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.476665 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhg6m\" (UniqueName: \"kubernetes.io/projected/6d124d17-822f-4a02-830d-1274146f2ae0-kube-api-access-xhg6m\") pod \"openstack-operator-controller-manager-6cb9dc54f8-7rq82\" (UID: \"6d124d17-822f-4a02-830d-1274146f2ae0\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-7rq82" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.476697 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6d124d17-822f-4a02-830d-1274146f2ae0-cert\") pod \"openstack-operator-controller-manager-6cb9dc54f8-7rq82\" (UID: \"6d124d17-822f-4a02-830d-1274146f2ae0\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-7rq82" Nov 23 06:56:06 crc kubenswrapper[4681]: E1123 06:56:06.476879 4681 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 23 06:56:06 crc kubenswrapper[4681]: E1123 06:56:06.476939 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d124d17-822f-4a02-830d-1274146f2ae0-cert podName:6d124d17-822f-4a02-830d-1274146f2ae0 nodeName:}" failed. No retries permitted until 2025-11-23 06:56:06.976923934 +0000 UTC m=+704.046433162 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6d124d17-822f-4a02-830d-1274146f2ae0-cert") pod "openstack-operator-controller-manager-6cb9dc54f8-7rq82" (UID: "6d124d17-822f-4a02-830d-1274146f2ae0") : secret "webhook-server-cert" not found Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.482140 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-b4wh6" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.492949 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7768f8c84f-w5ggm"] Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.528021 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86bj7\" (UniqueName: \"kubernetes.io/projected/19931d5d-8219-4f8e-91a2-9b5815bef583-kube-api-access-86bj7\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-fgshf\" (UID: \"19931d5d-8219-4f8e-91a2-9b5815bef583\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-fgshf" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.527725 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-2b9r5" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.532611 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhg6m\" (UniqueName: \"kubernetes.io/projected/6d124d17-822f-4a02-830d-1274146f2ae0-kube-api-access-xhg6m\") pod \"openstack-operator-controller-manager-6cb9dc54f8-7rq82\" (UID: \"6d124d17-822f-4a02-830d-1274146f2ae0\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-7rq82" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.546870 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-552m2" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.552904 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8667fbf6f6-lrqst"] Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.553630 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-fb7rv" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.578519 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1aa2ff67-0121-4828-a7ab-96f69e7cb81c-cert\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd44zrbfr\" (UID: \"1aa2ff67-0121-4828-a7ab-96f69e7cb81c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44zrbfr" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.583505 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1aa2ff67-0121-4828-a7ab-96f69e7cb81c-cert\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd44zrbfr\" (UID: \"1aa2ff67-0121-4828-a7ab-96f69e7cb81c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44zrbfr" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.629701 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-fgshf" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.800845 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44zrbfr" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.819312 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7bb88cb858-njlbw"] Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.834825 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-h72sb"] Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.921102 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-bf4c6585d-lrs4z"] Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.951848 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-9mrs4" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.965606 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-46t6j" Nov 23 06:56:06 crc kubenswrapper[4681]: I1123 06:56:06.991382 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6d124d17-822f-4a02-830d-1274146f2ae0-cert\") pod \"openstack-operator-controller-manager-6cb9dc54f8-7rq82\" (UID: \"6d124d17-822f-4a02-830d-1274146f2ae0\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-7rq82" Nov 23 06:56:06 crc kubenswrapper[4681]: E1123 06:56:06.991649 4681 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 23 06:56:06 crc kubenswrapper[4681]: E1123 06:56:06.991703 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d124d17-822f-4a02-830d-1274146f2ae0-cert podName:6d124d17-822f-4a02-830d-1274146f2ae0 nodeName:}" failed. No retries permitted until 2025-11-23 06:56:07.99168766 +0000 UTC m=+705.061196897 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6d124d17-822f-4a02-830d-1274146f2ae0-cert") pod "openstack-operator-controller-manager-6cb9dc54f8-7rq82" (UID: "6d124d17-822f-4a02-830d-1274146f2ae0") : secret "webhook-server-cert" not found Nov 23 06:56:07 crc kubenswrapper[4681]: W1123 06:56:07.006590 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddef12e0d_381b_4c20_a31b_080c4a886b41.slice/crio-98e9aaade6adb099f8f0dc9f0744664788011503c0935fce2d3277b65bbc084c WatchSource:0}: Error finding container 98e9aaade6adb099f8f0dc9f0744664788011503c0935fce2d3277b65bbc084c: Status 404 returned error can't find the container with id 98e9aaade6adb099f8f0dc9f0744664788011503c0935fce2d3277b65bbc084c Nov 23 06:56:07 crc kubenswrapper[4681]: I1123 06:56:07.026339 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-d7xkk"] Nov 23 06:56:07 crc kubenswrapper[4681]: I1123 06:56:07.031501 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6fdc856c5d-tjgp6"] Nov 23 06:56:07 crc kubenswrapper[4681]: I1123 06:56:07.049019 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-66b7d6f598-6f42v"] Nov 23 06:56:07 crc kubenswrapper[4681]: W1123 06:56:07.071575 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fda6923_a93f_48b4_bc98_72a16ec81d76.slice/crio-396366d1e2a106895bf6169c18b6e8f142d32fcfe6c928a47345b93cafeab55f WatchSource:0}: Error finding container 396366d1e2a106895bf6169c18b6e8f142d32fcfe6c928a47345b93cafeab55f: Status 404 returned error can't find the container with id 396366d1e2a106895bf6169c18b6e8f142d32fcfe6c928a47345b93cafeab55f Nov 23 06:56:07 crc kubenswrapper[4681]: I1123 06:56:07.101242 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-w5ggm" event={"ID":"34ce0225-164a-45e4-b5c5-a8c5e9aa5c1a","Type":"ContainerStarted","Data":"edd5bebd174137b9d73e1752171627dc06af1d423a362e0efb9a17af67cc9148"} Nov 23 06:56:07 crc kubenswrapper[4681]: I1123 06:56:07.118869 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-lrqst" event={"ID":"1ca66090-ac50-407c-baaa-f7cb3caa82f1","Type":"ContainerStarted","Data":"bfd0caa214a771bb2983355e76c6a7785bb743664d278c1c57d39c06a58ed84d"} Nov 23 06:56:07 crc kubenswrapper[4681]: I1123 06:56:07.118938 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-njlbw" event={"ID":"def12e0d-381b-4c20-a31b-080c4a886b41","Type":"ContainerStarted","Data":"98e9aaade6adb099f8f0dc9f0744664788011503c0935fce2d3277b65bbc084c"} Nov 23 06:56:07 crc kubenswrapper[4681]: I1123 06:56:07.142184 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8464cf66df-4xdkl"] Nov 23 06:56:07 crc kubenswrapper[4681]: I1123 06:56:07.148782 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-h72sb" event={"ID":"c73ba2c6-865e-4812-b05a-445b52643ca4","Type":"ContainerStarted","Data":"ec4f683b8f03f83dc7dde6e2d094952715121d1ce4f7aa540c6fe0a8b87024c1"} Nov 23 06:56:07 crc kubenswrapper[4681]: I1123 06:56:07.156763 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-lrs4z" event={"ID":"22041f9c-9d77-4e36-ad68-c08f5fb4dd1a","Type":"ContainerStarted","Data":"5b9fc9acf3092ab402f7ba3d95138d31b8eb8fc0d5b746328f5ab48cb78c7a5d"} Nov 23 06:56:07 crc kubenswrapper[4681]: I1123 06:56:07.161018 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5c75d7c94b-t2qsx"] Nov 23 06:56:07 crc kubenswrapper[4681]: I1123 06:56:07.189534 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7879fb76fd-8dqqb"] Nov 23 06:56:07 crc kubenswrapper[4681]: I1123 06:56:07.438933 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d86b44686-4w9ff"] Nov 23 06:56:07 crc kubenswrapper[4681]: I1123 06:56:07.460574 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-86d796d84d-52p6s"] Nov 23 06:56:07 crc kubenswrapper[4681]: W1123 06:56:07.471115 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02d56c34_5751_4289_92d8_e1884b6783a1.slice/crio-7fe0e24f7bf5e70aa202dae3102da3ecdbb9b2b23de108ef4140c20fb87db1e4 WatchSource:0}: Error finding container 7fe0e24f7bf5e70aa202dae3102da3ecdbb9b2b23de108ef4140c20fb87db1e4: Status 404 returned error can't find the container with id 7fe0e24f7bf5e70aa202dae3102da3ecdbb9b2b23de108ef4140c20fb87db1e4 Nov 23 06:56:07 crc kubenswrapper[4681]: I1123 06:56:07.648519 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-6dc664666c-552m2"] Nov 23 06:56:07 crc kubenswrapper[4681]: I1123 06:56:07.657367 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7798859c74-fb7rv"] Nov 23 06:56:07 crc kubenswrapper[4681]: I1123 06:56:07.664671 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-zmhkz"] Nov 23 06:56:07 crc kubenswrapper[4681]: I1123 06:56:07.668941 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/swift-operator-controller-manager-799cb6ffd6-2b9r5"] Nov 23 06:56:07 crc kubenswrapper[4681]: W1123 06:56:07.697429 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadbff020_4ba4_4712_855e_32addf53a9de.slice/crio-8199e219fd7751e1cb6435aa2444d123df7af8d90dcf1a688067e1cd379025b9 WatchSource:0}: Error finding container 8199e219fd7751e1cb6435aa2444d123df7af8d90dcf1a688067e1cd379025b9: Status 404 returned error can't find the container with id 8199e219fd7751e1cb6435aa2444d123df7af8d90dcf1a688067e1cd379025b9 Nov 23 06:56:07 crc kubenswrapper[4681]: E1123 06:56:07.697664 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zlx4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-799cb6ffd6-2b9r5_openstack-operators(e7284ad1-abd9-4775-8160-682b71f642fd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 23 06:56:07 crc kubenswrapper[4681]: E1123 06:56:07.702397 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vmwz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-6dc664666c-552m2_openstack-operators(adbff020-4ba4-4712-855e-32addf53a9de): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 23 06:56:07 crc kubenswrapper[4681]: I1123 06:56:07.800588 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-b4wh6"] Nov 23 06:56:07 crc kubenswrapper[4681]: I1123 06:56:07.819101 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-769d9c7585-fwp7j"] Nov 23 06:56:07 crc kubenswrapper[4681]: E1123 06:56:07.826207 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m7nbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-7cd4fb6f79-b4wh6_openstack-operators(4c5081a4-24b2-4510-af78-f4db91213b65): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 23 06:56:07 crc kubenswrapper[4681]: I1123 06:56:07.829823 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-fgshf"] Nov 23 06:56:07 crc kubenswrapper[4681]: E1123 06:56:07.839568 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-86bj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-5f97d8c699-fgshf_openstack-operators(19931d5d-8219-4f8e-91a2-9b5815bef583): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 23 06:56:07 crc kubenswrapper[4681]: E1123 06:56:07.841271 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-fgshf" podUID="19931d5d-8219-4f8e-91a2-9b5815bef583" Nov 23 06:56:07 crc kubenswrapper[4681]: I1123 06:56:07.850731 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44zrbfr"] Nov 23 06:56:07 crc kubenswrapper[4681]: E1123 06:56:07.859807 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent@sha256:7dbadf7b98f2f305f9f1382f55a084c8ca404f4263f76b28e56bd0dc437e2192,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner@sha256:0473ff9eec0da231e2d0a10bf1abbe1dfa1a0f95b8f619e3a07605386951449a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api@sha256:c8101c77a82eae4407e41e1fd766dfc6e1b7f9ed1679e3efb6f91ff97a1557b2,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator@sha256:eb9743b21bbadca6f7cb9ac4fc46b5d58c51c674073c7e1121f4474a71304071,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener@sha256:3d81f839b98c2e2a5bf0da79f2f9a92dff7d0a3c5a830b0e95c89dad8cf98a6a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier@sha256:d19ac99249b47dd8ea16cd6aaa5756346aa8a2f119ee50819c15c5366efb417d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24@sha256:8536169e5537fe6c330eba814248abdcf39cdd8f7e7336034d74e6fda9544050,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:4c93a5cccb9971e24f05daf93b3aa11ba71752bc3469a1a1a2c4906f92f69645,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener@sha256:4f1fa337760e82bfd67cdd142a97c121146dd7e621daac161940dd5e4ddb80dc,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker@sha256:3613b345d5baed98effd906f8b0242d863e14c97078ea473ef01fe1b0afc46f3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:d375d370be5ead0dac71109af644849e5795f535f9ad8eeacea261d77ae6f140,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute@sha256:9f9f367ed4c85efb16c3a74a4bb707ff0db271d7bc5abc70a71e984b55f43003,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi@sha256:b73ad22b4955b06d584bce81742556d8c0c7828c495494f8ea7c99391c61b70f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter@sha256:7211a617ec657701ca819aa0ba28e1d5750f5bf2c1391b755cc4a48cc360b0fa,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification@sha256:aa1d3aaf6b394621ed4089a98e0a82b763f467e8b5c5db772f9fdf99fc86e333,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core@sha2
56:09b5017c95d7697e66b9c64846bc48ef5826a009cba89b956ec54561e5f4a2d1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:37d64e0a00c54e71a4c1fcbbbf7e832f6886ffd03c9a02b6ee3ca48fabc30879,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:d6661053141b6df421288a7c9968a155ab82e478c1d75ab41f2cebe2f0ca02d2,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:ce2d63258cb4e7d0d1c07234de6889c5434464190906798019311a1c7cf6387f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:0485ef9e5b4437f7cd2ba54034a87722ce4669ee86b3773c6b0c037ed8000e91,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api@sha256:962c004551d0503779364b767b9bf0cecdf78dbba8809b2ca8b073f58e1f4e5d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor@sha256:0ebf4c465fb6cc7dad9e6cb2da0ff54874c9acbcb40d62234a629ec2c12cdd62,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api@sha256:ff0c553ceeb2e0f44b010e37dc6d0db8a251797b88e56468b7cf7f05253e4232,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9@sha256:624f553f073af7493d34828b074adc9981cce403edd8e71482c7307008479fd9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central@sha256:e3874936a518c8560339db8f840fc5461885819f6050b5de8d3ab9199bea5094,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns@sha256:1cea25f1d2a45affc80c46fb9d427749d3f06b61590ac6070a2910e3ec8a4e5d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer@sha256:e36d5b9a65194f12f7b01c6422ba3ed52a687fd1695fbb21f4986c67d9f9317f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound@sha256:8b21bec527d54cd766e277889df6bcccd2baeaa946274606b986c0c3b7ca689f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker@sha256:45aceca77f8fcf61127f0da650bdfdf11ede9b0944c78b63fab819d03283f96b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr@sha256:709ac58998927dd61786821ae1e63343fd97ccf5763aac5edb4583eea9401d22,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid@sha256:867d4ef7c21f75e6030a685b5762ab4d84b671316ed6b98d75200076e93342cd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler@sha256:581b65b646301e0fcb07582150ba63438f1353a85bf9acf1eb2acb4ce71c58bd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/
podified-antelope-centos9/openstack-cron@sha256:2b90da93550b99d2fcfa95bd819f3363aa68346a416f8dc7baac3e9c5f487761,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent@sha256:8cde52cef8795d1c91983b100d86541c7718160ec260fe0f97b96add4c2c8ee8,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent@sha256:835ebed082fe1c45bd799d1d5357595ce63efeb05ca876f26b08443facb9c164,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent@sha256:011d682241db724bc40736c9b54d2ea450ea7e6be095b1ff5fa28c8007466775,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent@sha256:2025da90cff8f563deb08bee71efe16d4078edc2a767b2e225cca5c77f1aa2f9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api@sha256:26bd7b0bd6070856aefef6fe754c547d55c056396ea30d879d34c2d49b5a1d29,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api@sha256:ff46cd5e0e13d105c4629e78c2734a50835f06b6a1e31da9e0462981d10c4be3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn@sha256:5b4fd0c2b76fa5539f74687b11c5882d77bd31352452322b37ff51fa18f12a61,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine@sha256:5e03376bd895346dc8f627ca15ded942526ed8b5e92872f453ce272e694d18d4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon@sha256:65b94ff9fcd486845fb0544583bf2a973246a61a0ad32340fb92d632285f1057,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached@sha256:36a0fb31978aee0ded2483de311631e64a644d0b0685b5b055f65ede7eb8e8a2,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis@sha256:5f6045841aff0fde6f684a34cdf49f8dc7b2c3bcbdeab201f1058971e0c5f79e,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:448f4e1b740c30936e340bd6e8534d78c83357bf373a4223950aa64d3484f007,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay
.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:b68e3615af8a0eb0ef6bf9ceeef59540a6f4a9a85f6078a3620be115c73a7db8,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:7eae01cf60383e523c9cd94d158a9162120a7370829a1dad20fdea6b0fd660bd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent@sha256:28cc10501788081eb61b5a1af35546191a92741f4f109df54c74e2b19439d0f9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:9a616e37acfd120612f78043237a8541266ba34883833c9beb43f3da313661ad,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent@sha256:6b1be6cd94a0942259bca5d5d2c30cc7de4a33276b61f8ae3940226772106256,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone@sha256:02d2c22d15401574941fbe057095442dee0d6f7a0a9341de35d25e6a12a3fe4b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api@sha256:fc3b3a36b74fd653946723c54b208072d52200635850b531e9d595a7aaea5a01,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler@sha256:7850ccbff320bf9a1c9c769c1c70777eb97117dd8cd5ae4435be9b4622cf807a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share@sha256:397dac7e39cf40d14a986e6ec4a60fb698ca35c197d0db315b1318514cc6d1d4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:10452e2144368e2f128c8fb8ef9e54880b06ef1d71d9f084a0217dcb099c51ce,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils@sha256:1c95142a36276686e720f86423ee171dc9adcc1e89879f627545b7c906ccd9bd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api@sha256:e331a8fde6638e5ba154c4f0b38772a9a424f60656f2777245975fb1fa02f07d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:cd3cf7a34053e850b4d4f9f4ea4c74953a54a42fd18e47d7c01d44a88923e925,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:aee28476344fc0cc148fbe97daf9b1bfcedc22001550bba4bdc4e84be7b6989d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelop
e-centos9/openstack-nova-scheduler@sha256:cfa0b92c976603ee2a937d34013a238fcd8aa75f998e50642e33489f14124633,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api@sha256:73c2f2d6eecf88acf4e45b133c8373d9bb006b530e0aff0b28f3b7420620a874,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager@sha256:927b405cc04abe5ff716186e8d35e2dc5fad1c8430194659ee6617d74e4e055d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping@sha256:6154d7cebd7c339afa5b86330262156171743aa5b79c2b78f9a2f378005ed8fb,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog@sha256:e2db2f4af8d3d0be7868c6efef0189f3a2c74a8f96ae10e3f991cdf83feaef29,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker@sha256:c773629df257726a6d3cacc24a6e4df0babcd7d37df04e6d14676a8da028b9c9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:776211111e2e6493706dbc49a3ba44f31d1b947919313ed3a0f35810e304ec52,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather@sha256:0a98e8f5c83522ca6c8e40c5e9561f6628d2d5e69f0e8a64279c541c989d3d8b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi@sha256:7cccf24ad0a152f90ca39893064f48a1656950ee8142685a5d482c71f0bdc9f5,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:05450b48f6b5352b2686a26e933e8727748edae2ae9652d9164b7d7a1817c55a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server@sha256:fc9c99eeef91523482bd8f92661b393287e1f2a24ad2ba9e33191f8de9af74cf,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd@sha256:3e4ecc02b4b5e0860482a93599ba9ca598c5ce26c093c46e701f96fe51acb208,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server@sha256:2346037e064861c7892690d2e8b3e1eea1a26ce3c3a11fda0b41301965bc828c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api@sha256:7dd2e0dbb6bb5a6cecd1763e43479ca8cb6a0c502534e83c8795c0da2b50e099,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:95d67f51dfedd5bd3ec785b488425295b2d8c41feae3e6386ef471615381809b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IM
AGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account@sha256:c26c3ff9cabe3593ceb10006e782bf9391ac14785768ce9eec4f938c2d3cf228,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container@sha256:273fe8c27d08d0f62773a02f8cef6a761a7768116ee1a4be611f93bbf63f2b75,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object@sha256:daa45220bb1c47922d0917aa8fe423bb82b03a01429f1c9e37635e701e352d71,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:a80a074e227d3238bb6f285788a9e886ae7a5909ccbc5c19c93c369bdfe5b3b8,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all@sha256:58ac66ca1be01fe0157977bd79a26cde4d0de153edfaf4162367c924826b2ef4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api@sha256:99a63770d80cc7c3afa1118b400972fb0e6bff5284a2eae781b12582ad79c29c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier@sha256:9ee4d84529394afcd860f1a1186484560f02f08c15c37cac42a22473b7116d5f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine@sha256:ea15fadda7b0439ec637edfaf6ea5dbf3e35fb3be012c7c5a31e722c90becb11,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wmx4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
openstack-baremetal-operator-controller-manager-79d88dcd44zrbfr_openstack-operators(1aa2ff67-0121-4828-a7ab-96f69e7cb81c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 23 06:56:07 crc kubenswrapper[4681]: I1123 06:56:07.903680 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-56dfb6b67f-46t6j"] Nov 23 06:56:07 crc kubenswrapper[4681]: W1123 06:56:07.924111 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf32e286_ab9d_4a19_98ae_3ad944d30031.slice/crio-894d7dc023307a89faea26609bcd45d7c39335d5a97b6051c5790d4218ef243a WatchSource:0}: Error finding container 894d7dc023307a89faea26609bcd45d7c39335d5a97b6051c5790d4218ef243a: Status 404 returned error can't find the container with id 894d7dc023307a89faea26609bcd45d7c39335d5a97b6051c5790d4218ef243a Nov 23 06:56:07 crc kubenswrapper[4681]: E1123 06:56:07.932098 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:c6405d94e56b40ef669729216ab4b9c441f34bb280902efa2940038c076b560f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-shj9l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-56dfb6b67f-46t6j_openstack-operators(af32e286-ab9d-4a19-98ae-3ad944d30031): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 23 06:56:07 crc kubenswrapper[4681]: E1123 06:56:07.972774 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-2b9r5" podUID="e7284ad1-abd9-4775-8160-682b71f642fd" Nov 23 06:56:07 crc kubenswrapper[4681]: E1123 06:56:07.976879 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-552m2" podUID="adbff020-4ba4-4712-855e-32addf53a9de" Nov 23 06:56:07 crc kubenswrapper[4681]: E1123 06:56:07.982350 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-b4wh6" podUID="4c5081a4-24b2-4510-af78-f4db91213b65" Nov 23 06:56:08 crc kubenswrapper[4681]: I1123 06:56:08.023571 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6d124d17-822f-4a02-830d-1274146f2ae0-cert\") pod \"openstack-operator-controller-manager-6cb9dc54f8-7rq82\" (UID: \"6d124d17-822f-4a02-830d-1274146f2ae0\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-7rq82" Nov 23 06:56:08 crc kubenswrapper[4681]: I1123 06:56:08.032191 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6d124d17-822f-4a02-830d-1274146f2ae0-cert\") pod \"openstack-operator-controller-manager-6cb9dc54f8-7rq82\" (UID: \"6d124d17-822f-4a02-830d-1274146f2ae0\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-7rq82" Nov 23 06:56:08 crc kubenswrapper[4681]: E1123 06:56:08.062605 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44zrbfr" podUID="1aa2ff67-0121-4828-a7ab-96f69e7cb81c" Nov 23 06:56:08 crc kubenswrapper[4681]: E1123 06:56:08.076577 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-46t6j" podUID="af32e286-ab9d-4a19-98ae-3ad944d30031" Nov 23 06:56:08 crc kubenswrapper[4681]: I1123 06:56:08.102891 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-7rq82" Nov 23 06:56:08 crc kubenswrapper[4681]: I1123 06:56:08.182914 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-52p6s" event={"ID":"02d56c34-5751-4289-92d8-e1884b6783a1","Type":"ContainerStarted","Data":"7fe0e24f7bf5e70aa202dae3102da3ecdbb9b2b23de108ef4140c20fb87db1e4"} Nov 23 06:56:08 crc kubenswrapper[4681]: I1123 06:56:08.184603 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-fwp7j" event={"ID":"23363330-2571-416c-b67a-2f6c40a32f25","Type":"ContainerStarted","Data":"941258bf499e6005b2f9eb8fff6b599e8185d419328fb61c0659df250e223ee1"} Nov 23 06:56:08 crc kubenswrapper[4681]: I1123 06:56:08.187615 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-zmhkz" event={"ID":"d5bb9b2e-1aa7-4970-847a-c36a687a9a46","Type":"ContainerStarted","Data":"5cac6bd7d5613ede289ee203741880ce3b8b01e437dea5205e2a293e0042b2fe"} Nov 23 06:56:08 crc kubenswrapper[4681]: I1123 06:56:08.189790 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-6f42v" event={"ID":"92fd8223-0f35-4473-bcfc-9ca87c9b7a23","Type":"ContainerStarted","Data":"ab2a32d5b780b66092efacccbb870a2ca34125777797195384bf5b790552b94c"} Nov 23 06:56:08 crc kubenswrapper[4681]: I1123 06:56:08.191938 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-d7xkk" event={"ID":"a005c11d-acdf-48a8-8221-8fa148272da7","Type":"ContainerStarted","Data":"fb8b7ed00656e15e7e2ccf57f34f45b5e2781d8a01e9a6d7567a98b848a92fac"} Nov 23 06:56:08 crc kubenswrapper[4681]: I1123 06:56:08.193485 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-fb7rv" event={"ID":"2fcf9694-3ad5-4da3-8ce1-330ef77b9b5a","Type":"ContainerStarted","Data":"3e3c98bbf19753550d3e34629d075ef71497cd5c81c197ce0ce6be255993d658"} Nov 23 06:56:08 crc kubenswrapper[4681]: I1123 06:56:08.199399 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-46t6j" event={"ID":"af32e286-ab9d-4a19-98ae-3ad944d30031","Type":"ContainerStarted","Data":"a815762ed3cbe8344bafaff405779ec5b95306b7e223980cd59c5285dfcb1256"} Nov 23 06:56:08 crc kubenswrapper[4681]: I1123 06:56:08.199434 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-46t6j" event={"ID":"af32e286-ab9d-4a19-98ae-3ad944d30031","Type":"ContainerStarted","Data":"894d7dc023307a89faea26609bcd45d7c39335d5a97b6051c5790d4218ef243a"} Nov 23 06:56:08 crc kubenswrapper[4681]: E1123 06:56:08.209264 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:c6405d94e56b40ef669729216ab4b9c441f34bb280902efa2940038c076b560f\\\"\"" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-46t6j" podUID="af32e286-ab9d-4a19-98ae-3ad944d30031" Nov 23 06:56:08 crc kubenswrapper[4681]: I1123 06:56:08.227667 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/test-operator-controller-manager-8464cf66df-4xdkl" event={"ID":"89fe9c5e-c007-47e1-aceb-b0e99e22c33b","Type":"ContainerStarted","Data":"c69b7240e6629998d65c6bb8ad9519e731c8abd63f022c021686d77b6362530f"} Nov 23 06:56:08 crc kubenswrapper[4681]: I1123 06:56:08.237307 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-t2qsx" event={"ID":"f6c70cca-725e-4556-812a-98993453e495","Type":"ContainerStarted","Data":"a29c62eb8fb62093e9b14f2f00ea6e36ef6ef20f73f5317dbdd925b79b32d649"} Nov 23 06:56:08 crc kubenswrapper[4681]: I1123 06:56:08.240252 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-fgshf" event={"ID":"19931d5d-8219-4f8e-91a2-9b5815bef583","Type":"ContainerStarted","Data":"06eea5c40ba13d0e9bec75c1681a5db9528dcb25333b2f2aa4d52ea88f79dc1b"} Nov 23 06:56:08 crc kubenswrapper[4681]: E1123 06:56:08.244233 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-fgshf" podUID="19931d5d-8219-4f8e-91a2-9b5815bef583" Nov 23 06:56:08 crc kubenswrapper[4681]: I1123 06:56:08.247649 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-tjgp6" event={"ID":"1fda6923-a93f-48b4-bc98-72a16ec81d76","Type":"ContainerStarted","Data":"396366d1e2a106895bf6169c18b6e8f142d32fcfe6c928a47345b93cafeab55f"} Nov 23 06:56:08 crc kubenswrapper[4681]: I1123 06:56:08.249508 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-552m2" event={"ID":"adbff020-4ba4-4712-855e-32addf53a9de","Type":"ContainerStarted","Data":"7b93eef21f654b6fc0d4b57f42fa7be49ced1a429a67b2fef8f3425b427eca89"} Nov 23 06:56:08 crc kubenswrapper[4681]: I1123 06:56:08.249537 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-552m2" event={"ID":"adbff020-4ba4-4712-855e-32addf53a9de","Type":"ContainerStarted","Data":"8199e219fd7751e1cb6435aa2444d123df7af8d90dcf1a688067e1cd379025b9"} Nov 23 06:56:08 crc kubenswrapper[4681]: E1123 06:56:08.250875 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c\\\"\"" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-552m2" podUID="adbff020-4ba4-4712-855e-32addf53a9de" Nov 23 06:56:08 crc kubenswrapper[4681]: I1123 06:56:08.252318 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-4w9ff" event={"ID":"502c86ef-be84-47f4-af12-fc3cff24f444","Type":"ContainerStarted","Data":"f40f36b546e868fb82a0d5f565a225e038ab9528e703ba8f4fcb9286f68d1598"} Nov 23 06:56:08 crc kubenswrapper[4681]: I1123 06:56:08.255645 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44zrbfr" 
event={"ID":"1aa2ff67-0121-4828-a7ab-96f69e7cb81c","Type":"ContainerStarted","Data":"4c0ee83502f6390879e44ecc4a456f51eb32eb6f26b553e46a4be341e5437d1e"} Nov 23 06:56:08 crc kubenswrapper[4681]: I1123 06:56:08.255685 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44zrbfr" event={"ID":"1aa2ff67-0121-4828-a7ab-96f69e7cb81c","Type":"ContainerStarted","Data":"8a9a8381e2cc2f54cd4a13f39568d7bcaf076e6e33b68c54b111918ca6745728"} Nov 23 06:56:08 crc kubenswrapper[4681]: E1123 06:56:08.264160 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44zrbfr" podUID="1aa2ff67-0121-4828-a7ab-96f69e7cb81c" Nov 23 06:56:08 crc kubenswrapper[4681]: I1123 06:56:08.268310 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-2b9r5" event={"ID":"e7284ad1-abd9-4775-8160-682b71f642fd","Type":"ContainerStarted","Data":"5d1e379d94373df903cd074b84620c689620024c9a9c4c42cf5a58acd9849925"} Nov 23 06:56:08 crc kubenswrapper[4681]: I1123 06:56:08.268349 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-2b9r5" event={"ID":"e7284ad1-abd9-4775-8160-682b71f642fd","Type":"ContainerStarted","Data":"f205bf8ffdca938ec931d66e4f0ec2fb3c43979f5a08b9a6168ab7a483a57352"} Nov 23 06:56:08 crc kubenswrapper[4681]: E1123 06:56:08.269545 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0\\\"\"" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-2b9r5" podUID="e7284ad1-abd9-4775-8160-682b71f642fd" Nov 23 06:56:08 crc kubenswrapper[4681]: I1123 06:56:08.270840 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-b4wh6" event={"ID":"4c5081a4-24b2-4510-af78-f4db91213b65","Type":"ContainerStarted","Data":"b373b9b9ae848083b30b3d2b068fd1a19d99a64030ec9ce73643b3feac0939fc"} Nov 23 06:56:08 crc kubenswrapper[4681]: I1123 06:56:08.270887 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-b4wh6" event={"ID":"4c5081a4-24b2-4510-af78-f4db91213b65","Type":"ContainerStarted","Data":"aed690b630c7997eb470f01d720f7f12eb42afeaf2ca109d23cb55911f8f4f14"} Nov 23 06:56:08 crc kubenswrapper[4681]: E1123 06:56:08.272595 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-b4wh6" podUID="4c5081a4-24b2-4510-af78-f4db91213b65" Nov 23 06:56:08 crc kubenswrapper[4681]: I1123 06:56:08.297841 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-8dqqb" 
event={"ID":"5f1ff057-2960-4375-b710-e7db2790d618","Type":"ContainerStarted","Data":"975cde31fa74cd3d4756f6122a1f0600bb6dfa6cb6ac2641aa2543bab96f05ee"} Nov 23 06:56:08 crc kubenswrapper[4681]: I1123 06:56:08.683071 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-7rq82"] Nov 23 06:56:09 crc kubenswrapper[4681]: I1123 06:56:09.336366 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-7rq82" event={"ID":"6d124d17-822f-4a02-830d-1274146f2ae0","Type":"ContainerStarted","Data":"0364bd3e6a50f1309ce8337cad89a80591ba8889609b5d9b2890e3cd183c3ac7"} Nov 23 06:56:09 crc kubenswrapper[4681]: E1123 06:56:09.341287 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c\\\"\"" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-552m2" podUID="adbff020-4ba4-4712-855e-32addf53a9de" Nov 23 06:56:09 crc kubenswrapper[4681]: E1123 06:56:09.341656 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0\\\"\"" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-2b9r5" podUID="e7284ad1-abd9-4775-8160-682b71f642fd" Nov 23 06:56:09 crc kubenswrapper[4681]: E1123 06:56:09.342202 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44zrbfr" podUID="1aa2ff67-0121-4828-a7ab-96f69e7cb81c" Nov 23 06:56:09 crc kubenswrapper[4681]: I1123 06:56:09.342334 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-7rq82" event={"ID":"6d124d17-822f-4a02-830d-1274146f2ae0","Type":"ContainerStarted","Data":"107b354a344862e510dc6a6bb38cb6bf312274a3eb368e5147c7762f6a64253e"} Nov 23 06:56:09 crc kubenswrapper[4681]: I1123 06:56:09.342365 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-7rq82" event={"ID":"6d124d17-822f-4a02-830d-1274146f2ae0","Type":"ContainerStarted","Data":"d8d6aa2a363728243230a9fc615b52217b01b258532ddf498de084651e0bfb08"} Nov 23 06:56:09 crc kubenswrapper[4681]: E1123 06:56:09.351590 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-b4wh6" podUID="4c5081a4-24b2-4510-af78-f4db91213b65" Nov 23 06:56:09 crc kubenswrapper[4681]: E1123 06:56:09.351806 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-fgshf" podUID="19931d5d-8219-4f8e-91a2-9b5815bef583" Nov 23 06:56:09 crc kubenswrapper[4681]: E1123 06:56:09.351860 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:c6405d94e56b40ef669729216ab4b9c441f34bb280902efa2940038c076b560f\\\"\"" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-46t6j" podUID="af32e286-ab9d-4a19-98ae-3ad944d30031" Nov 23 06:56:09 crc kubenswrapper[4681]: I1123 06:56:09.464926 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-7rq82" podStartSLOduration=3.464908888 podStartE2EDuration="3.464908888s" podCreationTimestamp="2025-11-23 06:56:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:56:09.463566276 +0000 UTC m=+706.533075503" watchObservedRunningTime="2025-11-23 06:56:09.464908888 +0000 UTC m=+706.534418124" Nov 23 06:56:10 crc kubenswrapper[4681]: I1123 06:56:10.358029 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-7rq82" Nov 23 06:56:12 crc kubenswrapper[4681]: I1123 06:56:12.296242 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 06:56:12 crc kubenswrapper[4681]: I1123 06:56:12.296678 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 06:56:18 crc kubenswrapper[4681]: I1123 06:56:18.109226 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-7rq82" Nov 23 06:56:21 crc kubenswrapper[4681]: E1123 06:56:21.083771 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894" Nov 23 06:56:21 crc kubenswrapper[4681]: E1123 06:56:21.084200 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 
-3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fjjvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-769d9c7585-fwp7j_openstack-operators(23363330-2571-416c-b67a-2f6c40a32f25): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 06:56:21 crc kubenswrapper[4681]: E1123 06:56:21.510788 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d" Nov 23 06:56:21 crc kubenswrapper[4681]: E1123 06:56:21.510983 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-srxgj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-8464cf66df-4xdkl_openstack-operators(89fe9c5e-c007-47e1-aceb-b0e99e22c33b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 06:56:21 crc kubenswrapper[4681]: E1123 06:56:21.906750 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13" Nov 23 06:56:21 crc kubenswrapper[4681]: E1123 06:56:21.906929 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2zg76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-6fdc856c5d-tjgp6_openstack-operators(1fda6923-a93f-48b4-bc98-72a16ec81d76): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 06:56:22 crc kubenswrapper[4681]: E1123 06:56:22.755009 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b" Nov 23 06:56:22 crc kubenswrapper[4681]: E1123 06:56:22.755221 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2fv9b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-5bdf4f7f7f-zmhkz_openstack-operators(d5bb9b2e-1aa7-4970-847a-c36a687a9a46): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 06:56:23 crc kubenswrapper[4681]: E1123 06:56:23.368108 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-tjgp6" podUID="1fda6923-a93f-48b4-bc98-72a16ec81d76" Nov 23 06:56:23 crc kubenswrapper[4681]: I1123 06:56:23.452661 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-tjgp6" event={"ID":"1fda6923-a93f-48b4-bc98-72a16ec81d76","Type":"ContainerStarted","Data":"a1c89339e3f15415790a27a6806b8e9b61f42118eaaccdb7e1890263bd5e0698"} Nov 23 06:56:23 crc kubenswrapper[4681]: E1123 06:56:23.458799 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-tjgp6" podUID="1fda6923-a93f-48b4-bc98-72a16ec81d76" Nov 23 06:56:23 crc kubenswrapper[4681]: E1123 06:56:23.534818 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-8464cf66df-4xdkl" podUID="89fe9c5e-c007-47e1-aceb-b0e99e22c33b" Nov 23 06:56:23 crc kubenswrapper[4681]: E1123 06:56:23.545886 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-zmhkz" podUID="d5bb9b2e-1aa7-4970-847a-c36a687a9a46" Nov 23 06:56:23 crc kubenswrapper[4681]: E1123 06:56:23.628737 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-fwp7j" podUID="23363330-2571-416c-b67a-2f6c40a32f25" Nov 23 06:56:24 crc kubenswrapper[4681]: I1123 06:56:24.544343 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-t2qsx" event={"ID":"f6c70cca-725e-4556-812a-98993453e495","Type":"ContainerStarted","Data":"358b7fe94f65fbef56f49c749fe1bb9f20ba8fe5cb70beccef40d9750289c72d"} Nov 23 06:56:24 crc kubenswrapper[4681]: I1123 06:56:24.563451 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-8dqqb" event={"ID":"5f1ff057-2960-4375-b710-e7db2790d618","Type":"ContainerStarted","Data":"55d0aac128ec1efec9668a74fcc08f9554e25da9e9be102c6af31177b4aba56e"} Nov 23 06:56:24 crc kubenswrapper[4681]: I1123 06:56:24.580403 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-lrs4z" event={"ID":"22041f9c-9d77-4e36-ad68-c08f5fb4dd1a","Type":"ContainerStarted","Data":"3235085c5f27ac37ee09347834d0cd7a210dca67f2682c20e265289f96e98ad1"} Nov 23 06:56:24 crc kubenswrapper[4681]: I1123 06:56:24.580445 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-lrs4z" event={"ID":"22041f9c-9d77-4e36-ad68-c08f5fb4dd1a","Type":"ContainerStarted","Data":"15630075607ef54dfe8a324bc92885127b7a898caefc57f496aba44b222a90ef"} Nov 23 06:56:24 crc kubenswrapper[4681]: I1123 06:56:24.595354 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-2b9r5" event={"ID":"e7284ad1-abd9-4775-8160-682b71f642fd","Type":"ContainerStarted","Data":"456cb5c1cce709f03a8b3080a04b95dfadc0553f7323709718c47e5204ceba41"} Nov 23 06:56:24 crc kubenswrapper[4681]: I1123 06:56:24.596180 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-2b9r5" Nov 23 06:56:24 crc kubenswrapper[4681]: I1123 06:56:24.610016 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-fb7rv" event={"ID":"2fcf9694-3ad5-4da3-8ce1-330ef77b9b5a","Type":"ContainerStarted","Data":"4b8bf41d27a03fa1c9fefd9547285dd3d0089c19f7854740aee3ae68f1889f44"} Nov 23 06:56:24 crc kubenswrapper[4681]: I1123 06:56:24.623776 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-4w9ff" event={"ID":"502c86ef-be84-47f4-af12-fc3cff24f444","Type":"ContainerStarted","Data":"5eed054a29251f02841e9b82cac8ee7f185dc86f79d299d61c54b350edb81ee5"} Nov 23 06:56:24 crc kubenswrapper[4681]: I1123 06:56:24.640220 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8464cf66df-4xdkl" event={"ID":"89fe9c5e-c007-47e1-aceb-b0e99e22c33b","Type":"ContainerStarted","Data":"5887fdaf37b6bd33a7c91c240ddd3059242c2426dc87cf8aa3b8a046d4ae16f0"} Nov 23 06:56:24 crc kubenswrapper[4681]: E1123 06:56:24.642497 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\"" pod="openstack-operators/test-operator-controller-manager-8464cf66df-4xdkl" podUID="89fe9c5e-c007-47e1-aceb-b0e99e22c33b" Nov 23 06:56:24 crc kubenswrapper[4681]: I1123 06:56:24.653744 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-fwp7j" 
event={"ID":"23363330-2571-416c-b67a-2f6c40a32f25","Type":"ContainerStarted","Data":"41090b31063d31a7b88266dc2638f2d4bc27c07f2f1fecd118a080de5c49685e"} Nov 23 06:56:24 crc kubenswrapper[4681]: E1123 06:56:24.657565 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894\\\"\"" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-fwp7j" podUID="23363330-2571-416c-b67a-2f6c40a32f25" Nov 23 06:56:24 crc kubenswrapper[4681]: I1123 06:56:24.659182 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-h72sb" event={"ID":"c73ba2c6-865e-4812-b05a-445b52643ca4","Type":"ContainerStarted","Data":"25228a203bef0c01c288f9a54b03283feb47d334a0d8cf516240329e70977834"} Nov 23 06:56:24 crc kubenswrapper[4681]: I1123 06:56:24.678863 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-njlbw" event={"ID":"def12e0d-381b-4c20-a31b-080c4a886b41","Type":"ContainerStarted","Data":"e3adfb5ab9cc22950f55c3f8d5bb9227b1a3cb2d6676aa8ac01e7c02dd8c28cb"} Nov 23 06:56:24 crc kubenswrapper[4681]: I1123 06:56:24.679028 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-2b9r5" podStartSLOduration=4.073696095 podStartE2EDuration="19.679012938s" podCreationTimestamp="2025-11-23 06:56:05 +0000 UTC" firstStartedPulling="2025-11-23 06:56:07.694645684 +0000 UTC m=+704.764154921" lastFinishedPulling="2025-11-23 06:56:23.299962527 +0000 UTC m=+720.369471764" observedRunningTime="2025-11-23 06:56:24.640382105 +0000 UTC m=+721.709891342" watchObservedRunningTime="2025-11-23 06:56:24.679012938 +0000 UTC m=+721.748522175" Nov 23 06:56:24 crc kubenswrapper[4681]: I1123 06:56:24.687303 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-lrqst" event={"ID":"1ca66090-ac50-407c-baaa-f7cb3caa82f1","Type":"ContainerStarted","Data":"934b411e20b7214bd3790f6c42efd5694fa16f99b2c0831aa74acaca6c7d38fc"} Nov 23 06:56:24 crc kubenswrapper[4681]: I1123 06:56:24.702442 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-zmhkz" event={"ID":"d5bb9b2e-1aa7-4970-847a-c36a687a9a46","Type":"ContainerStarted","Data":"b0573c350110f6e4ff22065246fc65c80886efb8439050beee1c048e415896b3"} Nov 23 06:56:24 crc kubenswrapper[4681]: E1123 06:56:24.704574 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-zmhkz" podUID="d5bb9b2e-1aa7-4970-847a-c36a687a9a46" Nov 23 06:56:24 crc kubenswrapper[4681]: I1123 06:56:24.743766 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-6f42v" event={"ID":"92fd8223-0f35-4473-bcfc-9ca87c9b7a23","Type":"ContainerStarted","Data":"719ce5daeca01e39f74abe5f82cf4ceebaacdd688e8370a15a10a78d3c325511"} Nov 23 06:56:24 crc kubenswrapper[4681]: I1123 06:56:24.767448 4681 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-w5ggm" event={"ID":"34ce0225-164a-45e4-b5c5-a8c5e9aa5c1a","Type":"ContainerStarted","Data":"ce650deceecf974e5522474b5eaec39c16d5e3023019f1697aa6683e14652555"} Nov 23 06:56:24 crc kubenswrapper[4681]: I1123 06:56:24.780979 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-d7xkk" event={"ID":"a005c11d-acdf-48a8-8221-8fa148272da7","Type":"ContainerStarted","Data":"15f32812321c65133683ea33fda93867cbf00c8fc657ad1e0ed08915821c30ac"} Nov 23 06:56:24 crc kubenswrapper[4681]: I1123 06:56:24.814805 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-52p6s" event={"ID":"02d56c34-5751-4289-92d8-e1884b6783a1","Type":"ContainerStarted","Data":"51d661a6ae63b1ae96c59b0d977a9b576f9ac22c46f907d5190e5b8258b3c82a"} Nov 23 06:56:24 crc kubenswrapper[4681]: E1123 06:56:24.911594 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-tjgp6" podUID="1fda6923-a93f-48b4-bc98-72a16ec81d76" Nov 23 06:56:25 crc kubenswrapper[4681]: I1123 06:56:25.836555 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-4w9ff" event={"ID":"502c86ef-be84-47f4-af12-fc3cff24f444","Type":"ContainerStarted","Data":"d91f83b89c90286ee5aaa0e505acb5eb9e32a97b1a2dd13e1ba09418250cbe7f"} Nov 23 06:56:25 crc kubenswrapper[4681]: I1123 06:56:25.837476 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-4w9ff" Nov 23 06:56:25 crc kubenswrapper[4681]: I1123 06:56:25.839713 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-w5ggm" event={"ID":"34ce0225-164a-45e4-b5c5-a8c5e9aa5c1a","Type":"ContainerStarted","Data":"bdc72a2d7e82d9687495779932cf51d84e11f614c0c201cd937e2657a1a81334"} Nov 23 06:56:25 crc kubenswrapper[4681]: I1123 06:56:25.840408 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-w5ggm" Nov 23 06:56:25 crc kubenswrapper[4681]: I1123 06:56:25.844837 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-d7xkk" event={"ID":"a005c11d-acdf-48a8-8221-8fa148272da7","Type":"ContainerStarted","Data":"ab2b4afedb28dd60425e733500eabc65208a6ff7ddeeb5cb12cb255e374d7a0e"} Nov 23 06:56:25 crc kubenswrapper[4681]: I1123 06:56:25.845281 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-d7xkk" Nov 23 06:56:25 crc kubenswrapper[4681]: I1123 06:56:25.847741 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-fb7rv" event={"ID":"2fcf9694-3ad5-4da3-8ce1-330ef77b9b5a","Type":"ContainerStarted","Data":"02d2546b0f797dee9d905ecb56e6029ffd1fe3e28c59c781ea9343b8604a7a01"} Nov 23 06:56:25 crc kubenswrapper[4681]: I1123 06:56:25.848174 4681 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-fb7rv" Nov 23 06:56:25 crc kubenswrapper[4681]: I1123 06:56:25.852666 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-52p6s" event={"ID":"02d56c34-5751-4289-92d8-e1884b6783a1","Type":"ContainerStarted","Data":"e0e061b555d271d4fd78fbfb57fe6e7242efb36ffc538286810af445ab6b0c8a"} Nov 23 06:56:25 crc kubenswrapper[4681]: I1123 06:56:25.853094 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-52p6s" Nov 23 06:56:25 crc kubenswrapper[4681]: I1123 06:56:25.859672 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-4w9ff" podStartSLOduration=5.152518973 podStartE2EDuration="20.859646125s" podCreationTimestamp="2025-11-23 06:56:05 +0000 UTC" firstStartedPulling="2025-11-23 06:56:07.449397149 +0000 UTC m=+704.518906386" lastFinishedPulling="2025-11-23 06:56:23.156524301 +0000 UTC m=+720.226033538" observedRunningTime="2025-11-23 06:56:25.858203837 +0000 UTC m=+722.927713075" watchObservedRunningTime="2025-11-23 06:56:25.859646125 +0000 UTC m=+722.929155362" Nov 23 06:56:25 crc kubenswrapper[4681]: I1123 06:56:25.864507 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-lrqst" event={"ID":"1ca66090-ac50-407c-baaa-f7cb3caa82f1","Type":"ContainerStarted","Data":"72941b57cfd38a4fbf6ef785a09a44652a0faf18cb164b7c4a93330b9dfb4b86"} Nov 23 06:56:25 crc kubenswrapper[4681]: I1123 06:56:25.864652 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-lrqst" Nov 23 06:56:25 crc kubenswrapper[4681]: I1123 06:56:25.875381 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-njlbw" event={"ID":"def12e0d-381b-4c20-a31b-080c4a886b41","Type":"ContainerStarted","Data":"a4da8c28012549aa818bdcd209862657d3453f8d5d6e2f5448c2931c0eff0ea5"} Nov 23 06:56:25 crc kubenswrapper[4681]: I1123 06:56:25.875857 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-njlbw" Nov 23 06:56:25 crc kubenswrapper[4681]: I1123 06:56:25.884416 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-8dqqb" event={"ID":"5f1ff057-2960-4375-b710-e7db2790d618","Type":"ContainerStarted","Data":"61e5917701caec7344409531001016f0052244da339989b92bfd5769d6d95311"} Nov 23 06:56:25 crc kubenswrapper[4681]: I1123 06:56:25.885963 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-8dqqb" Nov 23 06:56:25 crc kubenswrapper[4681]: I1123 06:56:25.888955 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-t2qsx" event={"ID":"f6c70cca-725e-4556-812a-98993453e495","Type":"ContainerStarted","Data":"075d0da7736c1b633631ffd1a23fb9032c8a4b17724f03cf5d284b009508e5f9"} Nov 23 06:56:25 crc kubenswrapper[4681]: I1123 06:56:25.889425 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-t2qsx" Nov 23 06:56:25 crc kubenswrapper[4681]: I1123 06:56:25.892778 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-w5ggm" podStartSLOduration=4.316993315 podStartE2EDuration="20.892760503s" podCreationTimestamp="2025-11-23 06:56:05 +0000 UTC" firstStartedPulling="2025-11-23 06:56:06.580266017 +0000 UTC m=+703.649775254" lastFinishedPulling="2025-11-23 06:56:23.156033205 +0000 UTC m=+720.225542442" observedRunningTime="2025-11-23 06:56:25.877383227 +0000 UTC m=+722.946892464" watchObservedRunningTime="2025-11-23 06:56:25.892760503 +0000 UTC m=+722.962269741" Nov 23 06:56:25 crc kubenswrapper[4681]: I1123 06:56:25.901387 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-fb7rv" podStartSLOduration=5.430863378 podStartE2EDuration="20.901376569s" podCreationTimestamp="2025-11-23 06:56:05 +0000 UTC" firstStartedPulling="2025-11-23 06:56:07.68536315 +0000 UTC m=+704.754872387" lastFinishedPulling="2025-11-23 06:56:23.15587634 +0000 UTC m=+720.225385578" observedRunningTime="2025-11-23 06:56:25.898945438 +0000 UTC m=+722.968454675" watchObservedRunningTime="2025-11-23 06:56:25.901376569 +0000 UTC m=+722.970885806" Nov 23 06:56:25 crc kubenswrapper[4681]: I1123 06:56:25.905524 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-h72sb" event={"ID":"c73ba2c6-865e-4812-b05a-445b52643ca4","Type":"ContainerStarted","Data":"c921ccee9521a3cc523c80c0bce12ff692d055ace8a8c04f1d35c8d990c465e6"} Nov 23 06:56:25 crc kubenswrapper[4681]: I1123 06:56:25.905787 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-h72sb" Nov 23 06:56:25 crc kubenswrapper[4681]: I1123 06:56:25.909440 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-6f42v" event={"ID":"92fd8223-0f35-4473-bcfc-9ca87c9b7a23","Type":"ContainerStarted","Data":"3d6e3fb9f9e12b25cbef2b44d8a554446f1e57d2cc04ad5a945034154ca1edff"} Nov 23 06:56:25 crc kubenswrapper[4681]: I1123 06:56:25.912749 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-lrs4z" Nov 23 06:56:25 crc kubenswrapper[4681]: I1123 06:56:25.912844 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-6f42v" Nov 23 06:56:25 crc kubenswrapper[4681]: E1123 06:56:25.913217 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-zmhkz" podUID="d5bb9b2e-1aa7-4970-847a-c36a687a9a46" Nov 23 06:56:25 crc kubenswrapper[4681]: E1123 06:56:25.913752 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894\\\"\"" 
pod="openstack-operators/infra-operator-controller-manager-769d9c7585-fwp7j" podUID="23363330-2571-416c-b67a-2f6c40a32f25" Nov 23 06:56:25 crc kubenswrapper[4681]: E1123 06:56:25.913824 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\"" pod="openstack-operators/test-operator-controller-manager-8464cf66df-4xdkl" podUID="89fe9c5e-c007-47e1-aceb-b0e99e22c33b" Nov 23 06:56:25 crc kubenswrapper[4681]: I1123 06:56:25.934752 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-d7xkk" podStartSLOduration=4.884241331 podStartE2EDuration="20.934717855s" podCreationTimestamp="2025-11-23 06:56:05 +0000 UTC" firstStartedPulling="2025-11-23 06:56:07.101357507 +0000 UTC m=+704.170866745" lastFinishedPulling="2025-11-23 06:56:23.151834033 +0000 UTC m=+720.221343269" observedRunningTime="2025-11-23 06:56:25.919172182 +0000 UTC m=+722.988681408" watchObservedRunningTime="2025-11-23 06:56:25.934717855 +0000 UTC m=+723.004227092" Nov 23 06:56:25 crc kubenswrapper[4681]: I1123 06:56:25.947649 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-52p6s" podStartSLOduration=5.268617221 podStartE2EDuration="20.947621811s" podCreationTimestamp="2025-11-23 06:56:05 +0000 UTC" firstStartedPulling="2025-11-23 06:56:07.473797157 +0000 UTC m=+704.543306393" lastFinishedPulling="2025-11-23 06:56:23.152801746 +0000 UTC m=+720.222310983" observedRunningTime="2025-11-23 06:56:25.941001596 +0000 UTC m=+723.010510832" watchObservedRunningTime="2025-11-23 06:56:25.947621811 +0000 UTC m=+723.017131048" Nov 23 06:56:25 crc kubenswrapper[4681]: I1123 06:56:25.970567 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-t2qsx" podStartSLOduration=5.011417697 podStartE2EDuration="20.970540377s" podCreationTimestamp="2025-11-23 06:56:05 +0000 UTC" firstStartedPulling="2025-11-23 06:56:07.194396318 +0000 UTC m=+704.263905555" lastFinishedPulling="2025-11-23 06:56:23.153518998 +0000 UTC m=+720.223028235" observedRunningTime="2025-11-23 06:56:25.959308422 +0000 UTC m=+723.028817660" watchObservedRunningTime="2025-11-23 06:56:25.970540377 +0000 UTC m=+723.040049614" Nov 23 06:56:26 crc kubenswrapper[4681]: I1123 06:56:26.019694 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-8dqqb" podStartSLOduration=5.127292068 podStartE2EDuration="21.019671817s" podCreationTimestamp="2025-11-23 06:56:05 +0000 UTC" firstStartedPulling="2025-11-23 06:56:07.261809922 +0000 UTC m=+704.331319159" lastFinishedPulling="2025-11-23 06:56:23.154189672 +0000 UTC m=+720.223698908" observedRunningTime="2025-11-23 06:56:25.991966691 +0000 UTC m=+723.061475928" watchObservedRunningTime="2025-11-23 06:56:26.019671817 +0000 UTC m=+723.089181054" Nov 23 06:56:26 crc kubenswrapper[4681]: I1123 06:56:26.019873 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-njlbw" podStartSLOduration=4.9034005050000005 podStartE2EDuration="21.019867135s" podCreationTimestamp="2025-11-23 06:56:05 +0000 
UTC" firstStartedPulling="2025-11-23 06:56:07.039371087 +0000 UTC m=+704.108880324" lastFinishedPulling="2025-11-23 06:56:23.155837717 +0000 UTC m=+720.225346954" observedRunningTime="2025-11-23 06:56:26.003740527 +0000 UTC m=+723.073249764" watchObservedRunningTime="2025-11-23 06:56:26.019867135 +0000 UTC m=+723.089376372" Nov 23 06:56:26 crc kubenswrapper[4681]: I1123 06:56:26.030572 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-6f42v" podStartSLOduration=4.975384934 podStartE2EDuration="21.030548382s" podCreationTimestamp="2025-11-23 06:56:05 +0000 UTC" firstStartedPulling="2025-11-23 06:56:07.114262936 +0000 UTC m=+704.183772162" lastFinishedPulling="2025-11-23 06:56:23.169426372 +0000 UTC m=+720.238935610" observedRunningTime="2025-11-23 06:56:26.020300983 +0000 UTC m=+723.089810240" watchObservedRunningTime="2025-11-23 06:56:26.030548382 +0000 UTC m=+723.100057619" Nov 23 06:56:26 crc kubenswrapper[4681]: I1123 06:56:26.047057 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-lrs4z" podStartSLOduration=4.923535709 podStartE2EDuration="21.047019509s" podCreationTimestamp="2025-11-23 06:56:05 +0000 UTC" firstStartedPulling="2025-11-23 06:56:07.02760584 +0000 UTC m=+704.097115067" lastFinishedPulling="2025-11-23 06:56:23.151089631 +0000 UTC m=+720.220598867" observedRunningTime="2025-11-23 06:56:26.039048108 +0000 UTC m=+723.108557345" watchObservedRunningTime="2025-11-23 06:56:26.047019509 +0000 UTC m=+723.116528746" Nov 23 06:56:26 crc kubenswrapper[4681]: I1123 06:56:26.068349 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-lrqst" podStartSLOduration=4.570307754 podStartE2EDuration="21.068316259s" podCreationTimestamp="2025-11-23 06:56:05 +0000 UTC" firstStartedPulling="2025-11-23 06:56:06.654787479 +0000 UTC m=+703.724296717" lastFinishedPulling="2025-11-23 06:56:23.152795985 +0000 UTC m=+720.222305222" observedRunningTime="2025-11-23 06:56:26.059833115 +0000 UTC m=+723.129342351" watchObservedRunningTime="2025-11-23 06:56:26.068316259 +0000 UTC m=+723.137825496" Nov 23 06:56:26 crc kubenswrapper[4681]: I1123 06:56:26.085410 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-h72sb" podStartSLOduration=4.969521495 podStartE2EDuration="21.08539431s" podCreationTimestamp="2025-11-23 06:56:05 +0000 UTC" firstStartedPulling="2025-11-23 06:56:07.040126165 +0000 UTC m=+704.109635403" lastFinishedPulling="2025-11-23 06:56:23.155998991 +0000 UTC m=+720.225508218" observedRunningTime="2025-11-23 06:56:26.07543855 +0000 UTC m=+723.144947787" watchObservedRunningTime="2025-11-23 06:56:26.08539431 +0000 UTC m=+723.154903537" Nov 23 06:56:31 crc kubenswrapper[4681]: I1123 06:56:31.969192 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-fgshf" event={"ID":"19931d5d-8219-4f8e-91a2-9b5815bef583","Type":"ContainerStarted","Data":"ff74318c74ce476be928075af4f63e6abd181341ece135262a7a71844de3526f"} Nov 23 06:56:31 crc kubenswrapper[4681]: I1123 06:56:31.973043 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-46t6j" 
event={"ID":"af32e286-ab9d-4a19-98ae-3ad944d30031","Type":"ContainerStarted","Data":"ac551a7538f7fa6a0d5b5d4edbe141940dfed361f01e5d49175c501701163133"} Nov 23 06:56:31 crc kubenswrapper[4681]: I1123 06:56:31.973395 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-46t6j" Nov 23 06:56:31 crc kubenswrapper[4681]: I1123 06:56:31.976563 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-b4wh6" event={"ID":"4c5081a4-24b2-4510-af78-f4db91213b65","Type":"ContainerStarted","Data":"123c97ece7adc101ae099bb1e46dab87c84b0ff00bde2ca7b01b6906b2ca8990"} Nov 23 06:56:31 crc kubenswrapper[4681]: I1123 06:56:31.977033 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-b4wh6" Nov 23 06:56:31 crc kubenswrapper[4681]: I1123 06:56:31.978552 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-552m2" event={"ID":"adbff020-4ba4-4712-855e-32addf53a9de","Type":"ContainerStarted","Data":"fed9971c8fd3ae01eb9c02e374f671a54deb9767c0ef5a17bf1322e92584c130"} Nov 23 06:56:31 crc kubenswrapper[4681]: I1123 06:56:31.978949 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-552m2" Nov 23 06:56:31 crc kubenswrapper[4681]: I1123 06:56:31.980731 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44zrbfr" event={"ID":"1aa2ff67-0121-4828-a7ab-96f69e7cb81c","Type":"ContainerStarted","Data":"78f28a5af596ce85394ccba018910a9d746153579c1e0e61751d2401db644f9d"} Nov 23 06:56:31 crc kubenswrapper[4681]: I1123 06:56:31.981098 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44zrbfr" Nov 23 06:56:31 crc kubenswrapper[4681]: I1123 06:56:31.985987 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-fgshf" podStartSLOduration=2.468671202 podStartE2EDuration="25.985974211s" podCreationTimestamp="2025-11-23 06:56:06 +0000 UTC" firstStartedPulling="2025-11-23 06:56:07.839376451 +0000 UTC m=+704.908885678" lastFinishedPulling="2025-11-23 06:56:31.35667945 +0000 UTC m=+728.426188687" observedRunningTime="2025-11-23 06:56:31.984536803 +0000 UTC m=+729.054046041" watchObservedRunningTime="2025-11-23 06:56:31.985974211 +0000 UTC m=+729.055483449" Nov 23 06:56:32 crc kubenswrapper[4681]: I1123 06:56:32.001780 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-46t6j" podStartSLOduration=3.582454576 podStartE2EDuration="27.001770076s" podCreationTimestamp="2025-11-23 06:56:05 +0000 UTC" firstStartedPulling="2025-11-23 06:56:07.931897951 +0000 UTC m=+705.001407188" lastFinishedPulling="2025-11-23 06:56:31.351213451 +0000 UTC m=+728.420722688" observedRunningTime="2025-11-23 06:56:32.000155123 +0000 UTC m=+729.069664370" watchObservedRunningTime="2025-11-23 06:56:32.001770076 +0000 UTC m=+729.071279314" Nov 23 06:56:32 crc kubenswrapper[4681]: I1123 06:56:32.025504 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-b4wh6" podStartSLOduration=3.502738708 podStartE2EDuration="27.025479001s" podCreationTimestamp="2025-11-23 06:56:05 +0000 UTC" firstStartedPulling="2025-11-23 06:56:07.826047126 +0000 UTC m=+704.895556363" lastFinishedPulling="2025-11-23 06:56:31.348787429 +0000 UTC m=+728.418296656" observedRunningTime="2025-11-23 06:56:32.020835381 +0000 UTC m=+729.090344618" watchObservedRunningTime="2025-11-23 06:56:32.025479001 +0000 UTC m=+729.094988239" Nov 23 06:56:32 crc kubenswrapper[4681]: I1123 06:56:32.051637 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44zrbfr" podStartSLOduration=3.554725333 podStartE2EDuration="27.051623276s" podCreationTimestamp="2025-11-23 06:56:05 +0000 UTC" firstStartedPulling="2025-11-23 06:56:07.859281796 +0000 UTC m=+704.928791033" lastFinishedPulling="2025-11-23 06:56:31.356179739 +0000 UTC m=+728.425688976" observedRunningTime="2025-11-23 06:56:32.045767733 +0000 UTC m=+729.115276969" watchObservedRunningTime="2025-11-23 06:56:32.051623276 +0000 UTC m=+729.121132513" Nov 23 06:56:32 crc kubenswrapper[4681]: I1123 06:56:32.064147 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-552m2" podStartSLOduration=3.395703143 podStartE2EDuration="27.064126186s" podCreationTimestamp="2025-11-23 06:56:05 +0000 UTC" firstStartedPulling="2025-11-23 06:56:07.702228536 +0000 UTC m=+704.771737773" lastFinishedPulling="2025-11-23 06:56:31.370651589 +0000 UTC m=+728.440160816" observedRunningTime="2025-11-23 06:56:32.058240224 +0000 UTC m=+729.127749461" watchObservedRunningTime="2025-11-23 06:56:32.064126186 +0000 UTC m=+729.133635423" Nov 23 06:56:35 crc kubenswrapper[4681]: I1123 06:56:35.725544 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-w5ggm" Nov 23 06:56:35 crc kubenswrapper[4681]: I1123 06:56:35.778150 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-lrqst" Nov 23 06:56:35 crc kubenswrapper[4681]: I1123 06:56:35.949412 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-njlbw" Nov 23 06:56:36 crc kubenswrapper[4681]: I1123 06:56:36.012586 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-d7xkk" Nov 23 06:56:36 crc kubenswrapper[4681]: I1123 06:56:36.050591 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-h72sb" Nov 23 06:56:36 crc kubenswrapper[4681]: I1123 06:56:36.087701 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-lrs4z" Nov 23 06:56:36 crc kubenswrapper[4681]: I1123 06:56:36.125981 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-4w9ff" Nov 23 06:56:36 crc kubenswrapper[4681]: I1123 06:56:36.132667 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-6f42v" Nov 23 06:56:36 crc kubenswrapper[4681]: I1123 06:56:36.157254 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-52p6s" Nov 23 06:56:36 crc kubenswrapper[4681]: I1123 06:56:36.198176 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-t2qsx" Nov 23 06:56:36 crc kubenswrapper[4681]: I1123 06:56:36.234607 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-8dqqb" Nov 23 06:56:36 crc kubenswrapper[4681]: I1123 06:56:36.486364 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-b4wh6" Nov 23 06:56:36 crc kubenswrapper[4681]: I1123 06:56:36.531625 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-2b9r5" Nov 23 06:56:36 crc kubenswrapper[4681]: I1123 06:56:36.554902 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-552m2" Nov 23 06:56:36 crc kubenswrapper[4681]: I1123 06:56:36.557184 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-fb7rv" Nov 23 06:56:36 crc kubenswrapper[4681]: I1123 06:56:36.807469 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44zrbfr" Nov 23 06:56:36 crc kubenswrapper[4681]: I1123 06:56:36.969422 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-46t6j" Nov 23 06:56:39 crc kubenswrapper[4681]: I1123 06:56:39.035626 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-tjgp6" event={"ID":"1fda6923-a93f-48b4-bc98-72a16ec81d76","Type":"ContainerStarted","Data":"45d4f94cb7c9e7775057984db75f68a52a8ffc5cda74b7a0ded7e574b08a84ed"} Nov 23 06:56:39 crc kubenswrapper[4681]: I1123 06:56:39.036230 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-tjgp6" Nov 23 06:56:39 crc kubenswrapper[4681]: I1123 06:56:39.037833 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8464cf66df-4xdkl" event={"ID":"89fe9c5e-c007-47e1-aceb-b0e99e22c33b","Type":"ContainerStarted","Data":"402be21f2cb2e2aac3f3e1431a5fff4c38eb154879ad1bf4d722765ef0339e8b"} Nov 23 06:56:39 crc kubenswrapper[4681]: I1123 06:56:39.038605 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-8464cf66df-4xdkl" Nov 23 06:56:39 crc kubenswrapper[4681]: I1123 06:56:39.060113 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-tjgp6" podStartSLOduration=2.355247798 podStartE2EDuration="34.060086661s" podCreationTimestamp="2025-11-23 06:56:05 +0000 UTC" firstStartedPulling="2025-11-23 06:56:07.101706422 +0000 UTC 
m=+704.171215659" lastFinishedPulling="2025-11-23 06:56:38.806545295 +0000 UTC m=+735.876054522" observedRunningTime="2025-11-23 06:56:39.051452922 +0000 UTC m=+736.120962159" watchObservedRunningTime="2025-11-23 06:56:39.060086661 +0000 UTC m=+736.129595899" Nov 23 06:56:39 crc kubenswrapper[4681]: I1123 06:56:39.084141 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-8464cf66df-4xdkl" podStartSLOduration=2.537218841 podStartE2EDuration="34.084117643s" podCreationTimestamp="2025-11-23 06:56:05 +0000 UTC" firstStartedPulling="2025-11-23 06:56:07.194318142 +0000 UTC m=+704.263827379" lastFinishedPulling="2025-11-23 06:56:38.741216944 +0000 UTC m=+735.810726181" observedRunningTime="2025-11-23 06:56:39.07894094 +0000 UTC m=+736.148450176" watchObservedRunningTime="2025-11-23 06:56:39.084117643 +0000 UTC m=+736.153626880" Nov 23 06:56:41 crc kubenswrapper[4681]: I1123 06:56:41.053332 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-fwp7j" event={"ID":"23363330-2571-416c-b67a-2f6c40a32f25","Type":"ContainerStarted","Data":"c3ba00d69df46f43e01a4bf55b2a861a84cb35259f280fb00a22966676248fed"} Nov 23 06:56:41 crc kubenswrapper[4681]: I1123 06:56:41.054053 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-fwp7j" Nov 23 06:56:41 crc kubenswrapper[4681]: I1123 06:56:41.056252 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-zmhkz" event={"ID":"d5bb9b2e-1aa7-4970-847a-c36a687a9a46","Type":"ContainerStarted","Data":"089d2bb798fb889a5658ed347f405dbf1802a839339f1702000eeeb5132524d8"} Nov 23 06:56:41 crc kubenswrapper[4681]: I1123 06:56:41.056591 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-zmhkz" Nov 23 06:56:41 crc kubenswrapper[4681]: I1123 06:56:41.076608 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-fwp7j" podStartSLOduration=3.157251232 podStartE2EDuration="36.076589716s" podCreationTimestamp="2025-11-23 06:56:05 +0000 UTC" firstStartedPulling="2025-11-23 06:56:07.8282001 +0000 UTC m=+704.897709337" lastFinishedPulling="2025-11-23 06:56:40.747538584 +0000 UTC m=+737.817047821" observedRunningTime="2025-11-23 06:56:41.068778427 +0000 UTC m=+738.138287664" watchObservedRunningTime="2025-11-23 06:56:41.076589716 +0000 UTC m=+738.146098953" Nov 23 06:56:41 crc kubenswrapper[4681]: I1123 06:56:41.085581 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-zmhkz" podStartSLOduration=3.005026528 podStartE2EDuration="36.085567042s" podCreationTimestamp="2025-11-23 06:56:05 +0000 UTC" firstStartedPulling="2025-11-23 06:56:07.687666025 +0000 UTC m=+704.757175262" lastFinishedPulling="2025-11-23 06:56:40.768206539 +0000 UTC m=+737.837715776" observedRunningTime="2025-11-23 06:56:41.080485787 +0000 UTC m=+738.149995024" watchObservedRunningTime="2025-11-23 06:56:41.085567042 +0000 UTC m=+738.155076279" Nov 23 06:56:42 crc kubenswrapper[4681]: I1123 06:56:42.295767 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 06:56:42 crc kubenswrapper[4681]: I1123 06:56:42.295841 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 06:56:45 crc kubenswrapper[4681]: I1123 06:56:45.526113 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nk54m"] Nov 23 06:56:45 crc kubenswrapper[4681]: I1123 06:56:45.526559 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-nk54m" podUID="a57b9495-9a8d-4ec8-8a4d-92220d911386" containerName="controller-manager" containerID="cri-o://0859f21391197b0805c900d393f230deaeacbf02ebfd56a83b27fc9e3323f8ed" gracePeriod=30 Nov 23 06:56:45 crc kubenswrapper[4681]: I1123 06:56:45.632025 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9qp5r"] Nov 23 06:56:45 crc kubenswrapper[4681]: I1123 06:56:45.648803 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9qp5r" podUID="a8450c87-7b9b-47cf-86ce-145ef517f494" containerName="route-controller-manager" containerID="cri-o://8846e11b7789beecb5ddf1e21b9ee3e33b0b36285b490f65145047885a0e98b9" gracePeriod=30 Nov 23 06:56:45 crc kubenswrapper[4681]: I1123 06:56:45.964618 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nk54m" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.000533 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9qp5r" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.103197 4681 generic.go:334] "Generic (PLEG): container finished" podID="a57b9495-9a8d-4ec8-8a4d-92220d911386" containerID="0859f21391197b0805c900d393f230deaeacbf02ebfd56a83b27fc9e3323f8ed" exitCode=0 Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.103814 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nk54m" event={"ID":"a57b9495-9a8d-4ec8-8a4d-92220d911386","Type":"ContainerDied","Data":"0859f21391197b0805c900d393f230deaeacbf02ebfd56a83b27fc9e3323f8ed"} Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.103885 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nk54m" event={"ID":"a57b9495-9a8d-4ec8-8a4d-92220d911386","Type":"ContainerDied","Data":"4252dcd9d2157e65c6dd9a018da5f36eaff0bf3be2f6724b3bb865e8eebe787e"} Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.103912 4681 scope.go:117] "RemoveContainer" containerID="0859f21391197b0805c900d393f230deaeacbf02ebfd56a83b27fc9e3323f8ed" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.104330 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nk54m" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.106176 4681 generic.go:334] "Generic (PLEG): container finished" podID="a8450c87-7b9b-47cf-86ce-145ef517f494" containerID="8846e11b7789beecb5ddf1e21b9ee3e33b0b36285b490f65145047885a0e98b9" exitCode=0 Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.106297 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9qp5r" event={"ID":"a8450c87-7b9b-47cf-86ce-145ef517f494","Type":"ContainerDied","Data":"8846e11b7789beecb5ddf1e21b9ee3e33b0b36285b490f65145047885a0e98b9"} Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.106393 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9qp5r" event={"ID":"a8450c87-7b9b-47cf-86ce-145ef517f494","Type":"ContainerDied","Data":"378948b61b664181e0b15d3b5c5a9ccc73f160ec5764246278e6c054e58757e4"} Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.106335 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9qp5r" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.126308 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j4lm\" (UniqueName: \"kubernetes.io/projected/a57b9495-9a8d-4ec8-8a4d-92220d911386-kube-api-access-7j4lm\") pod \"a57b9495-9a8d-4ec8-8a4d-92220d911386\" (UID: \"a57b9495-9a8d-4ec8-8a4d-92220d911386\") " Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.126400 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk2h8\" (UniqueName: \"kubernetes.io/projected/a8450c87-7b9b-47cf-86ce-145ef517f494-kube-api-access-dk2h8\") pod \"a8450c87-7b9b-47cf-86ce-145ef517f494\" (UID: \"a8450c87-7b9b-47cf-86ce-145ef517f494\") " Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.126431 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8450c87-7b9b-47cf-86ce-145ef517f494-serving-cert\") pod \"a8450c87-7b9b-47cf-86ce-145ef517f494\" (UID: \"a8450c87-7b9b-47cf-86ce-145ef517f494\") " Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.126491 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8450c87-7b9b-47cf-86ce-145ef517f494-client-ca\") pod \"a8450c87-7b9b-47cf-86ce-145ef517f494\" (UID: \"a8450c87-7b9b-47cf-86ce-145ef517f494\") " Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.126542 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a57b9495-9a8d-4ec8-8a4d-92220d911386-client-ca\") pod \"a57b9495-9a8d-4ec8-8a4d-92220d911386\" (UID: \"a57b9495-9a8d-4ec8-8a4d-92220d911386\") " Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.126564 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a57b9495-9a8d-4ec8-8a4d-92220d911386-serving-cert\") pod \"a57b9495-9a8d-4ec8-8a4d-92220d911386\" (UID: \"a57b9495-9a8d-4ec8-8a4d-92220d911386\") " Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.126594 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a57b9495-9a8d-4ec8-8a4d-92220d911386-proxy-ca-bundles\") pod \"a57b9495-9a8d-4ec8-8a4d-92220d911386\" (UID: \"a57b9495-9a8d-4ec8-8a4d-92220d911386\") " Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.126651 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8450c87-7b9b-47cf-86ce-145ef517f494-config\") pod \"a8450c87-7b9b-47cf-86ce-145ef517f494\" (UID: \"a8450c87-7b9b-47cf-86ce-145ef517f494\") " Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.126714 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a57b9495-9a8d-4ec8-8a4d-92220d911386-config\") pod \"a57b9495-9a8d-4ec8-8a4d-92220d911386\" (UID: \"a57b9495-9a8d-4ec8-8a4d-92220d911386\") " Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.127669 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a57b9495-9a8d-4ec8-8a4d-92220d911386-client-ca" (OuterVolumeSpecName: "client-ca") pod "a57b9495-9a8d-4ec8-8a4d-92220d911386" (UID: "a57b9495-9a8d-4ec8-8a4d-92220d911386"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.128219 4681 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a57b9495-9a8d-4ec8-8a4d-92220d911386-client-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.128335 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a57b9495-9a8d-4ec8-8a4d-92220d911386-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a57b9495-9a8d-4ec8-8a4d-92220d911386" (UID: "a57b9495-9a8d-4ec8-8a4d-92220d911386"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.130662 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8450c87-7b9b-47cf-86ce-145ef517f494-client-ca" (OuterVolumeSpecName: "client-ca") pod "a8450c87-7b9b-47cf-86ce-145ef517f494" (UID: "a8450c87-7b9b-47cf-86ce-145ef517f494"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.131021 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8450c87-7b9b-47cf-86ce-145ef517f494-config" (OuterVolumeSpecName: "config") pod "a8450c87-7b9b-47cf-86ce-145ef517f494" (UID: "a8450c87-7b9b-47cf-86ce-145ef517f494"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.131040 4681 scope.go:117] "RemoveContainer" containerID="0859f21391197b0805c900d393f230deaeacbf02ebfd56a83b27fc9e3323f8ed" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.131353 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a57b9495-9a8d-4ec8-8a4d-92220d911386-config" (OuterVolumeSpecName: "config") pod "a57b9495-9a8d-4ec8-8a4d-92220d911386" (UID: "a57b9495-9a8d-4ec8-8a4d-92220d911386"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:56:46 crc kubenswrapper[4681]: E1123 06:56:46.131723 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0859f21391197b0805c900d393f230deaeacbf02ebfd56a83b27fc9e3323f8ed\": container with ID starting with 0859f21391197b0805c900d393f230deaeacbf02ebfd56a83b27fc9e3323f8ed not found: ID does not exist" containerID="0859f21391197b0805c900d393f230deaeacbf02ebfd56a83b27fc9e3323f8ed" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.131763 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0859f21391197b0805c900d393f230deaeacbf02ebfd56a83b27fc9e3323f8ed"} err="failed to get container status \"0859f21391197b0805c900d393f230deaeacbf02ebfd56a83b27fc9e3323f8ed\": rpc error: code = NotFound desc = could not find container \"0859f21391197b0805c900d393f230deaeacbf02ebfd56a83b27fc9e3323f8ed\": container with ID starting with 0859f21391197b0805c900d393f230deaeacbf02ebfd56a83b27fc9e3323f8ed not found: ID does not exist" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.131788 4681 scope.go:117] "RemoveContainer" containerID="8846e11b7789beecb5ddf1e21b9ee3e33b0b36285b490f65145047885a0e98b9" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.134821 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8450c87-7b9b-47cf-86ce-145ef517f494-kube-api-access-dk2h8" (OuterVolumeSpecName: "kube-api-access-dk2h8") pod "a8450c87-7b9b-47cf-86ce-145ef517f494" (UID: "a8450c87-7b9b-47cf-86ce-145ef517f494"). InnerVolumeSpecName "kube-api-access-dk2h8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.135056 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8450c87-7b9b-47cf-86ce-145ef517f494-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a8450c87-7b9b-47cf-86ce-145ef517f494" (UID: "a8450c87-7b9b-47cf-86ce-145ef517f494"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.135936 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a57b9495-9a8d-4ec8-8a4d-92220d911386-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a57b9495-9a8d-4ec8-8a4d-92220d911386" (UID: "a57b9495-9a8d-4ec8-8a4d-92220d911386"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.140582 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a57b9495-9a8d-4ec8-8a4d-92220d911386-kube-api-access-7j4lm" (OuterVolumeSpecName: "kube-api-access-7j4lm") pod "a57b9495-9a8d-4ec8-8a4d-92220d911386" (UID: "a57b9495-9a8d-4ec8-8a4d-92220d911386"). InnerVolumeSpecName "kube-api-access-7j4lm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.146655 4681 scope.go:117] "RemoveContainer" containerID="8846e11b7789beecb5ddf1e21b9ee3e33b0b36285b490f65145047885a0e98b9" Nov 23 06:56:46 crc kubenswrapper[4681]: E1123 06:56:46.146982 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8846e11b7789beecb5ddf1e21b9ee3e33b0b36285b490f65145047885a0e98b9\": container with ID starting with 8846e11b7789beecb5ddf1e21b9ee3e33b0b36285b490f65145047885a0e98b9 not found: ID does not exist" containerID="8846e11b7789beecb5ddf1e21b9ee3e33b0b36285b490f65145047885a0e98b9" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.147030 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8846e11b7789beecb5ddf1e21b9ee3e33b0b36285b490f65145047885a0e98b9"} err="failed to get container status \"8846e11b7789beecb5ddf1e21b9ee3e33b0b36285b490f65145047885a0e98b9\": rpc error: code = NotFound desc = could not find container \"8846e11b7789beecb5ddf1e21b9ee3e33b0b36285b490f65145047885a0e98b9\": container with ID starting with 8846e11b7789beecb5ddf1e21b9ee3e33b0b36285b490f65145047885a0e98b9 not found: ID does not exist" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.173505 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-tjgp6" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.229849 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8450c87-7b9b-47cf-86ce-145ef517f494-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.229874 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a57b9495-9a8d-4ec8-8a4d-92220d911386-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.229885 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7j4lm\" (UniqueName: \"kubernetes.io/projected/a57b9495-9a8d-4ec8-8a4d-92220d911386-kube-api-access-7j4lm\") on node \"crc\" DevicePath \"\"" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.229900 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk2h8\" (UniqueName: \"kubernetes.io/projected/a8450c87-7b9b-47cf-86ce-145ef517f494-kube-api-access-dk2h8\") on node \"crc\" DevicePath \"\"" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.229911 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8450c87-7b9b-47cf-86ce-145ef517f494-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.229921 4681 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8450c87-7b9b-47cf-86ce-145ef517f494-client-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.229938 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a57b9495-9a8d-4ec8-8a4d-92220d911386-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.229948 4681 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/a57b9495-9a8d-4ec8-8a4d-92220d911386-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.233077 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-zmhkz" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.277106 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-8464cf66df-4xdkl" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.439618 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nk54m"] Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.444193 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nk54m"] Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.451046 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9qp5r"] Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.451073 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9qp5r"] Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.465904 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-fwp7j" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.957139 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-66dbb4d98d-9p257"] Nov 23 06:56:46 crc kubenswrapper[4681]: E1123 06:56:46.957637 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8450c87-7b9b-47cf-86ce-145ef517f494" containerName="route-controller-manager" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.957658 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8450c87-7b9b-47cf-86ce-145ef517f494" containerName="route-controller-manager" Nov 23 06:56:46 crc kubenswrapper[4681]: E1123 06:56:46.957728 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a57b9495-9a8d-4ec8-8a4d-92220d911386" containerName="controller-manager" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.957735 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="a57b9495-9a8d-4ec8-8a4d-92220d911386" containerName="controller-manager" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.957916 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="a57b9495-9a8d-4ec8-8a4d-92220d911386" containerName="controller-manager" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.957931 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8450c87-7b9b-47cf-86ce-145ef517f494" containerName="route-controller-manager" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.958708 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-66dbb4d98d-9p257" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.960708 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.960983 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.962643 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c454ff49f-489nl"] Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.963867 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c454ff49f-489nl" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.967972 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.968240 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.968680 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.968936 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.971902 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c454ff49f-489nl"] Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.972170 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.972181 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.972323 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.972863 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.974756 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.979198 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.979348 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 23 06:56:46 crc kubenswrapper[4681]: I1123 06:56:46.985045 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-66dbb4d98d-9p257"] Nov 23 06:56:47 crc kubenswrapper[4681]: I1123 06:56:47.042550 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/63910259-2b8e-4406-8189-f5812243a162-serving-cert\") pod \"route-controller-manager-7c454ff49f-489nl\" (UID: \"63910259-2b8e-4406-8189-f5812243a162\") " pod="openshift-route-controller-manager/route-controller-manager-7c454ff49f-489nl" Nov 23 06:56:47 crc kubenswrapper[4681]: I1123 06:56:47.042630 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbfebf37-8720-461b-9a92-02ad83a21b1c-proxy-ca-bundles\") pod \"controller-manager-66dbb4d98d-9p257\" (UID: \"bbfebf37-8720-461b-9a92-02ad83a21b1c\") " pod="openshift-controller-manager/controller-manager-66dbb4d98d-9p257" Nov 23 06:56:47 crc kubenswrapper[4681]: I1123 06:56:47.042674 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63910259-2b8e-4406-8189-f5812243a162-config\") pod \"route-controller-manager-7c454ff49f-489nl\" (UID: \"63910259-2b8e-4406-8189-f5812243a162\") " pod="openshift-route-controller-manager/route-controller-manager-7c454ff49f-489nl" Nov 23 06:56:47 crc kubenswrapper[4681]: I1123 06:56:47.042693 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63910259-2b8e-4406-8189-f5812243a162-client-ca\") pod \"route-controller-manager-7c454ff49f-489nl\" (UID: \"63910259-2b8e-4406-8189-f5812243a162\") " pod="openshift-route-controller-manager/route-controller-manager-7c454ff49f-489nl" Nov 23 06:56:47 crc kubenswrapper[4681]: I1123 06:56:47.042871 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24rm8\" (UniqueName: \"kubernetes.io/projected/bbfebf37-8720-461b-9a92-02ad83a21b1c-kube-api-access-24rm8\") pod \"controller-manager-66dbb4d98d-9p257\" (UID: \"bbfebf37-8720-461b-9a92-02ad83a21b1c\") " pod="openshift-controller-manager/controller-manager-66dbb4d98d-9p257" Nov 23 06:56:47 crc kubenswrapper[4681]: I1123 06:56:47.043009 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbfebf37-8720-461b-9a92-02ad83a21b1c-serving-cert\") pod \"controller-manager-66dbb4d98d-9p257\" (UID: \"bbfebf37-8720-461b-9a92-02ad83a21b1c\") " pod="openshift-controller-manager/controller-manager-66dbb4d98d-9p257" Nov 23 06:56:47 crc kubenswrapper[4681]: I1123 06:56:47.043087 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77grh\" (UniqueName: \"kubernetes.io/projected/63910259-2b8e-4406-8189-f5812243a162-kube-api-access-77grh\") pod \"route-controller-manager-7c454ff49f-489nl\" (UID: \"63910259-2b8e-4406-8189-f5812243a162\") " pod="openshift-route-controller-manager/route-controller-manager-7c454ff49f-489nl" Nov 23 06:56:47 crc kubenswrapper[4681]: I1123 06:56:47.043128 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bbfebf37-8720-461b-9a92-02ad83a21b1c-client-ca\") pod \"controller-manager-66dbb4d98d-9p257\" (UID: \"bbfebf37-8720-461b-9a92-02ad83a21b1c\") " pod="openshift-controller-manager/controller-manager-66dbb4d98d-9p257" Nov 23 06:56:47 crc kubenswrapper[4681]: I1123 06:56:47.043160 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/bbfebf37-8720-461b-9a92-02ad83a21b1c-config\") pod \"controller-manager-66dbb4d98d-9p257\" (UID: \"bbfebf37-8720-461b-9a92-02ad83a21b1c\") " pod="openshift-controller-manager/controller-manager-66dbb4d98d-9p257" Nov 23 06:56:47 crc kubenswrapper[4681]: I1123 06:56:47.144262 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbfebf37-8720-461b-9a92-02ad83a21b1c-proxy-ca-bundles\") pod \"controller-manager-66dbb4d98d-9p257\" (UID: \"bbfebf37-8720-461b-9a92-02ad83a21b1c\") " pod="openshift-controller-manager/controller-manager-66dbb4d98d-9p257" Nov 23 06:56:47 crc kubenswrapper[4681]: I1123 06:56:47.144338 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63910259-2b8e-4406-8189-f5812243a162-config\") pod \"route-controller-manager-7c454ff49f-489nl\" (UID: \"63910259-2b8e-4406-8189-f5812243a162\") " pod="openshift-route-controller-manager/route-controller-manager-7c454ff49f-489nl" Nov 23 06:56:47 crc kubenswrapper[4681]: I1123 06:56:47.144363 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63910259-2b8e-4406-8189-f5812243a162-client-ca\") pod \"route-controller-manager-7c454ff49f-489nl\" (UID: \"63910259-2b8e-4406-8189-f5812243a162\") " pod="openshift-route-controller-manager/route-controller-manager-7c454ff49f-489nl" Nov 23 06:56:47 crc kubenswrapper[4681]: I1123 06:56:47.144418 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24rm8\" (UniqueName: \"kubernetes.io/projected/bbfebf37-8720-461b-9a92-02ad83a21b1c-kube-api-access-24rm8\") pod \"controller-manager-66dbb4d98d-9p257\" (UID: \"bbfebf37-8720-461b-9a92-02ad83a21b1c\") " pod="openshift-controller-manager/controller-manager-66dbb4d98d-9p257" Nov 23 06:56:47 crc kubenswrapper[4681]: I1123 06:56:47.144500 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbfebf37-8720-461b-9a92-02ad83a21b1c-serving-cert\") pod \"controller-manager-66dbb4d98d-9p257\" (UID: \"bbfebf37-8720-461b-9a92-02ad83a21b1c\") " pod="openshift-controller-manager/controller-manager-66dbb4d98d-9p257" Nov 23 06:56:47 crc kubenswrapper[4681]: I1123 06:56:47.144552 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77grh\" (UniqueName: \"kubernetes.io/projected/63910259-2b8e-4406-8189-f5812243a162-kube-api-access-77grh\") pod \"route-controller-manager-7c454ff49f-489nl\" (UID: \"63910259-2b8e-4406-8189-f5812243a162\") " pod="openshift-route-controller-manager/route-controller-manager-7c454ff49f-489nl" Nov 23 06:56:47 crc kubenswrapper[4681]: I1123 06:56:47.144582 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bbfebf37-8720-461b-9a92-02ad83a21b1c-client-ca\") pod \"controller-manager-66dbb4d98d-9p257\" (UID: \"bbfebf37-8720-461b-9a92-02ad83a21b1c\") " pod="openshift-controller-manager/controller-manager-66dbb4d98d-9p257" Nov 23 06:56:47 crc kubenswrapper[4681]: I1123 06:56:47.144613 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbfebf37-8720-461b-9a92-02ad83a21b1c-config\") pod \"controller-manager-66dbb4d98d-9p257\" (UID: 
\"bbfebf37-8720-461b-9a92-02ad83a21b1c\") " pod="openshift-controller-manager/controller-manager-66dbb4d98d-9p257" Nov 23 06:56:47 crc kubenswrapper[4681]: I1123 06:56:47.144655 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63910259-2b8e-4406-8189-f5812243a162-serving-cert\") pod \"route-controller-manager-7c454ff49f-489nl\" (UID: \"63910259-2b8e-4406-8189-f5812243a162\") " pod="openshift-route-controller-manager/route-controller-manager-7c454ff49f-489nl" Nov 23 06:56:47 crc kubenswrapper[4681]: I1123 06:56:47.145732 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbfebf37-8720-461b-9a92-02ad83a21b1c-proxy-ca-bundles\") pod \"controller-manager-66dbb4d98d-9p257\" (UID: \"bbfebf37-8720-461b-9a92-02ad83a21b1c\") " pod="openshift-controller-manager/controller-manager-66dbb4d98d-9p257" Nov 23 06:56:47 crc kubenswrapper[4681]: I1123 06:56:47.145824 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bbfebf37-8720-461b-9a92-02ad83a21b1c-client-ca\") pod \"controller-manager-66dbb4d98d-9p257\" (UID: \"bbfebf37-8720-461b-9a92-02ad83a21b1c\") " pod="openshift-controller-manager/controller-manager-66dbb4d98d-9p257" Nov 23 06:56:47 crc kubenswrapper[4681]: I1123 06:56:47.145913 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63910259-2b8e-4406-8189-f5812243a162-config\") pod \"route-controller-manager-7c454ff49f-489nl\" (UID: \"63910259-2b8e-4406-8189-f5812243a162\") " pod="openshift-route-controller-manager/route-controller-manager-7c454ff49f-489nl" Nov 23 06:56:47 crc kubenswrapper[4681]: I1123 06:56:47.146532 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63910259-2b8e-4406-8189-f5812243a162-client-ca\") pod \"route-controller-manager-7c454ff49f-489nl\" (UID: \"63910259-2b8e-4406-8189-f5812243a162\") " pod="openshift-route-controller-manager/route-controller-manager-7c454ff49f-489nl" Nov 23 06:56:47 crc kubenswrapper[4681]: I1123 06:56:47.146808 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbfebf37-8720-461b-9a92-02ad83a21b1c-config\") pod \"controller-manager-66dbb4d98d-9p257\" (UID: \"bbfebf37-8720-461b-9a92-02ad83a21b1c\") " pod="openshift-controller-manager/controller-manager-66dbb4d98d-9p257" Nov 23 06:56:47 crc kubenswrapper[4681]: I1123 06:56:47.150426 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbfebf37-8720-461b-9a92-02ad83a21b1c-serving-cert\") pod \"controller-manager-66dbb4d98d-9p257\" (UID: \"bbfebf37-8720-461b-9a92-02ad83a21b1c\") " pod="openshift-controller-manager/controller-manager-66dbb4d98d-9p257" Nov 23 06:56:47 crc kubenswrapper[4681]: I1123 06:56:47.151402 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63910259-2b8e-4406-8189-f5812243a162-serving-cert\") pod \"route-controller-manager-7c454ff49f-489nl\" (UID: \"63910259-2b8e-4406-8189-f5812243a162\") " pod="openshift-route-controller-manager/route-controller-manager-7c454ff49f-489nl" Nov 23 06:56:47 crc kubenswrapper[4681]: I1123 06:56:47.159745 4681 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-77grh\" (UniqueName: \"kubernetes.io/projected/63910259-2b8e-4406-8189-f5812243a162-kube-api-access-77grh\") pod \"route-controller-manager-7c454ff49f-489nl\" (UID: \"63910259-2b8e-4406-8189-f5812243a162\") " pod="openshift-route-controller-manager/route-controller-manager-7c454ff49f-489nl" Nov 23 06:56:47 crc kubenswrapper[4681]: I1123 06:56:47.162670 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24rm8\" (UniqueName: \"kubernetes.io/projected/bbfebf37-8720-461b-9a92-02ad83a21b1c-kube-api-access-24rm8\") pod \"controller-manager-66dbb4d98d-9p257\" (UID: \"bbfebf37-8720-461b-9a92-02ad83a21b1c\") " pod="openshift-controller-manager/controller-manager-66dbb4d98d-9p257" Nov 23 06:56:47 crc kubenswrapper[4681]: I1123 06:56:47.260859 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a57b9495-9a8d-4ec8-8a4d-92220d911386" path="/var/lib/kubelet/pods/a57b9495-9a8d-4ec8-8a4d-92220d911386/volumes" Nov 23 06:56:47 crc kubenswrapper[4681]: I1123 06:56:47.261856 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8450c87-7b9b-47cf-86ce-145ef517f494" path="/var/lib/kubelet/pods/a8450c87-7b9b-47cf-86ce-145ef517f494/volumes" Nov 23 06:56:47 crc kubenswrapper[4681]: I1123 06:56:47.276294 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66dbb4d98d-9p257" Nov 23 06:56:47 crc kubenswrapper[4681]: I1123 06:56:47.285395 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c454ff49f-489nl" Nov 23 06:56:47 crc kubenswrapper[4681]: I1123 06:56:47.722314 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c454ff49f-489nl"] Nov 23 06:56:47 crc kubenswrapper[4681]: W1123 06:56:47.749640 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63910259_2b8e_4406_8189_f5812243a162.slice/crio-37bb9de8039e377ad3b88f761b01a26fe45ce4dac0699b8fdc82566e12bd39c0 WatchSource:0}: Error finding container 37bb9de8039e377ad3b88f761b01a26fe45ce4dac0699b8fdc82566e12bd39c0: Status 404 returned error can't find the container with id 37bb9de8039e377ad3b88f761b01a26fe45ce4dac0699b8fdc82566e12bd39c0 Nov 23 06:56:47 crc kubenswrapper[4681]: I1123 06:56:47.780151 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-66dbb4d98d-9p257"] Nov 23 06:56:48 crc kubenswrapper[4681]: I1123 06:56:48.127579 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66dbb4d98d-9p257" event={"ID":"bbfebf37-8720-461b-9a92-02ad83a21b1c","Type":"ContainerStarted","Data":"983c8f3a85e259412e230a0130ab47a3527827703d7bc094574d0a859bbeb0be"} Nov 23 06:56:48 crc kubenswrapper[4681]: I1123 06:56:48.127976 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66dbb4d98d-9p257" event={"ID":"bbfebf37-8720-461b-9a92-02ad83a21b1c","Type":"ContainerStarted","Data":"5c9784485dbcd060ba9878fbdf2e75cd26834af2fa3b3cbfc874787524763fde"} Nov 23 06:56:48 crc kubenswrapper[4681]: I1123 06:56:48.128149 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-66dbb4d98d-9p257" Nov 23 06:56:48 crc kubenswrapper[4681]: I1123 
06:56:48.131094 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c454ff49f-489nl" event={"ID":"63910259-2b8e-4406-8189-f5812243a162","Type":"ContainerStarted","Data":"b71947b66e6ce02768f3e01ec5ecd01cee51bc38e3b5a8dfe1220b0bd0c5d040"} Nov 23 06:56:48 crc kubenswrapper[4681]: I1123 06:56:48.131134 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c454ff49f-489nl" event={"ID":"63910259-2b8e-4406-8189-f5812243a162","Type":"ContainerStarted","Data":"37bb9de8039e377ad3b88f761b01a26fe45ce4dac0699b8fdc82566e12bd39c0"} Nov 23 06:56:48 crc kubenswrapper[4681]: I1123 06:56:48.131406 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7c454ff49f-489nl" Nov 23 06:56:48 crc kubenswrapper[4681]: I1123 06:56:48.138003 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-66dbb4d98d-9p257" Nov 23 06:56:48 crc kubenswrapper[4681]: I1123 06:56:48.191998 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7c454ff49f-489nl" podStartSLOduration=2.191977612 podStartE2EDuration="2.191977612s" podCreationTimestamp="2025-11-23 06:56:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:56:48.190487925 +0000 UTC m=+745.259997162" watchObservedRunningTime="2025-11-23 06:56:48.191977612 +0000 UTC m=+745.261486849" Nov 23 06:56:48 crc kubenswrapper[4681]: I1123 06:56:48.198160 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-66dbb4d98d-9p257" podStartSLOduration=3.198145725 podStartE2EDuration="3.198145725s" podCreationTimestamp="2025-11-23 06:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:56:48.159152248 +0000 UTC m=+745.228661485" watchObservedRunningTime="2025-11-23 06:56:48.198145725 +0000 UTC m=+745.267654962" Nov 23 06:56:48 crc kubenswrapper[4681]: I1123 06:56:48.372568 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7c454ff49f-489nl" Nov 23 06:56:52 crc kubenswrapper[4681]: I1123 06:56:52.634810 4681 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 23 06:56:59 crc kubenswrapper[4681]: I1123 06:56:59.103417 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6794664cc7-668tk"] Nov 23 06:56:59 crc kubenswrapper[4681]: I1123 06:56:59.105237 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6794664cc7-668tk" Nov 23 06:56:59 crc kubenswrapper[4681]: I1123 06:56:59.113281 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 23 06:56:59 crc kubenswrapper[4681]: I1123 06:56:59.116938 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6794664cc7-668tk"] Nov 23 06:56:59 crc kubenswrapper[4681]: I1123 06:56:59.118110 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 23 06:56:59 crc kubenswrapper[4681]: I1123 06:56:59.122068 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-fpqdc" Nov 23 06:56:59 crc kubenswrapper[4681]: I1123 06:56:59.122135 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 23 06:56:59 crc kubenswrapper[4681]: I1123 06:56:59.145772 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf2qw\" (UniqueName: \"kubernetes.io/projected/df2f7070-559d-4a37-b85e-7596aff7007d-kube-api-access-zf2qw\") pod \"dnsmasq-dns-6794664cc7-668tk\" (UID: \"df2f7070-559d-4a37-b85e-7596aff7007d\") " pod="openstack/dnsmasq-dns-6794664cc7-668tk" Nov 23 06:56:59 crc kubenswrapper[4681]: I1123 06:56:59.145835 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df2f7070-559d-4a37-b85e-7596aff7007d-config\") pod \"dnsmasq-dns-6794664cc7-668tk\" (UID: \"df2f7070-559d-4a37-b85e-7596aff7007d\") " pod="openstack/dnsmasq-dns-6794664cc7-668tk" Nov 23 06:56:59 crc kubenswrapper[4681]: I1123 06:56:59.227708 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84bd59c769-6gc7p"] Nov 23 06:56:59 crc kubenswrapper[4681]: I1123 06:56:59.229406 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bd59c769-6gc7p" Nov 23 06:56:59 crc kubenswrapper[4681]: I1123 06:56:59.236114 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 23 06:56:59 crc kubenswrapper[4681]: I1123 06:56:59.240756 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bd59c769-6gc7p"] Nov 23 06:56:59 crc kubenswrapper[4681]: I1123 06:56:59.246992 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8428f96f-f79d-4907-a7f2-b1a16505637d-dns-svc\") pod \"dnsmasq-dns-84bd59c769-6gc7p\" (UID: \"8428f96f-f79d-4907-a7f2-b1a16505637d\") " pod="openstack/dnsmasq-dns-84bd59c769-6gc7p" Nov 23 06:56:59 crc kubenswrapper[4681]: I1123 06:56:59.247078 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf2qw\" (UniqueName: \"kubernetes.io/projected/df2f7070-559d-4a37-b85e-7596aff7007d-kube-api-access-zf2qw\") pod \"dnsmasq-dns-6794664cc7-668tk\" (UID: \"df2f7070-559d-4a37-b85e-7596aff7007d\") " pod="openstack/dnsmasq-dns-6794664cc7-668tk" Nov 23 06:56:59 crc kubenswrapper[4681]: I1123 06:56:59.247104 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8428f96f-f79d-4907-a7f2-b1a16505637d-config\") pod \"dnsmasq-dns-84bd59c769-6gc7p\" (UID: \"8428f96f-f79d-4907-a7f2-b1a16505637d\") " pod="openstack/dnsmasq-dns-84bd59c769-6gc7p" Nov 23 06:56:59 crc kubenswrapper[4681]: I1123 06:56:59.247136 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df2f7070-559d-4a37-b85e-7596aff7007d-config\") pod \"dnsmasq-dns-6794664cc7-668tk\" (UID: \"df2f7070-559d-4a37-b85e-7596aff7007d\") " pod="openstack/dnsmasq-dns-6794664cc7-668tk" Nov 23 06:56:59 crc kubenswrapper[4681]: I1123 06:56:59.247181 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvwzr\" (UniqueName: \"kubernetes.io/projected/8428f96f-f79d-4907-a7f2-b1a16505637d-kube-api-access-vvwzr\") pod \"dnsmasq-dns-84bd59c769-6gc7p\" (UID: \"8428f96f-f79d-4907-a7f2-b1a16505637d\") " pod="openstack/dnsmasq-dns-84bd59c769-6gc7p" Nov 23 06:56:59 crc kubenswrapper[4681]: I1123 06:56:59.248382 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df2f7070-559d-4a37-b85e-7596aff7007d-config\") pod \"dnsmasq-dns-6794664cc7-668tk\" (UID: \"df2f7070-559d-4a37-b85e-7596aff7007d\") " pod="openstack/dnsmasq-dns-6794664cc7-668tk" Nov 23 06:56:59 crc kubenswrapper[4681]: I1123 06:56:59.280334 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf2qw\" (UniqueName: \"kubernetes.io/projected/df2f7070-559d-4a37-b85e-7596aff7007d-kube-api-access-zf2qw\") pod \"dnsmasq-dns-6794664cc7-668tk\" (UID: \"df2f7070-559d-4a37-b85e-7596aff7007d\") " pod="openstack/dnsmasq-dns-6794664cc7-668tk" Nov 23 06:56:59 crc kubenswrapper[4681]: I1123 06:56:59.348728 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8428f96f-f79d-4907-a7f2-b1a16505637d-config\") pod \"dnsmasq-dns-84bd59c769-6gc7p\" (UID: \"8428f96f-f79d-4907-a7f2-b1a16505637d\") " pod="openstack/dnsmasq-dns-84bd59c769-6gc7p" Nov 23 06:56:59 crc kubenswrapper[4681]: I1123 
06:56:59.349490 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvwzr\" (UniqueName: \"kubernetes.io/projected/8428f96f-f79d-4907-a7f2-b1a16505637d-kube-api-access-vvwzr\") pod \"dnsmasq-dns-84bd59c769-6gc7p\" (UID: \"8428f96f-f79d-4907-a7f2-b1a16505637d\") " pod="openstack/dnsmasq-dns-84bd59c769-6gc7p" Nov 23 06:56:59 crc kubenswrapper[4681]: I1123 06:56:59.349586 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8428f96f-f79d-4907-a7f2-b1a16505637d-dns-svc\") pod \"dnsmasq-dns-84bd59c769-6gc7p\" (UID: \"8428f96f-f79d-4907-a7f2-b1a16505637d\") " pod="openstack/dnsmasq-dns-84bd59c769-6gc7p" Nov 23 06:56:59 crc kubenswrapper[4681]: I1123 06:56:59.349926 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8428f96f-f79d-4907-a7f2-b1a16505637d-config\") pod \"dnsmasq-dns-84bd59c769-6gc7p\" (UID: \"8428f96f-f79d-4907-a7f2-b1a16505637d\") " pod="openstack/dnsmasq-dns-84bd59c769-6gc7p" Nov 23 06:56:59 crc kubenswrapper[4681]: I1123 06:56:59.350473 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8428f96f-f79d-4907-a7f2-b1a16505637d-dns-svc\") pod \"dnsmasq-dns-84bd59c769-6gc7p\" (UID: \"8428f96f-f79d-4907-a7f2-b1a16505637d\") " pod="openstack/dnsmasq-dns-84bd59c769-6gc7p" Nov 23 06:56:59 crc kubenswrapper[4681]: I1123 06:56:59.365222 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvwzr\" (UniqueName: \"kubernetes.io/projected/8428f96f-f79d-4907-a7f2-b1a16505637d-kube-api-access-vvwzr\") pod \"dnsmasq-dns-84bd59c769-6gc7p\" (UID: \"8428f96f-f79d-4907-a7f2-b1a16505637d\") " pod="openstack/dnsmasq-dns-84bd59c769-6gc7p" Nov 23 06:56:59 crc kubenswrapper[4681]: I1123 06:56:59.422973 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6794664cc7-668tk" Nov 23 06:56:59 crc kubenswrapper[4681]: I1123 06:56:59.557329 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bd59c769-6gc7p" Nov 23 06:56:59 crc kubenswrapper[4681]: I1123 06:56:59.861203 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6794664cc7-668tk"] Nov 23 06:56:59 crc kubenswrapper[4681]: I1123 06:56:59.998012 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bd59c769-6gc7p"] Nov 23 06:57:00 crc kubenswrapper[4681]: W1123 06:57:00.004690 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8428f96f_f79d_4907_a7f2_b1a16505637d.slice/crio-0842f139ab04e2a503879df0e1c4333b5abebc35a83bd99f3721e1de4a0d4670 WatchSource:0}: Error finding container 0842f139ab04e2a503879df0e1c4333b5abebc35a83bd99f3721e1de4a0d4670: Status 404 returned error can't find the container with id 0842f139ab04e2a503879df0e1c4333b5abebc35a83bd99f3721e1de4a0d4670 Nov 23 06:57:00 crc kubenswrapper[4681]: I1123 06:57:00.232110 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bd59c769-6gc7p" event={"ID":"8428f96f-f79d-4907-a7f2-b1a16505637d","Type":"ContainerStarted","Data":"0842f139ab04e2a503879df0e1c4333b5abebc35a83bd99f3721e1de4a0d4670"} Nov 23 06:57:00 crc kubenswrapper[4681]: I1123 06:57:00.234777 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6794664cc7-668tk" event={"ID":"df2f7070-559d-4a37-b85e-7596aff7007d","Type":"ContainerStarted","Data":"ab4100a6407044bab866504dcd0c9660527240d52e3768098e34ebda8d307ce0"} Nov 23 06:57:02 crc kubenswrapper[4681]: I1123 06:57:02.153512 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6794664cc7-668tk"] Nov 23 06:57:02 crc kubenswrapper[4681]: I1123 06:57:02.186678 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f47fdfb89-9n662"] Nov 23 06:57:02 crc kubenswrapper[4681]: I1123 06:57:02.187890 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f47fdfb89-9n662" Nov 23 06:57:02 crc kubenswrapper[4681]: I1123 06:57:02.214269 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f47fdfb89-9n662"] Nov 23 06:57:02 crc kubenswrapper[4681]: I1123 06:57:02.316296 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc08105b-c173-411b-973a-02b4d771b928-config\") pod \"dnsmasq-dns-7f47fdfb89-9n662\" (UID: \"fc08105b-c173-411b-973a-02b4d771b928\") " pod="openstack/dnsmasq-dns-7f47fdfb89-9n662" Nov 23 06:57:02 crc kubenswrapper[4681]: I1123 06:57:02.316403 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mbvs\" (UniqueName: \"kubernetes.io/projected/fc08105b-c173-411b-973a-02b4d771b928-kube-api-access-6mbvs\") pod \"dnsmasq-dns-7f47fdfb89-9n662\" (UID: \"fc08105b-c173-411b-973a-02b4d771b928\") " pod="openstack/dnsmasq-dns-7f47fdfb89-9n662" Nov 23 06:57:02 crc kubenswrapper[4681]: I1123 06:57:02.316631 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc08105b-c173-411b-973a-02b4d771b928-dns-svc\") pod \"dnsmasq-dns-7f47fdfb89-9n662\" (UID: \"fc08105b-c173-411b-973a-02b4d771b928\") " pod="openstack/dnsmasq-dns-7f47fdfb89-9n662" Nov 23 06:57:02 crc kubenswrapper[4681]: I1123 06:57:02.422818 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc08105b-c173-411b-973a-02b4d771b928-config\") pod \"dnsmasq-dns-7f47fdfb89-9n662\" (UID: \"fc08105b-c173-411b-973a-02b4d771b928\") " pod="openstack/dnsmasq-dns-7f47fdfb89-9n662" Nov 23 06:57:02 crc kubenswrapper[4681]: I1123 06:57:02.422871 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mbvs\" (UniqueName: \"kubernetes.io/projected/fc08105b-c173-411b-973a-02b4d771b928-kube-api-access-6mbvs\") pod \"dnsmasq-dns-7f47fdfb89-9n662\" (UID: \"fc08105b-c173-411b-973a-02b4d771b928\") " pod="openstack/dnsmasq-dns-7f47fdfb89-9n662" Nov 23 06:57:02 crc kubenswrapper[4681]: I1123 06:57:02.422929 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc08105b-c173-411b-973a-02b4d771b928-dns-svc\") pod \"dnsmasq-dns-7f47fdfb89-9n662\" (UID: \"fc08105b-c173-411b-973a-02b4d771b928\") " pod="openstack/dnsmasq-dns-7f47fdfb89-9n662" Nov 23 06:57:02 crc kubenswrapper[4681]: I1123 06:57:02.426969 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc08105b-c173-411b-973a-02b4d771b928-config\") pod \"dnsmasq-dns-7f47fdfb89-9n662\" (UID: \"fc08105b-c173-411b-973a-02b4d771b928\") " pod="openstack/dnsmasq-dns-7f47fdfb89-9n662" Nov 23 06:57:02 crc kubenswrapper[4681]: I1123 06:57:02.427361 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc08105b-c173-411b-973a-02b4d771b928-dns-svc\") pod \"dnsmasq-dns-7f47fdfb89-9n662\" (UID: \"fc08105b-c173-411b-973a-02b4d771b928\") " pod="openstack/dnsmasq-dns-7f47fdfb89-9n662" Nov 23 06:57:02 crc kubenswrapper[4681]: I1123 06:57:02.458241 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mbvs\" (UniqueName: 
\"kubernetes.io/projected/fc08105b-c173-411b-973a-02b4d771b928-kube-api-access-6mbvs\") pod \"dnsmasq-dns-7f47fdfb89-9n662\" (UID: \"fc08105b-c173-411b-973a-02b4d771b928\") " pod="openstack/dnsmasq-dns-7f47fdfb89-9n662" Nov 23 06:57:02 crc kubenswrapper[4681]: I1123 06:57:02.515309 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bd59c769-6gc7p"] Nov 23 06:57:02 crc kubenswrapper[4681]: I1123 06:57:02.517939 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f47fdfb89-9n662" Nov 23 06:57:02 crc kubenswrapper[4681]: I1123 06:57:02.553953 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-759c6cc4df-dzqg6"] Nov 23 06:57:02 crc kubenswrapper[4681]: I1123 06:57:02.555137 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-759c6cc4df-dzqg6" Nov 23 06:57:02 crc kubenswrapper[4681]: I1123 06:57:02.611501 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-759c6cc4df-dzqg6"] Nov 23 06:57:02 crc kubenswrapper[4681]: I1123 06:57:02.733974 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v94sg\" (UniqueName: \"kubernetes.io/projected/7cbcfef5-7505-4087-9c1a-330a353ffdef-kube-api-access-v94sg\") pod \"dnsmasq-dns-759c6cc4df-dzqg6\" (UID: \"7cbcfef5-7505-4087-9c1a-330a353ffdef\") " pod="openstack/dnsmasq-dns-759c6cc4df-dzqg6" Nov 23 06:57:02 crc kubenswrapper[4681]: I1123 06:57:02.734066 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7cbcfef5-7505-4087-9c1a-330a353ffdef-dns-svc\") pod \"dnsmasq-dns-759c6cc4df-dzqg6\" (UID: \"7cbcfef5-7505-4087-9c1a-330a353ffdef\") " pod="openstack/dnsmasq-dns-759c6cc4df-dzqg6" Nov 23 06:57:02 crc kubenswrapper[4681]: I1123 06:57:02.734094 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cbcfef5-7505-4087-9c1a-330a353ffdef-config\") pod \"dnsmasq-dns-759c6cc4df-dzqg6\" (UID: \"7cbcfef5-7505-4087-9c1a-330a353ffdef\") " pod="openstack/dnsmasq-dns-759c6cc4df-dzqg6" Nov 23 06:57:02 crc kubenswrapper[4681]: I1123 06:57:02.835013 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v94sg\" (UniqueName: \"kubernetes.io/projected/7cbcfef5-7505-4087-9c1a-330a353ffdef-kube-api-access-v94sg\") pod \"dnsmasq-dns-759c6cc4df-dzqg6\" (UID: \"7cbcfef5-7505-4087-9c1a-330a353ffdef\") " pod="openstack/dnsmasq-dns-759c6cc4df-dzqg6" Nov 23 06:57:02 crc kubenswrapper[4681]: I1123 06:57:02.835070 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7cbcfef5-7505-4087-9c1a-330a353ffdef-dns-svc\") pod \"dnsmasq-dns-759c6cc4df-dzqg6\" (UID: \"7cbcfef5-7505-4087-9c1a-330a353ffdef\") " pod="openstack/dnsmasq-dns-759c6cc4df-dzqg6" Nov 23 06:57:02 crc kubenswrapper[4681]: I1123 06:57:02.835092 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cbcfef5-7505-4087-9c1a-330a353ffdef-config\") pod \"dnsmasq-dns-759c6cc4df-dzqg6\" (UID: \"7cbcfef5-7505-4087-9c1a-330a353ffdef\") " pod="openstack/dnsmasq-dns-759c6cc4df-dzqg6" Nov 23 06:57:02 crc kubenswrapper[4681]: I1123 06:57:02.835876 4681 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cbcfef5-7505-4087-9c1a-330a353ffdef-config\") pod \"dnsmasq-dns-759c6cc4df-dzqg6\" (UID: \"7cbcfef5-7505-4087-9c1a-330a353ffdef\") " pod="openstack/dnsmasq-dns-759c6cc4df-dzqg6" Nov 23 06:57:02 crc kubenswrapper[4681]: I1123 06:57:02.836611 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7cbcfef5-7505-4087-9c1a-330a353ffdef-dns-svc\") pod \"dnsmasq-dns-759c6cc4df-dzqg6\" (UID: \"7cbcfef5-7505-4087-9c1a-330a353ffdef\") " pod="openstack/dnsmasq-dns-759c6cc4df-dzqg6" Nov 23 06:57:02 crc kubenswrapper[4681]: I1123 06:57:02.857219 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v94sg\" (UniqueName: \"kubernetes.io/projected/7cbcfef5-7505-4087-9c1a-330a353ffdef-kube-api-access-v94sg\") pod \"dnsmasq-dns-759c6cc4df-dzqg6\" (UID: \"7cbcfef5-7505-4087-9c1a-330a353ffdef\") " pod="openstack/dnsmasq-dns-759c6cc4df-dzqg6" Nov 23 06:57:02 crc kubenswrapper[4681]: I1123 06:57:02.879688 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-759c6cc4df-dzqg6" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.099717 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f47fdfb89-9n662"] Nov 23 06:57:03 crc kubenswrapper[4681]: W1123 06:57:03.117506 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc08105b_c173_411b_973a_02b4d771b928.slice/crio-6f0fed82cc0e9e3bf19ba22fc6adbafb7b0fd4765bd40b53371241a03c15a3bf WatchSource:0}: Error finding container 6f0fed82cc0e9e3bf19ba22fc6adbafb7b0fd4765bd40b53371241a03c15a3bf: Status 404 returned error can't find the container with id 6f0fed82cc0e9e3bf19ba22fc6adbafb7b0fd4765bd40b53371241a03c15a3bf Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.303577 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-759c6cc4df-dzqg6"] Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.308291 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f47fdfb89-9n662" event={"ID":"fc08105b-c173-411b-973a-02b4d771b928","Type":"ContainerStarted","Data":"6f0fed82cc0e9e3bf19ba22fc6adbafb7b0fd4765bd40b53371241a03c15a3bf"} Nov 23 06:57:03 crc kubenswrapper[4681]: W1123 06:57:03.309439 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cbcfef5_7505_4087_9c1a_330a353ffdef.slice/crio-36af7428afc1efbedda463e5ce672277e9ed17fa4319448ec4958255db307937 WatchSource:0}: Error finding container 36af7428afc1efbedda463e5ce672277e9ed17fa4319448ec4958255db307937: Status 404 returned error can't find the container with id 36af7428afc1efbedda463e5ce672277e9ed17fa4319448ec4958255db307937 Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.359245 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.361436 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.363392 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.363613 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.364759 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-n52w4" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.367613 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.367774 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.367818 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.367586 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.377073 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.547453 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " pod="openstack/rabbitmq-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.547553 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " pod="openstack/rabbitmq-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.547751 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " pod="openstack/rabbitmq-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.547895 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " pod="openstack/rabbitmq-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.547975 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " pod="openstack/rabbitmq-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.548020 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " pod="openstack/rabbitmq-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.548160 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " pod="openstack/rabbitmq-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.548248 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh2bt\" (UniqueName: \"kubernetes.io/projected/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-kube-api-access-dh2bt\") pod \"rabbitmq-server-0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " pod="openstack/rabbitmq-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.548286 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-config-data\") pod \"rabbitmq-server-0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " pod="openstack/rabbitmq-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.548384 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " pod="openstack/rabbitmq-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.548570 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " pod="openstack/rabbitmq-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.650211 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " pod="openstack/rabbitmq-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.650306 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " pod="openstack/rabbitmq-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.650350 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " pod="openstack/rabbitmq-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.650382 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " 
pod="openstack/rabbitmq-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.650422 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " pod="openstack/rabbitmq-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.650476 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh2bt\" (UniqueName: \"kubernetes.io/projected/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-kube-api-access-dh2bt\") pod \"rabbitmq-server-0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " pod="openstack/rabbitmq-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.650500 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-config-data\") pod \"rabbitmq-server-0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " pod="openstack/rabbitmq-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.650527 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " pod="openstack/rabbitmq-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.650619 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " pod="openstack/rabbitmq-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.650730 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " pod="openstack/rabbitmq-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.650764 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " pod="openstack/rabbitmq-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.650836 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.651242 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " pod="openstack/rabbitmq-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.651784 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " pod="openstack/rabbitmq-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.652147 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-config-data\") pod \"rabbitmq-server-0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " pod="openstack/rabbitmq-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.653652 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " pod="openstack/rabbitmq-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.653948 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " pod="openstack/rabbitmq-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.663738 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " pod="openstack/rabbitmq-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.670007 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " pod="openstack/rabbitmq-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.670767 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " pod="openstack/rabbitmq-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.679586 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh2bt\" (UniqueName: \"kubernetes.io/projected/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-kube-api-access-dh2bt\") pod \"rabbitmq-server-0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " pod="openstack/rabbitmq-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.683024 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " pod="openstack/rabbitmq-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.731248 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " pod="openstack/rabbitmq-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.733817 4681 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.737348 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.739016 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.742269 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.742552 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.742703 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.742827 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.742996 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.743116 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-xqkwc" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.750209 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.861083 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6e2ff794-284c-406f-a815-9efec112c044-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.861136 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6e2ff794-284c-406f-a815-9efec112c044-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.861158 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24tjj\" (UniqueName: \"kubernetes.io/projected/6e2ff794-284c-406f-a815-9efec112c044-kube-api-access-24tjj\") pod \"rabbitmq-cell1-server-0\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.862023 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.862137 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6e2ff794-284c-406f-a815-9efec112c044-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.862181 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6e2ff794-284c-406f-a815-9efec112c044-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.862266 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6e2ff794-284c-406f-a815-9efec112c044-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.862525 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6e2ff794-284c-406f-a815-9efec112c044-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.862607 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6e2ff794-284c-406f-a815-9efec112c044-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.862656 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6e2ff794-284c-406f-a815-9efec112c044-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.862684 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6e2ff794-284c-406f-a815-9efec112c044-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.964331 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.964406 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6e2ff794-284c-406f-a815-9efec112c044-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.964474 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6e2ff794-284c-406f-a815-9efec112c044-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:57:03 
crc kubenswrapper[4681]: I1123 06:57:03.964510 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6e2ff794-284c-406f-a815-9efec112c044-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.964550 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6e2ff794-284c-406f-a815-9efec112c044-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.964578 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6e2ff794-284c-406f-a815-9efec112c044-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.964605 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6e2ff794-284c-406f-a815-9efec112c044-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.964626 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6e2ff794-284c-406f-a815-9efec112c044-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.964667 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6e2ff794-284c-406f-a815-9efec112c044-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.964686 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6e2ff794-284c-406f-a815-9efec112c044-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.964709 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24tjj\" (UniqueName: \"kubernetes.io/projected/6e2ff794-284c-406f-a815-9efec112c044-kube-api-access-24tjj\") pod \"rabbitmq-cell1-server-0\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.965068 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.965634 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6e2ff794-284c-406f-a815-9efec112c044-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.966237 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6e2ff794-284c-406f-a815-9efec112c044-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.966988 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6e2ff794-284c-406f-a815-9efec112c044-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.971710 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6e2ff794-284c-406f-a815-9efec112c044-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.972002 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6e2ff794-284c-406f-a815-9efec112c044-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.972006 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6e2ff794-284c-406f-a815-9efec112c044-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.974302 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6e2ff794-284c-406f-a815-9efec112c044-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.977070 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6e2ff794-284c-406f-a815-9efec112c044-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.977383 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6e2ff794-284c-406f-a815-9efec112c044-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:57:03 crc kubenswrapper[4681]: I1123 06:57:03.982109 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24tjj\" (UniqueName: \"kubernetes.io/projected/6e2ff794-284c-406f-a815-9efec112c044-kube-api-access-24tjj\") pod \"rabbitmq-cell1-server-0\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:57:04 crc kubenswrapper[4681]: I1123 06:57:04.000834 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:57:04 crc kubenswrapper[4681]: I1123 06:57:04.012015 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 23 06:57:04 crc kubenswrapper[4681]: I1123 06:57:04.087395 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:57:04 crc kubenswrapper[4681]: I1123 06:57:04.338474 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-759c6cc4df-dzqg6" event={"ID":"7cbcfef5-7505-4087-9c1a-330a353ffdef","Type":"ContainerStarted","Data":"36af7428afc1efbedda463e5ce672277e9ed17fa4319448ec4958255db307937"} Nov 23 06:57:04 crc kubenswrapper[4681]: I1123 06:57:04.525851 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 06:57:04 crc kubenswrapper[4681]: W1123 06:57:04.559671 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e93be3c_dcb6_4105_868c_645d5c8c7bd0.slice/crio-0c309c3a2fb20d9e6cd6dfa22c3e9f499bf0c80807912b6c62c79c16fb7e09b6 WatchSource:0}: Error finding container 0c309c3a2fb20d9e6cd6dfa22c3e9f499bf0c80807912b6c62c79c16fb7e09b6: Status 404 returned error can't find the container with id 0c309c3a2fb20d9e6cd6dfa22c3e9f499bf0c80807912b6c62c79c16fb7e09b6 Nov 23 06:57:04 crc kubenswrapper[4681]: I1123 06:57:04.700371 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 06:57:04 crc kubenswrapper[4681]: I1123 06:57:04.952371 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 23 06:57:04 crc kubenswrapper[4681]: I1123 06:57:04.954154 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 23 06:57:04 crc kubenswrapper[4681]: I1123 06:57:04.965201 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 23 06:57:04 crc kubenswrapper[4681]: I1123 06:57:04.965624 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 23 06:57:04 crc kubenswrapper[4681]: I1123 06:57:04.965632 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 23 06:57:04 crc kubenswrapper[4681]: I1123 06:57:04.965828 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-v2ddv" Nov 23 06:57:04 crc kubenswrapper[4681]: I1123 06:57:04.969551 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 23 06:57:04 crc kubenswrapper[4681]: I1123 06:57:04.972918 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 23 06:57:05 crc kubenswrapper[4681]: I1123 06:57:05.085989 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"060e8340-b39a-4aec-9d9a-e6b8dc616c8b\") " pod="openstack/openstack-galera-0" Nov 23 06:57:05 crc kubenswrapper[4681]: I1123 06:57:05.086054 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99v8g\" (UniqueName: \"kubernetes.io/projected/060e8340-b39a-4aec-9d9a-e6b8dc616c8b-kube-api-access-99v8g\") pod \"openstack-galera-0\" (UID: \"060e8340-b39a-4aec-9d9a-e6b8dc616c8b\") " pod="openstack/openstack-galera-0" Nov 23 06:57:05 crc kubenswrapper[4681]: I1123 06:57:05.086089 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/060e8340-b39a-4aec-9d9a-e6b8dc616c8b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"060e8340-b39a-4aec-9d9a-e6b8dc616c8b\") " pod="openstack/openstack-galera-0" Nov 23 06:57:05 crc kubenswrapper[4681]: I1123 06:57:05.086125 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/060e8340-b39a-4aec-9d9a-e6b8dc616c8b-config-data-default\") pod \"openstack-galera-0\" (UID: \"060e8340-b39a-4aec-9d9a-e6b8dc616c8b\") " pod="openstack/openstack-galera-0" Nov 23 06:57:05 crc kubenswrapper[4681]: I1123 06:57:05.086402 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/060e8340-b39a-4aec-9d9a-e6b8dc616c8b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"060e8340-b39a-4aec-9d9a-e6b8dc616c8b\") " pod="openstack/openstack-galera-0" Nov 23 06:57:05 crc kubenswrapper[4681]: I1123 06:57:05.086738 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/060e8340-b39a-4aec-9d9a-e6b8dc616c8b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"060e8340-b39a-4aec-9d9a-e6b8dc616c8b\") " pod="openstack/openstack-galera-0" Nov 23 06:57:05 crc kubenswrapper[4681]: I1123 06:57:05.089692 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/060e8340-b39a-4aec-9d9a-e6b8dc616c8b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"060e8340-b39a-4aec-9d9a-e6b8dc616c8b\") " pod="openstack/openstack-galera-0" Nov 23 06:57:05 crc kubenswrapper[4681]: I1123 06:57:05.089757 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/060e8340-b39a-4aec-9d9a-e6b8dc616c8b-kolla-config\") pod \"openstack-galera-0\" (UID: \"060e8340-b39a-4aec-9d9a-e6b8dc616c8b\") " pod="openstack/openstack-galera-0" Nov 23 06:57:05 crc kubenswrapper[4681]: I1123 06:57:05.196832 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/060e8340-b39a-4aec-9d9a-e6b8dc616c8b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"060e8340-b39a-4aec-9d9a-e6b8dc616c8b\") " pod="openstack/openstack-galera-0" Nov 23 06:57:05 crc kubenswrapper[4681]: I1123 06:57:05.196889 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/060e8340-b39a-4aec-9d9a-e6b8dc616c8b-kolla-config\") pod \"openstack-galera-0\" (UID: \"060e8340-b39a-4aec-9d9a-e6b8dc616c8b\") " pod="openstack/openstack-galera-0" Nov 23 06:57:05 crc kubenswrapper[4681]: I1123 06:57:05.197008 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"060e8340-b39a-4aec-9d9a-e6b8dc616c8b\") " pod="openstack/openstack-galera-0" Nov 23 06:57:05 crc kubenswrapper[4681]: I1123 06:57:05.197066 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99v8g\" (UniqueName: \"kubernetes.io/projected/060e8340-b39a-4aec-9d9a-e6b8dc616c8b-kube-api-access-99v8g\") pod \"openstack-galera-0\" (UID: \"060e8340-b39a-4aec-9d9a-e6b8dc616c8b\") " pod="openstack/openstack-galera-0" Nov 23 06:57:05 crc kubenswrapper[4681]: I1123 06:57:05.197103 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/060e8340-b39a-4aec-9d9a-e6b8dc616c8b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"060e8340-b39a-4aec-9d9a-e6b8dc616c8b\") " pod="openstack/openstack-galera-0" Nov 23 06:57:05 crc kubenswrapper[4681]: I1123 06:57:05.197152 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/060e8340-b39a-4aec-9d9a-e6b8dc616c8b-config-data-default\") pod \"openstack-galera-0\" (UID: \"060e8340-b39a-4aec-9d9a-e6b8dc616c8b\") " pod="openstack/openstack-galera-0" Nov 23 06:57:05 crc kubenswrapper[4681]: I1123 06:57:05.197177 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/060e8340-b39a-4aec-9d9a-e6b8dc616c8b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"060e8340-b39a-4aec-9d9a-e6b8dc616c8b\") " pod="openstack/openstack-galera-0" Nov 23 06:57:05 crc kubenswrapper[4681]: I1123 06:57:05.197194 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/060e8340-b39a-4aec-9d9a-e6b8dc616c8b-operator-scripts\") pod \"openstack-galera-0\" (UID: 
\"060e8340-b39a-4aec-9d9a-e6b8dc616c8b\") " pod="openstack/openstack-galera-0" Nov 23 06:57:05 crc kubenswrapper[4681]: I1123 06:57:05.198187 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/060e8340-b39a-4aec-9d9a-e6b8dc616c8b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"060e8340-b39a-4aec-9d9a-e6b8dc616c8b\") " pod="openstack/openstack-galera-0" Nov 23 06:57:05 crc kubenswrapper[4681]: I1123 06:57:05.198689 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/060e8340-b39a-4aec-9d9a-e6b8dc616c8b-kolla-config\") pod \"openstack-galera-0\" (UID: \"060e8340-b39a-4aec-9d9a-e6b8dc616c8b\") " pod="openstack/openstack-galera-0" Nov 23 06:57:05 crc kubenswrapper[4681]: I1123 06:57:05.199212 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/060e8340-b39a-4aec-9d9a-e6b8dc616c8b-config-data-default\") pod \"openstack-galera-0\" (UID: \"060e8340-b39a-4aec-9d9a-e6b8dc616c8b\") " pod="openstack/openstack-galera-0" Nov 23 06:57:05 crc kubenswrapper[4681]: I1123 06:57:05.200788 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"060e8340-b39a-4aec-9d9a-e6b8dc616c8b\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-galera-0" Nov 23 06:57:05 crc kubenswrapper[4681]: I1123 06:57:05.206120 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/060e8340-b39a-4aec-9d9a-e6b8dc616c8b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"060e8340-b39a-4aec-9d9a-e6b8dc616c8b\") " pod="openstack/openstack-galera-0" Nov 23 06:57:05 crc kubenswrapper[4681]: I1123 06:57:05.213178 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/060e8340-b39a-4aec-9d9a-e6b8dc616c8b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"060e8340-b39a-4aec-9d9a-e6b8dc616c8b\") " pod="openstack/openstack-galera-0" Nov 23 06:57:05 crc kubenswrapper[4681]: I1123 06:57:05.228014 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99v8g\" (UniqueName: \"kubernetes.io/projected/060e8340-b39a-4aec-9d9a-e6b8dc616c8b-kube-api-access-99v8g\") pod \"openstack-galera-0\" (UID: \"060e8340-b39a-4aec-9d9a-e6b8dc616c8b\") " pod="openstack/openstack-galera-0" Nov 23 06:57:05 crc kubenswrapper[4681]: I1123 06:57:05.256639 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/060e8340-b39a-4aec-9d9a-e6b8dc616c8b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"060e8340-b39a-4aec-9d9a-e6b8dc616c8b\") " pod="openstack/openstack-galera-0" Nov 23 06:57:05 crc kubenswrapper[4681]: I1123 06:57:05.269900 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"060e8340-b39a-4aec-9d9a-e6b8dc616c8b\") " pod="openstack/openstack-galera-0" Nov 23 06:57:05 crc kubenswrapper[4681]: I1123 06:57:05.282383 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 23 06:57:05 crc kubenswrapper[4681]: I1123 06:57:05.421996 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7e93be3c-dcb6-4105-868c-645d5c8c7bd0","Type":"ContainerStarted","Data":"0c309c3a2fb20d9e6cd6dfa22c3e9f499bf0c80807912b6c62c79c16fb7e09b6"} Nov 23 06:57:05 crc kubenswrapper[4681]: I1123 06:57:05.429977 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6e2ff794-284c-406f-a815-9efec112c044","Type":"ContainerStarted","Data":"19a56aa27c4b87fe8c83bd0ac06d5484b097b4e704d9fab5a438039e97e589c2"} Nov 23 06:57:05 crc kubenswrapper[4681]: I1123 06:57:05.839962 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.307593 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.309923 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.313963 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.314243 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-hv6m4" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.314379 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.314607 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.317696 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.437612 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82b692a2-d830-4d67-8f4f-412ea64732f0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"82b692a2-d830-4d67-8f4f-412ea64732f0\") " pod="openstack/openstack-cell1-galera-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.437659 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/82b692a2-d830-4d67-8f4f-412ea64732f0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"82b692a2-d830-4d67-8f4f-412ea64732f0\") " pod="openstack/openstack-cell1-galera-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.437719 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"82b692a2-d830-4d67-8f4f-412ea64732f0\") " pod="openstack/openstack-cell1-galera-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.437816 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf7bt\" (UniqueName: \"kubernetes.io/projected/82b692a2-d830-4d67-8f4f-412ea64732f0-kube-api-access-jf7bt\") pod \"openstack-cell1-galera-0\" (UID: 
\"82b692a2-d830-4d67-8f4f-412ea64732f0\") " pod="openstack/openstack-cell1-galera-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.437984 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/82b692a2-d830-4d67-8f4f-412ea64732f0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"82b692a2-d830-4d67-8f4f-412ea64732f0\") " pod="openstack/openstack-cell1-galera-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.438089 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82b692a2-d830-4d67-8f4f-412ea64732f0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"82b692a2-d830-4d67-8f4f-412ea64732f0\") " pod="openstack/openstack-cell1-galera-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.438166 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/82b692a2-d830-4d67-8f4f-412ea64732f0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"82b692a2-d830-4d67-8f4f-412ea64732f0\") " pod="openstack/openstack-cell1-galera-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.438243 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/82b692a2-d830-4d67-8f4f-412ea64732f0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"82b692a2-d830-4d67-8f4f-412ea64732f0\") " pod="openstack/openstack-cell1-galera-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.462094 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"060e8340-b39a-4aec-9d9a-e6b8dc616c8b","Type":"ContainerStarted","Data":"cbf95affac0abf07cdd47daede81be5e0f81b016361f7adc2df7d088d939f1fc"} Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.541827 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"82b692a2-d830-4d67-8f4f-412ea64732f0\") " pod="openstack/openstack-cell1-galera-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.541921 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf7bt\" (UniqueName: \"kubernetes.io/projected/82b692a2-d830-4d67-8f4f-412ea64732f0-kube-api-access-jf7bt\") pod \"openstack-cell1-galera-0\" (UID: \"82b692a2-d830-4d67-8f4f-412ea64732f0\") " pod="openstack/openstack-cell1-galera-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.542128 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/82b692a2-d830-4d67-8f4f-412ea64732f0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"82b692a2-d830-4d67-8f4f-412ea64732f0\") " pod="openstack/openstack-cell1-galera-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.542248 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82b692a2-d830-4d67-8f4f-412ea64732f0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"82b692a2-d830-4d67-8f4f-412ea64732f0\") " pod="openstack/openstack-cell1-galera-0" Nov 23 06:57:06 crc 
kubenswrapper[4681]: I1123 06:57:06.542295 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/82b692a2-d830-4d67-8f4f-412ea64732f0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"82b692a2-d830-4d67-8f4f-412ea64732f0\") " pod="openstack/openstack-cell1-galera-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.542366 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/82b692a2-d830-4d67-8f4f-412ea64732f0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"82b692a2-d830-4d67-8f4f-412ea64732f0\") " pod="openstack/openstack-cell1-galera-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.544806 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/82b692a2-d830-4d67-8f4f-412ea64732f0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"82b692a2-d830-4d67-8f4f-412ea64732f0\") " pod="openstack/openstack-cell1-galera-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.544898 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"82b692a2-d830-4d67-8f4f-412ea64732f0\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-cell1-galera-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.548622 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/82b692a2-d830-4d67-8f4f-412ea64732f0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"82b692a2-d830-4d67-8f4f-412ea64732f0\") " pod="openstack/openstack-cell1-galera-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.549383 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/82b692a2-d830-4d67-8f4f-412ea64732f0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"82b692a2-d830-4d67-8f4f-412ea64732f0\") " pod="openstack/openstack-cell1-galera-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.550559 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82b692a2-d830-4d67-8f4f-412ea64732f0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"82b692a2-d830-4d67-8f4f-412ea64732f0\") " pod="openstack/openstack-cell1-galera-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.550584 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/82b692a2-d830-4d67-8f4f-412ea64732f0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"82b692a2-d830-4d67-8f4f-412ea64732f0\") " pod="openstack/openstack-cell1-galera-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.552919 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82b692a2-d830-4d67-8f4f-412ea64732f0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"82b692a2-d830-4d67-8f4f-412ea64732f0\") " pod="openstack/openstack-cell1-galera-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.562095 4681 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82b692a2-d830-4d67-8f4f-412ea64732f0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"82b692a2-d830-4d67-8f4f-412ea64732f0\") " pod="openstack/openstack-cell1-galera-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.568221 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.569555 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.579243 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-hndx8" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.579493 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.579956 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.583718 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/82b692a2-d830-4d67-8f4f-412ea64732f0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"82b692a2-d830-4d67-8f4f-412ea64732f0\") " pod="openstack/openstack-cell1-galera-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.585000 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf7bt\" (UniqueName: \"kubernetes.io/projected/82b692a2-d830-4d67-8f4f-412ea64732f0-kube-api-access-jf7bt\") pod \"openstack-cell1-galera-0\" (UID: \"82b692a2-d830-4d67-8f4f-412ea64732f0\") " pod="openstack/openstack-cell1-galera-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.618830 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"82b692a2-d830-4d67-8f4f-412ea64732f0\") " pod="openstack/openstack-cell1-galera-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.635793 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.648307 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.655874 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d962891-0f50-49d9-baac-7d9262edb968-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6d962891-0f50-49d9-baac-7d9262edb968\") " pod="openstack/memcached-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.655959 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqjt2\" (UniqueName: \"kubernetes.io/projected/6d962891-0f50-49d9-baac-7d9262edb968-kube-api-access-dqjt2\") pod \"memcached-0\" (UID: \"6d962891-0f50-49d9-baac-7d9262edb968\") " pod="openstack/memcached-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.656014 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6d962891-0f50-49d9-baac-7d9262edb968-kolla-config\") pod \"memcached-0\" (UID: \"6d962891-0f50-49d9-baac-7d9262edb968\") " pod="openstack/memcached-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.656032 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d962891-0f50-49d9-baac-7d9262edb968-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6d962891-0f50-49d9-baac-7d9262edb968\") " pod="openstack/memcached-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.656072 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6d962891-0f50-49d9-baac-7d9262edb968-config-data\") pod \"memcached-0\" (UID: \"6d962891-0f50-49d9-baac-7d9262edb968\") " pod="openstack/memcached-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.758393 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqjt2\" (UniqueName: \"kubernetes.io/projected/6d962891-0f50-49d9-baac-7d9262edb968-kube-api-access-dqjt2\") pod \"memcached-0\" (UID: \"6d962891-0f50-49d9-baac-7d9262edb968\") " pod="openstack/memcached-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.758509 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6d962891-0f50-49d9-baac-7d9262edb968-kolla-config\") pod \"memcached-0\" (UID: \"6d962891-0f50-49d9-baac-7d9262edb968\") " pod="openstack/memcached-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.758545 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d962891-0f50-49d9-baac-7d9262edb968-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6d962891-0f50-49d9-baac-7d9262edb968\") " pod="openstack/memcached-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.758643 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6d962891-0f50-49d9-baac-7d9262edb968-config-data\") pod \"memcached-0\" (UID: \"6d962891-0f50-49d9-baac-7d9262edb968\") " pod="openstack/memcached-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.758715 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/6d962891-0f50-49d9-baac-7d9262edb968-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6d962891-0f50-49d9-baac-7d9262edb968\") " pod="openstack/memcached-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.760200 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6d962891-0f50-49d9-baac-7d9262edb968-kolla-config\") pod \"memcached-0\" (UID: \"6d962891-0f50-49d9-baac-7d9262edb968\") " pod="openstack/memcached-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.762305 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6d962891-0f50-49d9-baac-7d9262edb968-config-data\") pod \"memcached-0\" (UID: \"6d962891-0f50-49d9-baac-7d9262edb968\") " pod="openstack/memcached-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.766755 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d962891-0f50-49d9-baac-7d9262edb968-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6d962891-0f50-49d9-baac-7d9262edb968\") " pod="openstack/memcached-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.768943 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d962891-0f50-49d9-baac-7d9262edb968-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6d962891-0f50-49d9-baac-7d9262edb968\") " pod="openstack/memcached-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.827914 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqjt2\" (UniqueName: \"kubernetes.io/projected/6d962891-0f50-49d9-baac-7d9262edb968-kube-api-access-dqjt2\") pod \"memcached-0\" (UID: \"6d962891-0f50-49d9-baac-7d9262edb968\") " pod="openstack/memcached-0" Nov 23 06:57:06 crc kubenswrapper[4681]: I1123 06:57:06.976047 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 23 06:57:07 crc kubenswrapper[4681]: I1123 06:57:07.445743 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 23 06:57:07 crc kubenswrapper[4681]: W1123 06:57:07.466913 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82b692a2_d830_4d67_8f4f_412ea64732f0.slice/crio-c6dd5ff33cc81d7a3bc542c99d5702f08a9a63a49e9d3599c80e5665c21ece05 WatchSource:0}: Error finding container c6dd5ff33cc81d7a3bc542c99d5702f08a9a63a49e9d3599c80e5665c21ece05: Status 404 returned error can't find the container with id c6dd5ff33cc81d7a3bc542c99d5702f08a9a63a49e9d3599c80e5665c21ece05 Nov 23 06:57:07 crc kubenswrapper[4681]: I1123 06:57:07.629100 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 23 06:57:08 crc kubenswrapper[4681]: I1123 06:57:08.353415 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 06:57:08 crc kubenswrapper[4681]: I1123 06:57:08.354505 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 23 06:57:08 crc kubenswrapper[4681]: I1123 06:57:08.357615 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-bgs4z" Nov 23 06:57:08 crc kubenswrapper[4681]: I1123 06:57:08.410627 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhccv\" (UniqueName: \"kubernetes.io/projected/e4a72c64-9f8e-4403-b7e6-d78132e69cec-kube-api-access-dhccv\") pod \"kube-state-metrics-0\" (UID: \"e4a72c64-9f8e-4403-b7e6-d78132e69cec\") " pod="openstack/kube-state-metrics-0" Nov 23 06:57:08 crc kubenswrapper[4681]: I1123 06:57:08.440557 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 06:57:08 crc kubenswrapper[4681]: I1123 06:57:08.518133 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhccv\" (UniqueName: \"kubernetes.io/projected/e4a72c64-9f8e-4403-b7e6-d78132e69cec-kube-api-access-dhccv\") pod \"kube-state-metrics-0\" (UID: \"e4a72c64-9f8e-4403-b7e6-d78132e69cec\") " pod="openstack/kube-state-metrics-0" Nov 23 06:57:08 crc kubenswrapper[4681]: I1123 06:57:08.580845 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhccv\" (UniqueName: \"kubernetes.io/projected/e4a72c64-9f8e-4403-b7e6-d78132e69cec-kube-api-access-dhccv\") pod \"kube-state-metrics-0\" (UID: \"e4a72c64-9f8e-4403-b7e6-d78132e69cec\") " pod="openstack/kube-state-metrics-0" Nov 23 06:57:08 crc kubenswrapper[4681]: I1123 06:57:08.584927 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6d962891-0f50-49d9-baac-7d9262edb968","Type":"ContainerStarted","Data":"4a98863913803b1ad0df5cd3a1f2be800e06848030d53ec85914452b14feaeeb"} Nov 23 06:57:08 crc kubenswrapper[4681]: I1123 06:57:08.599586 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"82b692a2-d830-4d67-8f4f-412ea64732f0","Type":"ContainerStarted","Data":"c6dd5ff33cc81d7a3bc542c99d5702f08a9a63a49e9d3599c80e5665c21ece05"} Nov 23 06:57:08 crc kubenswrapper[4681]: I1123 06:57:08.690108 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 23 06:57:09 crc kubenswrapper[4681]: I1123 06:57:09.243066 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 06:57:09 crc kubenswrapper[4681]: I1123 06:57:09.639786 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e4a72c64-9f8e-4403-b7e6-d78132e69cec","Type":"ContainerStarted","Data":"a5c4f60dd2bdc8b600f11687a1349731e08ced99b4111489f5c2a241b24349e4"} Nov 23 06:57:11 crc kubenswrapper[4681]: I1123 06:57:11.003733 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9ns6l"] Nov 23 06:57:11 crc kubenswrapper[4681]: I1123 06:57:11.008968 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9ns6l" Nov 23 06:57:11 crc kubenswrapper[4681]: I1123 06:57:11.021113 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9ns6l"] Nov 23 06:57:11 crc kubenswrapper[4681]: I1123 06:57:11.072159 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn47x\" (UniqueName: \"kubernetes.io/projected/71234289-c188-4210-959c-41708f14cc66-kube-api-access-mn47x\") pod \"redhat-marketplace-9ns6l\" (UID: \"71234289-c188-4210-959c-41708f14cc66\") " pod="openshift-marketplace/redhat-marketplace-9ns6l" Nov 23 06:57:11 crc kubenswrapper[4681]: I1123 06:57:11.072213 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71234289-c188-4210-959c-41708f14cc66-catalog-content\") pod \"redhat-marketplace-9ns6l\" (UID: \"71234289-c188-4210-959c-41708f14cc66\") " pod="openshift-marketplace/redhat-marketplace-9ns6l" Nov 23 06:57:11 crc kubenswrapper[4681]: I1123 06:57:11.072288 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71234289-c188-4210-959c-41708f14cc66-utilities\") pod \"redhat-marketplace-9ns6l\" (UID: \"71234289-c188-4210-959c-41708f14cc66\") " pod="openshift-marketplace/redhat-marketplace-9ns6l" Nov 23 06:57:11 crc kubenswrapper[4681]: I1123 06:57:11.173977 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn47x\" (UniqueName: \"kubernetes.io/projected/71234289-c188-4210-959c-41708f14cc66-kube-api-access-mn47x\") pod \"redhat-marketplace-9ns6l\" (UID: \"71234289-c188-4210-959c-41708f14cc66\") " pod="openshift-marketplace/redhat-marketplace-9ns6l" Nov 23 06:57:11 crc kubenswrapper[4681]: I1123 06:57:11.174039 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71234289-c188-4210-959c-41708f14cc66-catalog-content\") pod \"redhat-marketplace-9ns6l\" (UID: \"71234289-c188-4210-959c-41708f14cc66\") " pod="openshift-marketplace/redhat-marketplace-9ns6l" Nov 23 06:57:11 crc kubenswrapper[4681]: I1123 06:57:11.174121 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71234289-c188-4210-959c-41708f14cc66-utilities\") pod \"redhat-marketplace-9ns6l\" (UID: \"71234289-c188-4210-959c-41708f14cc66\") " pod="openshift-marketplace/redhat-marketplace-9ns6l" Nov 23 06:57:11 crc kubenswrapper[4681]: I1123 06:57:11.175369 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71234289-c188-4210-959c-41708f14cc66-utilities\") pod \"redhat-marketplace-9ns6l\" (UID: \"71234289-c188-4210-959c-41708f14cc66\") " pod="openshift-marketplace/redhat-marketplace-9ns6l" Nov 23 06:57:11 crc kubenswrapper[4681]: I1123 06:57:11.175385 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71234289-c188-4210-959c-41708f14cc66-catalog-content\") pod \"redhat-marketplace-9ns6l\" (UID: \"71234289-c188-4210-959c-41708f14cc66\") " pod="openshift-marketplace/redhat-marketplace-9ns6l" Nov 23 06:57:11 crc kubenswrapper[4681]: I1123 06:57:11.219402 4681 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mn47x\" (UniqueName: \"kubernetes.io/projected/71234289-c188-4210-959c-41708f14cc66-kube-api-access-mn47x\") pod \"redhat-marketplace-9ns6l\" (UID: \"71234289-c188-4210-959c-41708f14cc66\") " pod="openshift-marketplace/redhat-marketplace-9ns6l" Nov 23 06:57:11 crc kubenswrapper[4681]: I1123 06:57:11.341453 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9ns6l" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.056077 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-n28qz"] Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.058627 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n28qz" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.060272 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-c2g6s" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.064083 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.065766 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.073933 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-xhmlv"] Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.083180 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-xhmlv" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.095110 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n28qz"] Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.136310 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-xhmlv"] Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.213845 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cfabf028-28e8-48fa-9536-a0e02622dc92-var-run\") pod \"ovn-controller-n28qz\" (UID: \"cfabf028-28e8-48fa-9536-a0e02622dc92\") " pod="openstack/ovn-controller-n28qz" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.214038 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9105f410-146a-49f8-9d52-7645e05430ef-var-log\") pod \"ovn-controller-ovs-xhmlv\" (UID: \"9105f410-146a-49f8-9d52-7645e05430ef\") " pod="openstack/ovn-controller-ovs-xhmlv" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.214145 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cfabf028-28e8-48fa-9536-a0e02622dc92-scripts\") pod \"ovn-controller-n28qz\" (UID: \"cfabf028-28e8-48fa-9536-a0e02622dc92\") " pod="openstack/ovn-controller-n28qz" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.214264 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh7ww\" (UniqueName: \"kubernetes.io/projected/cfabf028-28e8-48fa-9536-a0e02622dc92-kube-api-access-kh7ww\") pod \"ovn-controller-n28qz\" (UID: \"cfabf028-28e8-48fa-9536-a0e02622dc92\") " pod="openstack/ovn-controller-n28qz" Nov 23 06:57:12 crc 
kubenswrapper[4681]: I1123 06:57:12.214302 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cfabf028-28e8-48fa-9536-a0e02622dc92-var-log-ovn\") pod \"ovn-controller-n28qz\" (UID: \"cfabf028-28e8-48fa-9536-a0e02622dc92\") " pod="openstack/ovn-controller-n28qz" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.214488 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9105f410-146a-49f8-9d52-7645e05430ef-var-run\") pod \"ovn-controller-ovs-xhmlv\" (UID: \"9105f410-146a-49f8-9d52-7645e05430ef\") " pod="openstack/ovn-controller-ovs-xhmlv" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.214522 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9105f410-146a-49f8-9d52-7645e05430ef-scripts\") pod \"ovn-controller-ovs-xhmlv\" (UID: \"9105f410-146a-49f8-9d52-7645e05430ef\") " pod="openstack/ovn-controller-ovs-xhmlv" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.214568 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfabf028-28e8-48fa-9536-a0e02622dc92-ovn-controller-tls-certs\") pod \"ovn-controller-n28qz\" (UID: \"cfabf028-28e8-48fa-9536-a0e02622dc92\") " pod="openstack/ovn-controller-n28qz" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.214612 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9105f410-146a-49f8-9d52-7645e05430ef-etc-ovs\") pod \"ovn-controller-ovs-xhmlv\" (UID: \"9105f410-146a-49f8-9d52-7645e05430ef\") " pod="openstack/ovn-controller-ovs-xhmlv" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.214725 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfabf028-28e8-48fa-9536-a0e02622dc92-combined-ca-bundle\") pod \"ovn-controller-n28qz\" (UID: \"cfabf028-28e8-48fa-9536-a0e02622dc92\") " pod="openstack/ovn-controller-n28qz" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.214801 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9105f410-146a-49f8-9d52-7645e05430ef-var-lib\") pod \"ovn-controller-ovs-xhmlv\" (UID: \"9105f410-146a-49f8-9d52-7645e05430ef\") " pod="openstack/ovn-controller-ovs-xhmlv" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.214834 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k76kv\" (UniqueName: \"kubernetes.io/projected/9105f410-146a-49f8-9d52-7645e05430ef-kube-api-access-k76kv\") pod \"ovn-controller-ovs-xhmlv\" (UID: \"9105f410-146a-49f8-9d52-7645e05430ef\") " pod="openstack/ovn-controller-ovs-xhmlv" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.214898 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cfabf028-28e8-48fa-9536-a0e02622dc92-var-run-ovn\") pod \"ovn-controller-n28qz\" (UID: \"cfabf028-28e8-48fa-9536-a0e02622dc92\") " pod="openstack/ovn-controller-n28qz" Nov 23 06:57:12 crc 
kubenswrapper[4681]: I1123 06:57:12.296399 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.296491 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.296552 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.297193 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2a5abade0c31450ea18cad45860310cd823c68e49534b39a64b21095b8821bf8"} pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.297263 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" containerID="cri-o://2a5abade0c31450ea18cad45860310cd823c68e49534b39a64b21095b8821bf8" gracePeriod=600 Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.322588 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9105f410-146a-49f8-9d52-7645e05430ef-var-log\") pod \"ovn-controller-ovs-xhmlv\" (UID: \"9105f410-146a-49f8-9d52-7645e05430ef\") " pod="openstack/ovn-controller-ovs-xhmlv" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.323891 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9105f410-146a-49f8-9d52-7645e05430ef-var-log\") pod \"ovn-controller-ovs-xhmlv\" (UID: \"9105f410-146a-49f8-9d52-7645e05430ef\") " pod="openstack/ovn-controller-ovs-xhmlv" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.327357 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cfabf028-28e8-48fa-9536-a0e02622dc92-scripts\") pod \"ovn-controller-n28qz\" (UID: \"cfabf028-28e8-48fa-9536-a0e02622dc92\") " pod="openstack/ovn-controller-n28qz" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.327749 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh7ww\" (UniqueName: \"kubernetes.io/projected/cfabf028-28e8-48fa-9536-a0e02622dc92-kube-api-access-kh7ww\") pod \"ovn-controller-n28qz\" (UID: \"cfabf028-28e8-48fa-9536-a0e02622dc92\") " pod="openstack/ovn-controller-n28qz" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.327914 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cfabf028-28e8-48fa-9536-a0e02622dc92-var-log-ovn\") pod \"ovn-controller-n28qz\" (UID: 
\"cfabf028-28e8-48fa-9536-a0e02622dc92\") " pod="openstack/ovn-controller-n28qz" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.328120 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9105f410-146a-49f8-9d52-7645e05430ef-var-run\") pod \"ovn-controller-ovs-xhmlv\" (UID: \"9105f410-146a-49f8-9d52-7645e05430ef\") " pod="openstack/ovn-controller-ovs-xhmlv" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.328294 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9105f410-146a-49f8-9d52-7645e05430ef-scripts\") pod \"ovn-controller-ovs-xhmlv\" (UID: \"9105f410-146a-49f8-9d52-7645e05430ef\") " pod="openstack/ovn-controller-ovs-xhmlv" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.328498 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfabf028-28e8-48fa-9536-a0e02622dc92-ovn-controller-tls-certs\") pod \"ovn-controller-n28qz\" (UID: \"cfabf028-28e8-48fa-9536-a0e02622dc92\") " pod="openstack/ovn-controller-n28qz" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.328645 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9105f410-146a-49f8-9d52-7645e05430ef-etc-ovs\") pod \"ovn-controller-ovs-xhmlv\" (UID: \"9105f410-146a-49f8-9d52-7645e05430ef\") " pod="openstack/ovn-controller-ovs-xhmlv" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.328879 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfabf028-28e8-48fa-9536-a0e02622dc92-combined-ca-bundle\") pod \"ovn-controller-n28qz\" (UID: \"cfabf028-28e8-48fa-9536-a0e02622dc92\") " pod="openstack/ovn-controller-n28qz" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.329181 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9105f410-146a-49f8-9d52-7645e05430ef-var-lib\") pod \"ovn-controller-ovs-xhmlv\" (UID: \"9105f410-146a-49f8-9d52-7645e05430ef\") " pod="openstack/ovn-controller-ovs-xhmlv" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.329315 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k76kv\" (UniqueName: \"kubernetes.io/projected/9105f410-146a-49f8-9d52-7645e05430ef-kube-api-access-k76kv\") pod \"ovn-controller-ovs-xhmlv\" (UID: \"9105f410-146a-49f8-9d52-7645e05430ef\") " pod="openstack/ovn-controller-ovs-xhmlv" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.329716 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cfabf028-28e8-48fa-9536-a0e02622dc92-var-run-ovn\") pod \"ovn-controller-n28qz\" (UID: \"cfabf028-28e8-48fa-9536-a0e02622dc92\") " pod="openstack/ovn-controller-n28qz" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.329816 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cfabf028-28e8-48fa-9536-a0e02622dc92-var-run\") pod \"ovn-controller-n28qz\" (UID: \"cfabf028-28e8-48fa-9536-a0e02622dc92\") " pod="openstack/ovn-controller-n28qz" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.329894 4681 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cfabf028-28e8-48fa-9536-a0e02622dc92-scripts\") pod \"ovn-controller-n28qz\" (UID: \"cfabf028-28e8-48fa-9536-a0e02622dc92\") " pod="openstack/ovn-controller-n28qz" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.332891 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9105f410-146a-49f8-9d52-7645e05430ef-var-lib\") pod \"ovn-controller-ovs-xhmlv\" (UID: \"9105f410-146a-49f8-9d52-7645e05430ef\") " pod="openstack/ovn-controller-ovs-xhmlv" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.332919 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9105f410-146a-49f8-9d52-7645e05430ef-scripts\") pod \"ovn-controller-ovs-xhmlv\" (UID: \"9105f410-146a-49f8-9d52-7645e05430ef\") " pod="openstack/ovn-controller-ovs-xhmlv" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.334388 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cfabf028-28e8-48fa-9536-a0e02622dc92-var-run\") pod \"ovn-controller-n28qz\" (UID: \"cfabf028-28e8-48fa-9536-a0e02622dc92\") " pod="openstack/ovn-controller-n28qz" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.334403 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9105f410-146a-49f8-9d52-7645e05430ef-var-run\") pod \"ovn-controller-ovs-xhmlv\" (UID: \"9105f410-146a-49f8-9d52-7645e05430ef\") " pod="openstack/ovn-controller-ovs-xhmlv" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.334954 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cfabf028-28e8-48fa-9536-a0e02622dc92-var-run-ovn\") pod \"ovn-controller-n28qz\" (UID: \"cfabf028-28e8-48fa-9536-a0e02622dc92\") " pod="openstack/ovn-controller-n28qz" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.335618 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cfabf028-28e8-48fa-9536-a0e02622dc92-var-log-ovn\") pod \"ovn-controller-n28qz\" (UID: \"cfabf028-28e8-48fa-9536-a0e02622dc92\") " pod="openstack/ovn-controller-n28qz" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.335797 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9105f410-146a-49f8-9d52-7645e05430ef-etc-ovs\") pod \"ovn-controller-ovs-xhmlv\" (UID: \"9105f410-146a-49f8-9d52-7645e05430ef\") " pod="openstack/ovn-controller-ovs-xhmlv" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.344941 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfabf028-28e8-48fa-9536-a0e02622dc92-combined-ca-bundle\") pod \"ovn-controller-n28qz\" (UID: \"cfabf028-28e8-48fa-9536-a0e02622dc92\") " pod="openstack/ovn-controller-n28qz" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.357538 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfabf028-28e8-48fa-9536-a0e02622dc92-ovn-controller-tls-certs\") pod \"ovn-controller-n28qz\" (UID: \"cfabf028-28e8-48fa-9536-a0e02622dc92\") " pod="openstack/ovn-controller-n28qz" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 
06:57:12.361798 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k76kv\" (UniqueName: \"kubernetes.io/projected/9105f410-146a-49f8-9d52-7645e05430ef-kube-api-access-k76kv\") pod \"ovn-controller-ovs-xhmlv\" (UID: \"9105f410-146a-49f8-9d52-7645e05430ef\") " pod="openstack/ovn-controller-ovs-xhmlv" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.374426 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh7ww\" (UniqueName: \"kubernetes.io/projected/cfabf028-28e8-48fa-9536-a0e02622dc92-kube-api-access-kh7ww\") pod \"ovn-controller-n28qz\" (UID: \"cfabf028-28e8-48fa-9536-a0e02622dc92\") " pod="openstack/ovn-controller-n28qz" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.387190 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n28qz" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.421482 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-xhmlv" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.685568 4681 generic.go:334] "Generic (PLEG): container finished" podID="539dc58c-e752-43c8-bdef-af87528b76f3" containerID="2a5abade0c31450ea18cad45860310cd823c68e49534b39a64b21095b8821bf8" exitCode=0 Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.685651 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerDied","Data":"2a5abade0c31450ea18cad45860310cd823c68e49534b39a64b21095b8821bf8"} Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.685750 4681 scope.go:117] "RemoveContainer" containerID="9fa8fec50b296212aef5b2ad5824bdfb0e0ff8b77199951e5391ad3ba5cad98c" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.925487 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.932536 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.935368 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.936028 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-782b9" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.936179 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.936324 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.936775 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 23 06:57:12 crc kubenswrapper[4681]: I1123 06:57:12.939564 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 23 06:57:13 crc kubenswrapper[4681]: I1123 06:57:13.043780 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6279869c-b09d-48ed-8cb2-0cadf229a751-config\") pod \"ovsdbserver-nb-0\" (UID: \"6279869c-b09d-48ed-8cb2-0cadf229a751\") " pod="openstack/ovsdbserver-nb-0" Nov 23 06:57:13 crc kubenswrapper[4681]: I1123 06:57:13.043900 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6279869c-b09d-48ed-8cb2-0cadf229a751-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6279869c-b09d-48ed-8cb2-0cadf229a751\") " pod="openstack/ovsdbserver-nb-0" Nov 23 06:57:13 crc kubenswrapper[4681]: I1123 06:57:13.043987 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpxnm\" (UniqueName: \"kubernetes.io/projected/6279869c-b09d-48ed-8cb2-0cadf229a751-kube-api-access-hpxnm\") pod \"ovsdbserver-nb-0\" (UID: \"6279869c-b09d-48ed-8cb2-0cadf229a751\") " pod="openstack/ovsdbserver-nb-0" Nov 23 06:57:13 crc kubenswrapper[4681]: I1123 06:57:13.044107 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6279869c-b09d-48ed-8cb2-0cadf229a751-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6279869c-b09d-48ed-8cb2-0cadf229a751\") " pod="openstack/ovsdbserver-nb-0" Nov 23 06:57:13 crc kubenswrapper[4681]: I1123 06:57:13.044356 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6279869c-b09d-48ed-8cb2-0cadf229a751-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6279869c-b09d-48ed-8cb2-0cadf229a751\") " pod="openstack/ovsdbserver-nb-0" Nov 23 06:57:13 crc kubenswrapper[4681]: I1123 06:57:13.044490 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6279869c-b09d-48ed-8cb2-0cadf229a751\") " pod="openstack/ovsdbserver-nb-0" Nov 23 06:57:13 crc kubenswrapper[4681]: I1123 06:57:13.044576 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/6279869c-b09d-48ed-8cb2-0cadf229a751-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6279869c-b09d-48ed-8cb2-0cadf229a751\") " pod="openstack/ovsdbserver-nb-0" Nov 23 06:57:13 crc kubenswrapper[4681]: I1123 06:57:13.044623 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6279869c-b09d-48ed-8cb2-0cadf229a751-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6279869c-b09d-48ed-8cb2-0cadf229a751\") " pod="openstack/ovsdbserver-nb-0" Nov 23 06:57:13 crc kubenswrapper[4681]: I1123 06:57:13.146636 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6279869c-b09d-48ed-8cb2-0cadf229a751\") " pod="openstack/ovsdbserver-nb-0" Nov 23 06:57:13 crc kubenswrapper[4681]: I1123 06:57:13.146689 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6279869c-b09d-48ed-8cb2-0cadf229a751-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6279869c-b09d-48ed-8cb2-0cadf229a751\") " pod="openstack/ovsdbserver-nb-0" Nov 23 06:57:13 crc kubenswrapper[4681]: I1123 06:57:13.146731 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6279869c-b09d-48ed-8cb2-0cadf229a751-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6279869c-b09d-48ed-8cb2-0cadf229a751\") " pod="openstack/ovsdbserver-nb-0" Nov 23 06:57:13 crc kubenswrapper[4681]: I1123 06:57:13.146812 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6279869c-b09d-48ed-8cb2-0cadf229a751-config\") pod \"ovsdbserver-nb-0\" (UID: \"6279869c-b09d-48ed-8cb2-0cadf229a751\") " pod="openstack/ovsdbserver-nb-0" Nov 23 06:57:13 crc kubenswrapper[4681]: I1123 06:57:13.146847 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6279869c-b09d-48ed-8cb2-0cadf229a751-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6279869c-b09d-48ed-8cb2-0cadf229a751\") " pod="openstack/ovsdbserver-nb-0" Nov 23 06:57:13 crc kubenswrapper[4681]: I1123 06:57:13.146872 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpxnm\" (UniqueName: \"kubernetes.io/projected/6279869c-b09d-48ed-8cb2-0cadf229a751-kube-api-access-hpxnm\") pod \"ovsdbserver-nb-0\" (UID: \"6279869c-b09d-48ed-8cb2-0cadf229a751\") " pod="openstack/ovsdbserver-nb-0" Nov 23 06:57:13 crc kubenswrapper[4681]: I1123 06:57:13.146932 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6279869c-b09d-48ed-8cb2-0cadf229a751-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6279869c-b09d-48ed-8cb2-0cadf229a751\") " pod="openstack/ovsdbserver-nb-0" Nov 23 06:57:13 crc kubenswrapper[4681]: I1123 06:57:13.147035 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6279869c-b09d-48ed-8cb2-0cadf229a751-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6279869c-b09d-48ed-8cb2-0cadf229a751\") " pod="openstack/ovsdbserver-nb-0" Nov 23 06:57:13 crc kubenswrapper[4681]: I1123 
06:57:13.151330 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6279869c-b09d-48ed-8cb2-0cadf229a751-config\") pod \"ovsdbserver-nb-0\" (UID: \"6279869c-b09d-48ed-8cb2-0cadf229a751\") " pod="openstack/ovsdbserver-nb-0" Nov 23 06:57:13 crc kubenswrapper[4681]: I1123 06:57:13.151434 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6279869c-b09d-48ed-8cb2-0cadf229a751-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6279869c-b09d-48ed-8cb2-0cadf229a751\") " pod="openstack/ovsdbserver-nb-0" Nov 23 06:57:13 crc kubenswrapper[4681]: I1123 06:57:13.151718 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6279869c-b09d-48ed-8cb2-0cadf229a751\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-nb-0" Nov 23 06:57:13 crc kubenswrapper[4681]: I1123 06:57:13.157942 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6279869c-b09d-48ed-8cb2-0cadf229a751-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6279869c-b09d-48ed-8cb2-0cadf229a751\") " pod="openstack/ovsdbserver-nb-0" Nov 23 06:57:13 crc kubenswrapper[4681]: I1123 06:57:13.164148 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6279869c-b09d-48ed-8cb2-0cadf229a751-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6279869c-b09d-48ed-8cb2-0cadf229a751\") " pod="openstack/ovsdbserver-nb-0" Nov 23 06:57:13 crc kubenswrapper[4681]: I1123 06:57:13.170081 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6279869c-b09d-48ed-8cb2-0cadf229a751-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6279869c-b09d-48ed-8cb2-0cadf229a751\") " pod="openstack/ovsdbserver-nb-0" Nov 23 06:57:13 crc kubenswrapper[4681]: I1123 06:57:13.178671 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6279869c-b09d-48ed-8cb2-0cadf229a751-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6279869c-b09d-48ed-8cb2-0cadf229a751\") " pod="openstack/ovsdbserver-nb-0" Nov 23 06:57:13 crc kubenswrapper[4681]: I1123 06:57:13.199769 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6279869c-b09d-48ed-8cb2-0cadf229a751\") " pod="openstack/ovsdbserver-nb-0" Nov 23 06:57:13 crc kubenswrapper[4681]: I1123 06:57:13.211651 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpxnm\" (UniqueName: \"kubernetes.io/projected/6279869c-b09d-48ed-8cb2-0cadf229a751-kube-api-access-hpxnm\") pod \"ovsdbserver-nb-0\" (UID: \"6279869c-b09d-48ed-8cb2-0cadf229a751\") " pod="openstack/ovsdbserver-nb-0" Nov 23 06:57:13 crc kubenswrapper[4681]: I1123 06:57:13.267200 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 23 06:57:14 crc kubenswrapper[4681]: I1123 06:57:14.611759 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9ns6l"] Nov 23 06:57:14 crc kubenswrapper[4681]: W1123 06:57:14.900872 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71234289_c188_4210_959c_41708f14cc66.slice/crio-e9124d54e29030402a1fc463df3e39e00d400fc6849cce6d01bba514fa49e602 WatchSource:0}: Error finding container e9124d54e29030402a1fc463df3e39e00d400fc6849cce6d01bba514fa49e602: Status 404 returned error can't find the container with id e9124d54e29030402a1fc463df3e39e00d400fc6849cce6d01bba514fa49e602 Nov 23 06:57:14 crc kubenswrapper[4681]: I1123 06:57:14.983273 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n28qz"] Nov 23 06:57:15 crc kubenswrapper[4681]: I1123 06:57:15.532289 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-xhmlv"] Nov 23 06:57:15 crc kubenswrapper[4681]: I1123 06:57:15.740782 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerStarted","Data":"caed7cef552031860d421f500f9694e60cb9adcf543f62d9378ea4360e6a8866"} Nov 23 06:57:15 crc kubenswrapper[4681]: I1123 06:57:15.744503 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ns6l" event={"ID":"71234289-c188-4210-959c-41708f14cc66","Type":"ContainerStarted","Data":"e9124d54e29030402a1fc463df3e39e00d400fc6849cce6d01bba514fa49e602"} Nov 23 06:57:15 crc kubenswrapper[4681]: I1123 06:57:15.821835 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 23 06:57:15 crc kubenswrapper[4681]: I1123 06:57:15.832915 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 23 06:57:15 crc kubenswrapper[4681]: I1123 06:57:15.833085 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 23 06:57:15 crc kubenswrapper[4681]: I1123 06:57:15.835750 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 23 06:57:15 crc kubenswrapper[4681]: I1123 06:57:15.836052 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 23 06:57:15 crc kubenswrapper[4681]: I1123 06:57:15.836359 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 23 06:57:15 crc kubenswrapper[4681]: I1123 06:57:15.839307 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-ftl8n" Nov 23 06:57:15 crc kubenswrapper[4681]: I1123 06:57:15.944701 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/73a67420-240e-45aa-a05c-fccea0bd8bea-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"73a67420-240e-45aa-a05c-fccea0bd8bea\") " pod="openstack/ovsdbserver-sb-0" Nov 23 06:57:15 crc kubenswrapper[4681]: I1123 06:57:15.944769 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73a67420-240e-45aa-a05c-fccea0bd8bea-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"73a67420-240e-45aa-a05c-fccea0bd8bea\") " pod="openstack/ovsdbserver-sb-0" Nov 23 06:57:15 crc kubenswrapper[4681]: I1123 06:57:15.944821 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77pz5\" (UniqueName: \"kubernetes.io/projected/73a67420-240e-45aa-a05c-fccea0bd8bea-kube-api-access-77pz5\") pod \"ovsdbserver-sb-0\" (UID: \"73a67420-240e-45aa-a05c-fccea0bd8bea\") " pod="openstack/ovsdbserver-sb-0" Nov 23 06:57:15 crc kubenswrapper[4681]: I1123 06:57:15.945112 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73a67420-240e-45aa-a05c-fccea0bd8bea-config\") pod \"ovsdbserver-sb-0\" (UID: \"73a67420-240e-45aa-a05c-fccea0bd8bea\") " pod="openstack/ovsdbserver-sb-0" Nov 23 06:57:15 crc kubenswrapper[4681]: I1123 06:57:15.945251 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/73a67420-240e-45aa-a05c-fccea0bd8bea-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"73a67420-240e-45aa-a05c-fccea0bd8bea\") " pod="openstack/ovsdbserver-sb-0" Nov 23 06:57:15 crc kubenswrapper[4681]: I1123 06:57:15.945284 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"73a67420-240e-45aa-a05c-fccea0bd8bea\") " pod="openstack/ovsdbserver-sb-0" Nov 23 06:57:15 crc kubenswrapper[4681]: I1123 06:57:15.945395 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73a67420-240e-45aa-a05c-fccea0bd8bea-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"73a67420-240e-45aa-a05c-fccea0bd8bea\") " pod="openstack/ovsdbserver-sb-0" Nov 23 06:57:15 crc kubenswrapper[4681]: I1123 06:57:15.945419 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/73a67420-240e-45aa-a05c-fccea0bd8bea-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"73a67420-240e-45aa-a05c-fccea0bd8bea\") " pod="openstack/ovsdbserver-sb-0" Nov 23 06:57:16 crc kubenswrapper[4681]: I1123 06:57:16.047526 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/73a67420-240e-45aa-a05c-fccea0bd8bea-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"73a67420-240e-45aa-a05c-fccea0bd8bea\") " pod="openstack/ovsdbserver-sb-0" Nov 23 06:57:16 crc kubenswrapper[4681]: I1123 06:57:16.047614 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"73a67420-240e-45aa-a05c-fccea0bd8bea\") " pod="openstack/ovsdbserver-sb-0" Nov 23 06:57:16 crc kubenswrapper[4681]: I1123 06:57:16.047675 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73a67420-240e-45aa-a05c-fccea0bd8bea-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"73a67420-240e-45aa-a05c-fccea0bd8bea\") " pod="openstack/ovsdbserver-sb-0" Nov 23 06:57:16 crc kubenswrapper[4681]: I1123 06:57:16.047698 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/73a67420-240e-45aa-a05c-fccea0bd8bea-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"73a67420-240e-45aa-a05c-fccea0bd8bea\") " pod="openstack/ovsdbserver-sb-0" Nov 23 06:57:16 crc kubenswrapper[4681]: I1123 06:57:16.047788 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/73a67420-240e-45aa-a05c-fccea0bd8bea-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"73a67420-240e-45aa-a05c-fccea0bd8bea\") " pod="openstack/ovsdbserver-sb-0" Nov 23 06:57:16 crc kubenswrapper[4681]: I1123 06:57:16.047830 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73a67420-240e-45aa-a05c-fccea0bd8bea-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"73a67420-240e-45aa-a05c-fccea0bd8bea\") " pod="openstack/ovsdbserver-sb-0" Nov 23 06:57:16 crc kubenswrapper[4681]: I1123 06:57:16.047847 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77pz5\" (UniqueName: \"kubernetes.io/projected/73a67420-240e-45aa-a05c-fccea0bd8bea-kube-api-access-77pz5\") pod \"ovsdbserver-sb-0\" (UID: \"73a67420-240e-45aa-a05c-fccea0bd8bea\") " pod="openstack/ovsdbserver-sb-0" Nov 23 06:57:16 crc kubenswrapper[4681]: I1123 06:57:16.047933 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73a67420-240e-45aa-a05c-fccea0bd8bea-config\") pod \"ovsdbserver-sb-0\" (UID: \"73a67420-240e-45aa-a05c-fccea0bd8bea\") " pod="openstack/ovsdbserver-sb-0" Nov 23 06:57:16 crc kubenswrapper[4681]: I1123 06:57:16.047939 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"73a67420-240e-45aa-a05c-fccea0bd8bea\") device mount path \"/mnt/openstack/pv02\"" 
pod="openstack/ovsdbserver-sb-0" Nov 23 06:57:16 crc kubenswrapper[4681]: I1123 06:57:16.048734 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73a67420-240e-45aa-a05c-fccea0bd8bea-config\") pod \"ovsdbserver-sb-0\" (UID: \"73a67420-240e-45aa-a05c-fccea0bd8bea\") " pod="openstack/ovsdbserver-sb-0" Nov 23 06:57:16 crc kubenswrapper[4681]: I1123 06:57:16.049523 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73a67420-240e-45aa-a05c-fccea0bd8bea-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"73a67420-240e-45aa-a05c-fccea0bd8bea\") " pod="openstack/ovsdbserver-sb-0" Nov 23 06:57:16 crc kubenswrapper[4681]: I1123 06:57:16.050335 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/73a67420-240e-45aa-a05c-fccea0bd8bea-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"73a67420-240e-45aa-a05c-fccea0bd8bea\") " pod="openstack/ovsdbserver-sb-0" Nov 23 06:57:16 crc kubenswrapper[4681]: I1123 06:57:16.056443 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/73a67420-240e-45aa-a05c-fccea0bd8bea-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"73a67420-240e-45aa-a05c-fccea0bd8bea\") " pod="openstack/ovsdbserver-sb-0" Nov 23 06:57:16 crc kubenswrapper[4681]: I1123 06:57:16.056943 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73a67420-240e-45aa-a05c-fccea0bd8bea-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"73a67420-240e-45aa-a05c-fccea0bd8bea\") " pod="openstack/ovsdbserver-sb-0" Nov 23 06:57:16 crc kubenswrapper[4681]: I1123 06:57:16.070338 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/73a67420-240e-45aa-a05c-fccea0bd8bea-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"73a67420-240e-45aa-a05c-fccea0bd8bea\") " pod="openstack/ovsdbserver-sb-0" Nov 23 06:57:16 crc kubenswrapper[4681]: I1123 06:57:16.073436 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77pz5\" (UniqueName: \"kubernetes.io/projected/73a67420-240e-45aa-a05c-fccea0bd8bea-kube-api-access-77pz5\") pod \"ovsdbserver-sb-0\" (UID: \"73a67420-240e-45aa-a05c-fccea0bd8bea\") " pod="openstack/ovsdbserver-sb-0" Nov 23 06:57:16 crc kubenswrapper[4681]: I1123 06:57:16.083038 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"73a67420-240e-45aa-a05c-fccea0bd8bea\") " pod="openstack/ovsdbserver-sb-0" Nov 23 06:57:16 crc kubenswrapper[4681]: I1123 06:57:16.164501 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 23 06:57:16 crc kubenswrapper[4681]: W1123 06:57:16.290538 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfabf028_28e8_48fa_9536_a0e02622dc92.slice/crio-b8f358eb0103db2b6c2e02a522a6fc270920c72aa98d720e8ac5d0c58fce731e WatchSource:0}: Error finding container b8f358eb0103db2b6c2e02a522a6fc270920c72aa98d720e8ac5d0c58fce731e: Status 404 returned error can't find the container with id b8f358eb0103db2b6c2e02a522a6fc270920c72aa98d720e8ac5d0c58fce731e Nov 23 06:57:16 crc kubenswrapper[4681]: I1123 06:57:16.455691 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 23 06:57:16 crc kubenswrapper[4681]: I1123 06:57:16.753271 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n28qz" event={"ID":"cfabf028-28e8-48fa-9536-a0e02622dc92","Type":"ContainerStarted","Data":"b8f358eb0103db2b6c2e02a522a6fc270920c72aa98d720e8ac5d0c58fce731e"} Nov 23 06:57:19 crc kubenswrapper[4681]: W1123 06:57:19.904191 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9105f410_146a_49f8_9d52_7645e05430ef.slice/crio-74288eb395ffe0f5308a8d99dc2f7b43008041affc6d331608bf3782ac2cf45a WatchSource:0}: Error finding container 74288eb395ffe0f5308a8d99dc2f7b43008041affc6d331608bf3782ac2cf45a: Status 404 returned error can't find the container with id 74288eb395ffe0f5308a8d99dc2f7b43008041affc6d331608bf3782ac2cf45a Nov 23 06:57:20 crc kubenswrapper[4681]: W1123 06:57:20.500876 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6279869c_b09d_48ed_8cb2_0cadf229a751.slice/crio-28cf20aa5ffd9514cda7cbbb0f854af4aced66da9d0e1bea1159b8c368f3404f WatchSource:0}: Error finding container 28cf20aa5ffd9514cda7cbbb0f854af4aced66da9d0e1bea1159b8c368f3404f: Status 404 returned error can't find the container with id 28cf20aa5ffd9514cda7cbbb0f854af4aced66da9d0e1bea1159b8c368f3404f Nov 23 06:57:20 crc kubenswrapper[4681]: I1123 06:57:20.791608 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6279869c-b09d-48ed-8cb2-0cadf229a751","Type":"ContainerStarted","Data":"28cf20aa5ffd9514cda7cbbb0f854af4aced66da9d0e1bea1159b8c368f3404f"} Nov 23 06:57:20 crc kubenswrapper[4681]: I1123 06:57:20.794788 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xhmlv" event={"ID":"9105f410-146a-49f8-9d52-7645e05430ef","Type":"ContainerStarted","Data":"74288eb395ffe0f5308a8d99dc2f7b43008041affc6d331608bf3782ac2cf45a"} Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.266898 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-q5dl6"] Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.268600 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-q5dl6" Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.272569 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.276657 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-q5dl6"] Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.361018 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/43e7a8e3-e4fc-4b55-88dc-abcc98edb88a-ovn-rundir\") pod \"ovn-controller-metrics-q5dl6\" (UID: \"43e7a8e3-e4fc-4b55-88dc-abcc98edb88a\") " pod="openstack/ovn-controller-metrics-q5dl6" Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.361068 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43e7a8e3-e4fc-4b55-88dc-abcc98edb88a-combined-ca-bundle\") pod \"ovn-controller-metrics-q5dl6\" (UID: \"43e7a8e3-e4fc-4b55-88dc-abcc98edb88a\") " pod="openstack/ovn-controller-metrics-q5dl6" Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.361100 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/43e7a8e3-e4fc-4b55-88dc-abcc98edb88a-ovs-rundir\") pod \"ovn-controller-metrics-q5dl6\" (UID: \"43e7a8e3-e4fc-4b55-88dc-abcc98edb88a\") " pod="openstack/ovn-controller-metrics-q5dl6" Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.361479 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43e7a8e3-e4fc-4b55-88dc-abcc98edb88a-config\") pod \"ovn-controller-metrics-q5dl6\" (UID: \"43e7a8e3-e4fc-4b55-88dc-abcc98edb88a\") " pod="openstack/ovn-controller-metrics-q5dl6" Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.361517 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgj6d\" (UniqueName: \"kubernetes.io/projected/43e7a8e3-e4fc-4b55-88dc-abcc98edb88a-kube-api-access-jgj6d\") pod \"ovn-controller-metrics-q5dl6\" (UID: \"43e7a8e3-e4fc-4b55-88dc-abcc98edb88a\") " pod="openstack/ovn-controller-metrics-q5dl6" Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.361592 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/43e7a8e3-e4fc-4b55-88dc-abcc98edb88a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-q5dl6\" (UID: \"43e7a8e3-e4fc-4b55-88dc-abcc98edb88a\") " pod="openstack/ovn-controller-metrics-q5dl6" Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.442213 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-759c6cc4df-dzqg6"] Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.463727 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/43e7a8e3-e4fc-4b55-88dc-abcc98edb88a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-q5dl6\" (UID: \"43e7a8e3-e4fc-4b55-88dc-abcc98edb88a\") " pod="openstack/ovn-controller-metrics-q5dl6" Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.463817 4681 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/43e7a8e3-e4fc-4b55-88dc-abcc98edb88a-ovn-rundir\") pod \"ovn-controller-metrics-q5dl6\" (UID: \"43e7a8e3-e4fc-4b55-88dc-abcc98edb88a\") " pod="openstack/ovn-controller-metrics-q5dl6" Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.463848 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43e7a8e3-e4fc-4b55-88dc-abcc98edb88a-combined-ca-bundle\") pod \"ovn-controller-metrics-q5dl6\" (UID: \"43e7a8e3-e4fc-4b55-88dc-abcc98edb88a\") " pod="openstack/ovn-controller-metrics-q5dl6" Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.463870 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/43e7a8e3-e4fc-4b55-88dc-abcc98edb88a-ovs-rundir\") pod \"ovn-controller-metrics-q5dl6\" (UID: \"43e7a8e3-e4fc-4b55-88dc-abcc98edb88a\") " pod="openstack/ovn-controller-metrics-q5dl6" Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.463934 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43e7a8e3-e4fc-4b55-88dc-abcc98edb88a-config\") pod \"ovn-controller-metrics-q5dl6\" (UID: \"43e7a8e3-e4fc-4b55-88dc-abcc98edb88a\") " pod="openstack/ovn-controller-metrics-q5dl6" Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.463951 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgj6d\" (UniqueName: \"kubernetes.io/projected/43e7a8e3-e4fc-4b55-88dc-abcc98edb88a-kube-api-access-jgj6d\") pod \"ovn-controller-metrics-q5dl6\" (UID: \"43e7a8e3-e4fc-4b55-88dc-abcc98edb88a\") " pod="openstack/ovn-controller-metrics-q5dl6" Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.464424 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/43e7a8e3-e4fc-4b55-88dc-abcc98edb88a-ovs-rundir\") pod \"ovn-controller-metrics-q5dl6\" (UID: \"43e7a8e3-e4fc-4b55-88dc-abcc98edb88a\") " pod="openstack/ovn-controller-metrics-q5dl6" Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.464443 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/43e7a8e3-e4fc-4b55-88dc-abcc98edb88a-ovn-rundir\") pod \"ovn-controller-metrics-q5dl6\" (UID: \"43e7a8e3-e4fc-4b55-88dc-abcc98edb88a\") " pod="openstack/ovn-controller-metrics-q5dl6" Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.464979 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43e7a8e3-e4fc-4b55-88dc-abcc98edb88a-config\") pod \"ovn-controller-metrics-q5dl6\" (UID: \"43e7a8e3-e4fc-4b55-88dc-abcc98edb88a\") " pod="openstack/ovn-controller-metrics-q5dl6" Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.470989 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43e7a8e3-e4fc-4b55-88dc-abcc98edb88a-combined-ca-bundle\") pod \"ovn-controller-metrics-q5dl6\" (UID: \"43e7a8e3-e4fc-4b55-88dc-abcc98edb88a\") " pod="openstack/ovn-controller-metrics-q5dl6" Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.470985 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/43e7a8e3-e4fc-4b55-88dc-abcc98edb88a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-q5dl6\" (UID: \"43e7a8e3-e4fc-4b55-88dc-abcc98edb88a\") " pod="openstack/ovn-controller-metrics-q5dl6" Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.471748 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84d4c64565-zpxxw"] Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.473136 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84d4c64565-zpxxw" Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.477813 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.485485 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgj6d\" (UniqueName: \"kubernetes.io/projected/43e7a8e3-e4fc-4b55-88dc-abcc98edb88a-kube-api-access-jgj6d\") pod \"ovn-controller-metrics-q5dl6\" (UID: \"43e7a8e3-e4fc-4b55-88dc-abcc98edb88a\") " pod="openstack/ovn-controller-metrics-q5dl6" Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.494031 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84d4c64565-zpxxw"] Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.566586 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kf7g\" (UniqueName: \"kubernetes.io/projected/33139b07-8e6f-45bd-b1d3-e1c16ac57d43-kube-api-access-7kf7g\") pod \"dnsmasq-dns-84d4c64565-zpxxw\" (UID: \"33139b07-8e6f-45bd-b1d3-e1c16ac57d43\") " pod="openstack/dnsmasq-dns-84d4c64565-zpxxw" Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.566690 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33139b07-8e6f-45bd-b1d3-e1c16ac57d43-config\") pod \"dnsmasq-dns-84d4c64565-zpxxw\" (UID: \"33139b07-8e6f-45bd-b1d3-e1c16ac57d43\") " pod="openstack/dnsmasq-dns-84d4c64565-zpxxw" Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.566881 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33139b07-8e6f-45bd-b1d3-e1c16ac57d43-ovsdbserver-nb\") pod \"dnsmasq-dns-84d4c64565-zpxxw\" (UID: \"33139b07-8e6f-45bd-b1d3-e1c16ac57d43\") " pod="openstack/dnsmasq-dns-84d4c64565-zpxxw" Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.566920 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33139b07-8e6f-45bd-b1d3-e1c16ac57d43-dns-svc\") pod \"dnsmasq-dns-84d4c64565-zpxxw\" (UID: \"33139b07-8e6f-45bd-b1d3-e1c16ac57d43\") " pod="openstack/dnsmasq-dns-84d4c64565-zpxxw" Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.587887 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-q5dl6" Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.669322 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33139b07-8e6f-45bd-b1d3-e1c16ac57d43-ovsdbserver-nb\") pod \"dnsmasq-dns-84d4c64565-zpxxw\" (UID: \"33139b07-8e6f-45bd-b1d3-e1c16ac57d43\") " pod="openstack/dnsmasq-dns-84d4c64565-zpxxw" Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.669388 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33139b07-8e6f-45bd-b1d3-e1c16ac57d43-dns-svc\") pod \"dnsmasq-dns-84d4c64565-zpxxw\" (UID: \"33139b07-8e6f-45bd-b1d3-e1c16ac57d43\") " pod="openstack/dnsmasq-dns-84d4c64565-zpxxw" Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.669439 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kf7g\" (UniqueName: \"kubernetes.io/projected/33139b07-8e6f-45bd-b1d3-e1c16ac57d43-kube-api-access-7kf7g\") pod \"dnsmasq-dns-84d4c64565-zpxxw\" (UID: \"33139b07-8e6f-45bd-b1d3-e1c16ac57d43\") " pod="openstack/dnsmasq-dns-84d4c64565-zpxxw" Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.669515 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33139b07-8e6f-45bd-b1d3-e1c16ac57d43-config\") pod \"dnsmasq-dns-84d4c64565-zpxxw\" (UID: \"33139b07-8e6f-45bd-b1d3-e1c16ac57d43\") " pod="openstack/dnsmasq-dns-84d4c64565-zpxxw" Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.670347 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33139b07-8e6f-45bd-b1d3-e1c16ac57d43-ovsdbserver-nb\") pod \"dnsmasq-dns-84d4c64565-zpxxw\" (UID: \"33139b07-8e6f-45bd-b1d3-e1c16ac57d43\") " pod="openstack/dnsmasq-dns-84d4c64565-zpxxw" Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.670399 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33139b07-8e6f-45bd-b1d3-e1c16ac57d43-dns-svc\") pod \"dnsmasq-dns-84d4c64565-zpxxw\" (UID: \"33139b07-8e6f-45bd-b1d3-e1c16ac57d43\") " pod="openstack/dnsmasq-dns-84d4c64565-zpxxw" Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.670818 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33139b07-8e6f-45bd-b1d3-e1c16ac57d43-config\") pod \"dnsmasq-dns-84d4c64565-zpxxw\" (UID: \"33139b07-8e6f-45bd-b1d3-e1c16ac57d43\") " pod="openstack/dnsmasq-dns-84d4c64565-zpxxw" Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.690395 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kf7g\" (UniqueName: \"kubernetes.io/projected/33139b07-8e6f-45bd-b1d3-e1c16ac57d43-kube-api-access-7kf7g\") pod \"dnsmasq-dns-84d4c64565-zpxxw\" (UID: \"33139b07-8e6f-45bd-b1d3-e1c16ac57d43\") " pod="openstack/dnsmasq-dns-84d4c64565-zpxxw" Nov 23 06:57:25 crc kubenswrapper[4681]: I1123 06:57:25.831804 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84d4c64565-zpxxw" Nov 23 06:57:29 crc kubenswrapper[4681]: I1123 06:57:29.588763 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pxjhh"] Nov 23 06:57:29 crc kubenswrapper[4681]: I1123 06:57:29.592263 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pxjhh" Nov 23 06:57:29 crc kubenswrapper[4681]: I1123 06:57:29.629122 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pxjhh"] Nov 23 06:57:29 crc kubenswrapper[4681]: I1123 06:57:29.664346 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06c9baf9-fa51-4d38-a5ce-15bc36e7e610-catalog-content\") pod \"certified-operators-pxjhh\" (UID: \"06c9baf9-fa51-4d38-a5ce-15bc36e7e610\") " pod="openshift-marketplace/certified-operators-pxjhh" Nov 23 06:57:29 crc kubenswrapper[4681]: I1123 06:57:29.664402 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06c9baf9-fa51-4d38-a5ce-15bc36e7e610-utilities\") pod \"certified-operators-pxjhh\" (UID: \"06c9baf9-fa51-4d38-a5ce-15bc36e7e610\") " pod="openshift-marketplace/certified-operators-pxjhh" Nov 23 06:57:29 crc kubenswrapper[4681]: I1123 06:57:29.664448 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rc2c\" (UniqueName: \"kubernetes.io/projected/06c9baf9-fa51-4d38-a5ce-15bc36e7e610-kube-api-access-6rc2c\") pod \"certified-operators-pxjhh\" (UID: \"06c9baf9-fa51-4d38-a5ce-15bc36e7e610\") " pod="openshift-marketplace/certified-operators-pxjhh" Nov 23 06:57:29 crc kubenswrapper[4681]: I1123 06:57:29.766864 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06c9baf9-fa51-4d38-a5ce-15bc36e7e610-catalog-content\") pod \"certified-operators-pxjhh\" (UID: \"06c9baf9-fa51-4d38-a5ce-15bc36e7e610\") " pod="openshift-marketplace/certified-operators-pxjhh" Nov 23 06:57:29 crc kubenswrapper[4681]: I1123 06:57:29.766399 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06c9baf9-fa51-4d38-a5ce-15bc36e7e610-catalog-content\") pod \"certified-operators-pxjhh\" (UID: \"06c9baf9-fa51-4d38-a5ce-15bc36e7e610\") " pod="openshift-marketplace/certified-operators-pxjhh" Nov 23 06:57:29 crc kubenswrapper[4681]: I1123 06:57:29.769061 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06c9baf9-fa51-4d38-a5ce-15bc36e7e610-utilities\") pod \"certified-operators-pxjhh\" (UID: \"06c9baf9-fa51-4d38-a5ce-15bc36e7e610\") " pod="openshift-marketplace/certified-operators-pxjhh" Nov 23 06:57:29 crc kubenswrapper[4681]: I1123 06:57:29.769175 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rc2c\" (UniqueName: \"kubernetes.io/projected/06c9baf9-fa51-4d38-a5ce-15bc36e7e610-kube-api-access-6rc2c\") pod \"certified-operators-pxjhh\" (UID: \"06c9baf9-fa51-4d38-a5ce-15bc36e7e610\") " pod="openshift-marketplace/certified-operators-pxjhh" Nov 23 06:57:29 crc kubenswrapper[4681]: I1123 06:57:29.769367 4681 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06c9baf9-fa51-4d38-a5ce-15bc36e7e610-utilities\") pod \"certified-operators-pxjhh\" (UID: \"06c9baf9-fa51-4d38-a5ce-15bc36e7e610\") " pod="openshift-marketplace/certified-operators-pxjhh" Nov 23 06:57:29 crc kubenswrapper[4681]: I1123 06:57:29.788026 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rc2c\" (UniqueName: \"kubernetes.io/projected/06c9baf9-fa51-4d38-a5ce-15bc36e7e610-kube-api-access-6rc2c\") pod \"certified-operators-pxjhh\" (UID: \"06c9baf9-fa51-4d38-a5ce-15bc36e7e610\") " pod="openshift-marketplace/certified-operators-pxjhh" Nov 23 06:57:29 crc kubenswrapper[4681]: I1123 06:57:29.929655 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pxjhh" Nov 23 06:57:30 crc kubenswrapper[4681]: E1123 06:57:30.554713 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:8e43c662a6abf8c9a07ada252f8dc6af" Nov 23 06:57:30 crc kubenswrapper[4681]: E1123 06:57:30.554796 4681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:8e43c662a6abf8c9a07ada252f8dc6af" Nov 23 06:57:30 crc kubenswrapper[4681]: E1123 06:57:30.555001 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:8e43c662a6abf8c9a07ada252f8dc6af,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-99v8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Volume
Device{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(060e8340-b39a-4aec-9d9a-e6b8dc616c8b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 06:57:30 crc kubenswrapper[4681]: E1123 06:57:30.556263 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="060e8340-b39a-4aec-9d9a-e6b8dc616c8b" Nov 23 06:57:30 crc kubenswrapper[4681]: E1123 06:57:30.900483 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:8e43c662a6abf8c9a07ada252f8dc6af\\\"\"" pod="openstack/openstack-galera-0" podUID="060e8340-b39a-4aec-9d9a-e6b8dc616c8b" Nov 23 06:57:32 crc kubenswrapper[4681]: E1123 06:57:32.398554 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:8e43c662a6abf8c9a07ada252f8dc6af" Nov 23 06:57:32 crc kubenswrapper[4681]: E1123 06:57:32.398663 4681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:8e43c662a6abf8c9a07ada252f8dc6af" Nov 23 06:57:32 crc kubenswrapper[4681]: E1123 06:57:32.398990 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:8e43c662a6abf8c9a07ada252f8dc6af,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dh2bt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(7e93be3c-dcb6-4105-868c-645d5c8c7bd0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 06:57:32 crc kubenswrapper[4681]: E1123 06:57:32.400274 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="7e93be3c-dcb6-4105-868c-645d5c8c7bd0" Nov 23 06:57:32 crc kubenswrapper[4681]: E1123 06:57:32.925088 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:8e43c662a6abf8c9a07ada252f8dc6af\\\"\"" pod="openstack/rabbitmq-server-0" podUID="7e93be3c-dcb6-4105-868c-645d5c8c7bd0" Nov 23 06:57:33 crc kubenswrapper[4681]: E1123 06:57:33.294211 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:8e43c662a6abf8c9a07ada252f8dc6af" Nov 23 06:57:33 crc kubenswrapper[4681]: E1123 06:57:33.294296 4681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:8e43c662a6abf8c9a07ada252f8dc6af" Nov 23 06:57:33 crc kubenswrapper[4681]: E1123 06:57:33.294491 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:8e43c662a6abf8c9a07ada252f8dc6af,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v94sg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-759c6cc4df-dzqg6_openstack(7cbcfef5-7505-4087-9c1a-330a353ffdef): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 06:57:33 crc kubenswrapper[4681]: E1123 06:57:33.295893 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-759c6cc4df-dzqg6" podUID="7cbcfef5-7505-4087-9c1a-330a353ffdef" Nov 23 06:57:33 crc kubenswrapper[4681]: E1123 06:57:33.303652 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:8e43c662a6abf8c9a07ada252f8dc6af" Nov 23 06:57:33 crc kubenswrapper[4681]: E1123 06:57:33.303698 4681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:8e43c662a6abf8c9a07ada252f8dc6af" Nov 23 06:57:33 crc kubenswrapper[4681]: E1123 06:57:33.303877 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:8e43c662a6abf8c9a07ada252f8dc6af,Command:[/bin/bash],Args:[-c 
dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vvwzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-84bd59c769-6gc7p_openstack(8428f96f-f79d-4907-a7f2-b1a16505637d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 06:57:33 crc kubenswrapper[4681]: E1123 06:57:33.305046 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-84bd59c769-6gc7p" podUID="8428f96f-f79d-4907-a7f2-b1a16505637d" Nov 23 06:57:33 crc kubenswrapper[4681]: E1123 06:57:33.332670 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:8e43c662a6abf8c9a07ada252f8dc6af" Nov 23 06:57:33 crc kubenswrapper[4681]: E1123 06:57:33.333057 4681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:8e43c662a6abf8c9a07ada252f8dc6af" Nov 23 06:57:33 crc kubenswrapper[4681]: E1123 06:57:33.333232 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:8e43c662a6abf8c9a07ada252f8dc6af,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins 
/operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-24tjj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(6e2ff794-284c-406f-a815-9efec112c044): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 06:57:33 crc kubenswrapper[4681]: E1123 06:57:33.334439 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="6e2ff794-284c-406f-a815-9efec112c044" Nov 23 06:57:33 crc kubenswrapper[4681]: E1123 06:57:33.339773 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:8e43c662a6abf8c9a07ada252f8dc6af" Nov 23 06:57:33 crc kubenswrapper[4681]: E1123 06:57:33.339845 4681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:8e43c662a6abf8c9a07ada252f8dc6af" Nov 23 06:57:33 crc kubenswrapper[4681]: E1123 06:57:33.340012 4681 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:8e43c662a6abf8c9a07ada252f8dc6af,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zf2qw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6794664cc7-668tk_openstack(df2f7070-559d-4a37-b85e-7596aff7007d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 06:57:33 crc kubenswrapper[4681]: E1123 06:57:33.342802 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6794664cc7-668tk" podUID="df2f7070-559d-4a37-b85e-7596aff7007d" Nov 23 06:57:33 crc kubenswrapper[4681]: E1123 06:57:33.344370 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:8e43c662a6abf8c9a07ada252f8dc6af" Nov 23 06:57:33 crc kubenswrapper[4681]: E1123 06:57:33.344432 4681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:8e43c662a6abf8c9a07ada252f8dc6af" Nov 23 06:57:33 crc kubenswrapper[4681]: E1123 06:57:33.344597 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:8e43c662a6abf8c9a07ada252f8dc6af,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts 
--keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6mbvs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7f47fdfb89-9n662_openstack(fc08105b-c173-411b-973a-02b4d771b928): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 06:57:33 crc kubenswrapper[4681]: E1123 06:57:33.345686 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-7f47fdfb89-9n662" podUID="fc08105b-c173-411b-973a-02b4d771b928" Nov 23 06:57:33 crc kubenswrapper[4681]: E1123 06:57:33.953496 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:8e43c662a6abf8c9a07ada252f8dc6af\\\"\"" pod="openstack/dnsmasq-dns-7f47fdfb89-9n662" podUID="fc08105b-c173-411b-973a-02b4d771b928" Nov 23 06:57:33 crc kubenswrapper[4681]: E1123 06:57:33.953671 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:8e43c662a6abf8c9a07ada252f8dc6af\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="6e2ff794-284c-406f-a815-9efec112c044" Nov 23 06:57:35 crc kubenswrapper[4681]: I1123 06:57:35.219035 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6794664cc7-668tk" Nov 23 06:57:35 crc kubenswrapper[4681]: I1123 06:57:35.229665 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bd59c769-6gc7p" Nov 23 06:57:35 crc kubenswrapper[4681]: I1123 06:57:35.238920 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-759c6cc4df-dzqg6" Nov 23 06:57:35 crc kubenswrapper[4681]: I1123 06:57:35.289724 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8428f96f-f79d-4907-a7f2-b1a16505637d-dns-svc\") pod \"8428f96f-f79d-4907-a7f2-b1a16505637d\" (UID: \"8428f96f-f79d-4907-a7f2-b1a16505637d\") " Nov 23 06:57:35 crc kubenswrapper[4681]: I1123 06:57:35.289773 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df2f7070-559d-4a37-b85e-7596aff7007d-config\") pod \"df2f7070-559d-4a37-b85e-7596aff7007d\" (UID: \"df2f7070-559d-4a37-b85e-7596aff7007d\") " Nov 23 06:57:35 crc kubenswrapper[4681]: I1123 06:57:35.289813 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cbcfef5-7505-4087-9c1a-330a353ffdef-config\") pod \"7cbcfef5-7505-4087-9c1a-330a353ffdef\" (UID: \"7cbcfef5-7505-4087-9c1a-330a353ffdef\") " Nov 23 06:57:35 crc kubenswrapper[4681]: I1123 06:57:35.290033 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvwzr\" (UniqueName: \"kubernetes.io/projected/8428f96f-f79d-4907-a7f2-b1a16505637d-kube-api-access-vvwzr\") pod \"8428f96f-f79d-4907-a7f2-b1a16505637d\" (UID: \"8428f96f-f79d-4907-a7f2-b1a16505637d\") " Nov 23 06:57:35 crc kubenswrapper[4681]: I1123 06:57:35.290169 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7cbcfef5-7505-4087-9c1a-330a353ffdef-dns-svc\") pod \"7cbcfef5-7505-4087-9c1a-330a353ffdef\" (UID: \"7cbcfef5-7505-4087-9c1a-330a353ffdef\") " Nov 23 06:57:35 crc kubenswrapper[4681]: I1123 06:57:35.290342 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8428f96f-f79d-4907-a7f2-b1a16505637d-config\") pod \"8428f96f-f79d-4907-a7f2-b1a16505637d\" (UID: \"8428f96f-f79d-4907-a7f2-b1a16505637d\") " Nov 23 06:57:35 crc kubenswrapper[4681]: I1123 06:57:35.290389 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf2qw\" (UniqueName: \"kubernetes.io/projected/df2f7070-559d-4a37-b85e-7596aff7007d-kube-api-access-zf2qw\") pod \"df2f7070-559d-4a37-b85e-7596aff7007d\" (UID: \"df2f7070-559d-4a37-b85e-7596aff7007d\") " Nov 23 06:57:35 crc kubenswrapper[4681]: I1123 06:57:35.290495 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v94sg\" (UniqueName: \"kubernetes.io/projected/7cbcfef5-7505-4087-9c1a-330a353ffdef-kube-api-access-v94sg\") pod \"7cbcfef5-7505-4087-9c1a-330a353ffdef\" (UID: \"7cbcfef5-7505-4087-9c1a-330a353ffdef\") " Nov 23 06:57:35 crc kubenswrapper[4681]: I1123 06:57:35.292345 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cbcfef5-7505-4087-9c1a-330a353ffdef-config" (OuterVolumeSpecName: "config") pod "7cbcfef5-7505-4087-9c1a-330a353ffdef" (UID: 
"7cbcfef5-7505-4087-9c1a-330a353ffdef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:57:35 crc kubenswrapper[4681]: I1123 06:57:35.292364 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cbcfef5-7505-4087-9c1a-330a353ffdef-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7cbcfef5-7505-4087-9c1a-330a353ffdef" (UID: "7cbcfef5-7505-4087-9c1a-330a353ffdef"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:57:35 crc kubenswrapper[4681]: I1123 06:57:35.292752 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df2f7070-559d-4a37-b85e-7596aff7007d-config" (OuterVolumeSpecName: "config") pod "df2f7070-559d-4a37-b85e-7596aff7007d" (UID: "df2f7070-559d-4a37-b85e-7596aff7007d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:57:35 crc kubenswrapper[4681]: I1123 06:57:35.294299 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8428f96f-f79d-4907-a7f2-b1a16505637d-config" (OuterVolumeSpecName: "config") pod "8428f96f-f79d-4907-a7f2-b1a16505637d" (UID: "8428f96f-f79d-4907-a7f2-b1a16505637d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:57:35 crc kubenswrapper[4681]: I1123 06:57:35.294836 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8428f96f-f79d-4907-a7f2-b1a16505637d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8428f96f-f79d-4907-a7f2-b1a16505637d" (UID: "8428f96f-f79d-4907-a7f2-b1a16505637d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:57:35 crc kubenswrapper[4681]: I1123 06:57:35.299830 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cbcfef5-7505-4087-9c1a-330a353ffdef-kube-api-access-v94sg" (OuterVolumeSpecName: "kube-api-access-v94sg") pod "7cbcfef5-7505-4087-9c1a-330a353ffdef" (UID: "7cbcfef5-7505-4087-9c1a-330a353ffdef"). InnerVolumeSpecName "kube-api-access-v94sg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:57:35 crc kubenswrapper[4681]: I1123 06:57:35.300273 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8428f96f-f79d-4907-a7f2-b1a16505637d-kube-api-access-vvwzr" (OuterVolumeSpecName: "kube-api-access-vvwzr") pod "8428f96f-f79d-4907-a7f2-b1a16505637d" (UID: "8428f96f-f79d-4907-a7f2-b1a16505637d"). InnerVolumeSpecName "kube-api-access-vvwzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:57:35 crc kubenswrapper[4681]: I1123 06:57:35.311416 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df2f7070-559d-4a37-b85e-7596aff7007d-kube-api-access-zf2qw" (OuterVolumeSpecName: "kube-api-access-zf2qw") pod "df2f7070-559d-4a37-b85e-7596aff7007d" (UID: "df2f7070-559d-4a37-b85e-7596aff7007d"). InnerVolumeSpecName "kube-api-access-zf2qw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:57:35 crc kubenswrapper[4681]: I1123 06:57:35.393980 4681 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8428f96f-f79d-4907-a7f2-b1a16505637d-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:35 crc kubenswrapper[4681]: I1123 06:57:35.394010 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df2f7070-559d-4a37-b85e-7596aff7007d-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:35 crc kubenswrapper[4681]: I1123 06:57:35.394021 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cbcfef5-7505-4087-9c1a-330a353ffdef-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:35 crc kubenswrapper[4681]: I1123 06:57:35.394031 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvwzr\" (UniqueName: \"kubernetes.io/projected/8428f96f-f79d-4907-a7f2-b1a16505637d-kube-api-access-vvwzr\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:35 crc kubenswrapper[4681]: I1123 06:57:35.394041 4681 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7cbcfef5-7505-4087-9c1a-330a353ffdef-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:35 crc kubenswrapper[4681]: I1123 06:57:35.394050 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8428f96f-f79d-4907-a7f2-b1a16505637d-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:35 crc kubenswrapper[4681]: I1123 06:57:35.394059 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf2qw\" (UniqueName: \"kubernetes.io/projected/df2f7070-559d-4a37-b85e-7596aff7007d-kube-api-access-zf2qw\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:35 crc kubenswrapper[4681]: I1123 06:57:35.394071 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v94sg\" (UniqueName: \"kubernetes.io/projected/7cbcfef5-7505-4087-9c1a-330a353ffdef-kube-api-access-v94sg\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:35 crc kubenswrapper[4681]: I1123 06:57:35.421063 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-q5dl6"] Nov 23 06:57:35 crc kubenswrapper[4681]: W1123 06:57:35.673291 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43e7a8e3_e4fc_4b55_88dc_abcc98edb88a.slice/crio-f1c3c6232f1caa55c7f33cd8ce6e6a33ddca52529544d79af297d62f9572cc88 WatchSource:0}: Error finding container f1c3c6232f1caa55c7f33cd8ce6e6a33ddca52529544d79af297d62f9572cc88: Status 404 returned error can't find the container with id f1c3c6232f1caa55c7f33cd8ce6e6a33ddca52529544d79af297d62f9572cc88 Nov 23 06:57:35 crc kubenswrapper[4681]: I1123 06:57:35.973174 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6794664cc7-668tk" Nov 23 06:57:35 crc kubenswrapper[4681]: I1123 06:57:35.973167 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6794664cc7-668tk" event={"ID":"df2f7070-559d-4a37-b85e-7596aff7007d","Type":"ContainerDied","Data":"ab4100a6407044bab866504dcd0c9660527240d52e3768098e34ebda8d307ce0"} Nov 23 06:57:35 crc kubenswrapper[4681]: I1123 06:57:35.982026 4681 generic.go:334] "Generic (PLEG): container finished" podID="71234289-c188-4210-959c-41708f14cc66" containerID="eababc0403b9ea48068195f13e4a54f50bbac1012477be6c1757da52e6103a73" exitCode=0 Nov 23 06:57:35 crc kubenswrapper[4681]: I1123 06:57:35.982153 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ns6l" event={"ID":"71234289-c188-4210-959c-41708f14cc66","Type":"ContainerDied","Data":"eababc0403b9ea48068195f13e4a54f50bbac1012477be6c1757da52e6103a73"} Nov 23 06:57:36 crc kubenswrapper[4681]: I1123 06:57:36.014580 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-759c6cc4df-dzqg6" event={"ID":"7cbcfef5-7505-4087-9c1a-330a353ffdef","Type":"ContainerDied","Data":"36af7428afc1efbedda463e5ce672277e9ed17fa4319448ec4958255db307937"} Nov 23 06:57:36 crc kubenswrapper[4681]: I1123 06:57:36.014687 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-759c6cc4df-dzqg6" Nov 23 06:57:36 crc kubenswrapper[4681]: I1123 06:57:36.053995 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-q5dl6" event={"ID":"43e7a8e3-e4fc-4b55-88dc-abcc98edb88a","Type":"ContainerStarted","Data":"f1c3c6232f1caa55c7f33cd8ce6e6a33ddca52529544d79af297d62f9572cc88"} Nov 23 06:57:36 crc kubenswrapper[4681]: I1123 06:57:36.135302 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bd59c769-6gc7p" event={"ID":"8428f96f-f79d-4907-a7f2-b1a16505637d","Type":"ContainerDied","Data":"0842f139ab04e2a503879df0e1c4333b5abebc35a83bd99f3721e1de4a0d4670"} Nov 23 06:57:36 crc kubenswrapper[4681]: I1123 06:57:36.135423 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bd59c769-6gc7p" Nov 23 06:57:36 crc kubenswrapper[4681]: I1123 06:57:36.136117 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6794664cc7-668tk"] Nov 23 06:57:36 crc kubenswrapper[4681]: I1123 06:57:36.149536 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6794664cc7-668tk"] Nov 23 06:57:36 crc kubenswrapper[4681]: I1123 06:57:36.194349 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-759c6cc4df-dzqg6"] Nov 23 06:57:36 crc kubenswrapper[4681]: I1123 06:57:36.203940 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-759c6cc4df-dzqg6"] Nov 23 06:57:36 crc kubenswrapper[4681]: I1123 06:57:36.210155 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pxjhh"] Nov 23 06:57:36 crc kubenswrapper[4681]: I1123 06:57:36.272968 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bd59c769-6gc7p"] Nov 23 06:57:36 crc kubenswrapper[4681]: I1123 06:57:36.287436 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84bd59c769-6gc7p"] Nov 23 06:57:36 crc kubenswrapper[4681]: I1123 06:57:36.386859 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84d4c64565-zpxxw"] Nov 23 06:57:36 crc kubenswrapper[4681]: I1123 06:57:36.401503 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 23 06:57:37 crc kubenswrapper[4681]: I1123 06:57:37.150493 4681 generic.go:334] "Generic (PLEG): container finished" podID="9105f410-146a-49f8-9d52-7645e05430ef" containerID="08f0767d3a4ba99056371ae0190b10aa61613d14b2a74210a9301bcedd3b6f54" exitCode=0 Nov 23 06:57:37 crc kubenswrapper[4681]: I1123 06:57:37.150587 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xhmlv" event={"ID":"9105f410-146a-49f8-9d52-7645e05430ef","Type":"ContainerDied","Data":"08f0767d3a4ba99056371ae0190b10aa61613d14b2a74210a9301bcedd3b6f54"} Nov 23 06:57:37 crc kubenswrapper[4681]: I1123 06:57:37.158502 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e4a72c64-9f8e-4403-b7e6-d78132e69cec","Type":"ContainerStarted","Data":"2a6ffbe9c2e45f27d58a5d0cbd7df61b6170219e446c77a294b3407a8171fc6f"} Nov 23 06:57:37 crc kubenswrapper[4681]: I1123 06:57:37.158578 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 23 06:57:37 crc kubenswrapper[4681]: I1123 06:57:37.169766 4681 generic.go:334] "Generic (PLEG): container finished" podID="06c9baf9-fa51-4d38-a5ce-15bc36e7e610" containerID="18011c13d1e4276ea1b1341512cb3baf8f3c0fb77dfe2bbab4751401a4ee7437" exitCode=0 Nov 23 06:57:37 crc kubenswrapper[4681]: I1123 06:57:37.169870 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pxjhh" event={"ID":"06c9baf9-fa51-4d38-a5ce-15bc36e7e610","Type":"ContainerDied","Data":"18011c13d1e4276ea1b1341512cb3baf8f3c0fb77dfe2bbab4751401a4ee7437"} Nov 23 06:57:37 crc kubenswrapper[4681]: I1123 06:57:37.169962 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pxjhh" event={"ID":"06c9baf9-fa51-4d38-a5ce-15bc36e7e610","Type":"ContainerStarted","Data":"b90cd5ffefc05de3963e706c10fee9f92a94c776b4396e6ec68bbf60c4b773c3"} Nov 23 06:57:37 crc kubenswrapper[4681]: I1123 06:57:37.172899 4681 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6d962891-0f50-49d9-baac-7d9262edb968","Type":"ContainerStarted","Data":"7b5f29ab789271406b8d331194c5364886582a762108baecd050f78d4983447f"} Nov 23 06:57:37 crc kubenswrapper[4681]: I1123 06:57:37.172977 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 23 06:57:37 crc kubenswrapper[4681]: I1123 06:57:37.175022 4681 generic.go:334] "Generic (PLEG): container finished" podID="71234289-c188-4210-959c-41708f14cc66" containerID="0f6045961fd7a79a2f8b4d80df32015e8edf71b5a2e8bff24fedf33cd91d210e" exitCode=0 Nov 23 06:57:37 crc kubenswrapper[4681]: I1123 06:57:37.175100 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ns6l" event={"ID":"71234289-c188-4210-959c-41708f14cc66","Type":"ContainerDied","Data":"0f6045961fd7a79a2f8b4d80df32015e8edf71b5a2e8bff24fedf33cd91d210e"} Nov 23 06:57:37 crc kubenswrapper[4681]: I1123 06:57:37.177812 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 06:57:37 crc kubenswrapper[4681]: I1123 06:57:37.188807 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=7.224180163 podStartE2EDuration="29.188794172s" podCreationTimestamp="2025-11-23 06:57:08 +0000 UTC" firstStartedPulling="2025-11-23 06:57:09.261108756 +0000 UTC m=+766.330617993" lastFinishedPulling="2025-11-23 06:57:31.225722775 +0000 UTC m=+788.295232002" observedRunningTime="2025-11-23 06:57:37.181993931 +0000 UTC m=+794.251503168" watchObservedRunningTime="2025-11-23 06:57:37.188794172 +0000 UTC m=+794.258303409" Nov 23 06:57:37 crc kubenswrapper[4681]: I1123 06:57:37.193662 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6279869c-b09d-48ed-8cb2-0cadf229a751","Type":"ContainerStarted","Data":"c9a3b0ef68f7eb686d549f7a8a134cf3169f8beea7c71537fda38a2c931ac833"} Nov 23 06:57:37 crc kubenswrapper[4681]: I1123 06:57:37.211004 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84d4c64565-zpxxw" event={"ID":"33139b07-8e6f-45bd-b1d3-e1c16ac57d43","Type":"ContainerStarted","Data":"965b9997a893460afe45e851cd436c1b0a2a83b1d7f0b3bb5b4e8b1490891535"} Nov 23 06:57:37 crc kubenswrapper[4681]: I1123 06:57:37.225956 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"82b692a2-d830-4d67-8f4f-412ea64732f0","Type":"ContainerStarted","Data":"efdcf7253df6ee42d5c55ba9e3c86a916b9a32f902c3aa08cc4c5cadf05807f8"} Nov 23 06:57:37 crc kubenswrapper[4681]: I1123 06:57:37.238455 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n28qz" event={"ID":"cfabf028-28e8-48fa-9536-a0e02622dc92","Type":"ContainerStarted","Data":"42d7a4d1b625b63a8a4bebfd2b489cd1e7d19c7bc988ad338d79786960d8ad79"} Nov 23 06:57:37 crc kubenswrapper[4681]: I1123 06:57:37.238621 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-n28qz" Nov 23 06:57:37 crc kubenswrapper[4681]: I1123 06:57:37.242535 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"73a67420-240e-45aa-a05c-fccea0bd8bea","Type":"ContainerStarted","Data":"86352937b70d804bb8d1cd2f7620c5ae09185e3b2278c63a649bf7c08ff59528"} Nov 23 06:57:37 crc kubenswrapper[4681]: I1123 06:57:37.262488 4681 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.2070453309999998 podStartE2EDuration="31.262475515s" podCreationTimestamp="2025-11-23 06:57:06 +0000 UTC" firstStartedPulling="2025-11-23 06:57:07.669150416 +0000 UTC m=+764.738659653" lastFinishedPulling="2025-11-23 06:57:35.724580599 +0000 UTC m=+792.794089837" observedRunningTime="2025-11-23 06:57:37.255748932 +0000 UTC m=+794.325258169" watchObservedRunningTime="2025-11-23 06:57:37.262475515 +0000 UTC m=+794.331984752" Nov 23 06:57:37 crc kubenswrapper[4681]: I1123 06:57:37.272752 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cbcfef5-7505-4087-9c1a-330a353ffdef" path="/var/lib/kubelet/pods/7cbcfef5-7505-4087-9c1a-330a353ffdef/volumes" Nov 23 06:57:37 crc kubenswrapper[4681]: I1123 06:57:37.277477 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-n28qz" podStartSLOduration=6.748615098 podStartE2EDuration="26.277431166s" podCreationTimestamp="2025-11-23 06:57:11 +0000 UTC" firstStartedPulling="2025-11-23 06:57:16.293201636 +0000 UTC m=+773.362710874" lastFinishedPulling="2025-11-23 06:57:35.822017714 +0000 UTC m=+792.891526942" observedRunningTime="2025-11-23 06:57:37.274735051 +0000 UTC m=+794.344244288" watchObservedRunningTime="2025-11-23 06:57:37.277431166 +0000 UTC m=+794.346940404" Nov 23 06:57:37 crc kubenswrapper[4681]: I1123 06:57:37.293142 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8428f96f-f79d-4907-a7f2-b1a16505637d" path="/var/lib/kubelet/pods/8428f96f-f79d-4907-a7f2-b1a16505637d/volumes" Nov 23 06:57:37 crc kubenswrapper[4681]: I1123 06:57:37.294129 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df2f7070-559d-4a37-b85e-7596aff7007d" path="/var/lib/kubelet/pods/df2f7070-559d-4a37-b85e-7596aff7007d/volumes" Nov 23 06:57:38 crc kubenswrapper[4681]: I1123 06:57:38.258860 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xhmlv" event={"ID":"9105f410-146a-49f8-9d52-7645e05430ef","Type":"ContainerStarted","Data":"a66551badad13672e7d48d39057afb0218dce8bdc0b72ec0838d4a5e879a2b7a"} Nov 23 06:57:38 crc kubenswrapper[4681]: I1123 06:57:38.259256 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-xhmlv" Nov 23 06:57:38 crc kubenswrapper[4681]: I1123 06:57:38.259274 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-xhmlv" Nov 23 06:57:38 crc kubenswrapper[4681]: I1123 06:57:38.259284 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xhmlv" event={"ID":"9105f410-146a-49f8-9d52-7645e05430ef","Type":"ContainerStarted","Data":"b162500b935720a60956141cb11cf0521ee5c906200259b4391795aef00b1ac3"} Nov 23 06:57:38 crc kubenswrapper[4681]: I1123 06:57:38.263183 4681 generic.go:334] "Generic (PLEG): container finished" podID="33139b07-8e6f-45bd-b1d3-e1c16ac57d43" containerID="b75ed116e1565dd4e5907b099348f417ae9db5ac6925464ad86aefda9f2a0df0" exitCode=0 Nov 23 06:57:38 crc kubenswrapper[4681]: I1123 06:57:38.263804 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84d4c64565-zpxxw" event={"ID":"33139b07-8e6f-45bd-b1d3-e1c16ac57d43","Type":"ContainerDied","Data":"b75ed116e1565dd4e5907b099348f417ae9db5ac6925464ad86aefda9f2a0df0"} Nov 23 06:57:38 crc kubenswrapper[4681]: I1123 06:57:38.287718 4681 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/ovn-controller-ovs-xhmlv" podStartSLOduration=10.470948782 podStartE2EDuration="26.287687996s" podCreationTimestamp="2025-11-23 06:57:12 +0000 UTC" firstStartedPulling="2025-11-23 06:57:19.913201384 +0000 UTC m=+776.982710622" lastFinishedPulling="2025-11-23 06:57:35.7299406 +0000 UTC m=+792.799449836" observedRunningTime="2025-11-23 06:57:38.282403628 +0000 UTC m=+795.351912865" watchObservedRunningTime="2025-11-23 06:57:38.287687996 +0000 UTC m=+795.357197233" Nov 23 06:57:40 crc kubenswrapper[4681]: I1123 06:57:40.285351 4681 generic.go:334] "Generic (PLEG): container finished" podID="06c9baf9-fa51-4d38-a5ce-15bc36e7e610" containerID="21b04072853726a60212f452c9746f913bff32b270c52448280429400c4c55a2" exitCode=0 Nov 23 06:57:40 crc kubenswrapper[4681]: I1123 06:57:40.285452 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pxjhh" event={"ID":"06c9baf9-fa51-4d38-a5ce-15bc36e7e610","Type":"ContainerDied","Data":"21b04072853726a60212f452c9746f913bff32b270c52448280429400c4c55a2"} Nov 23 06:57:40 crc kubenswrapper[4681]: I1123 06:57:40.290935 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ns6l" event={"ID":"71234289-c188-4210-959c-41708f14cc66","Type":"ContainerStarted","Data":"537433c72bbbd44217b9899e24938bf175fa1d4cada3ddfe20f271b36eba6df1"} Nov 23 06:57:40 crc kubenswrapper[4681]: I1123 06:57:40.293950 4681 generic.go:334] "Generic (PLEG): container finished" podID="82b692a2-d830-4d67-8f4f-412ea64732f0" containerID="efdcf7253df6ee42d5c55ba9e3c86a916b9a32f902c3aa08cc4c5cadf05807f8" exitCode=0 Nov 23 06:57:40 crc kubenswrapper[4681]: I1123 06:57:40.294050 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"82b692a2-d830-4d67-8f4f-412ea64732f0","Type":"ContainerDied","Data":"efdcf7253df6ee42d5c55ba9e3c86a916b9a32f902c3aa08cc4c5cadf05807f8"} Nov 23 06:57:40 crc kubenswrapper[4681]: I1123 06:57:40.298478 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6279869c-b09d-48ed-8cb2-0cadf229a751","Type":"ContainerStarted","Data":"58bc92b76532b8a83026d1168ad5600e9d370ec3a0f79ca47140856689c630f4"} Nov 23 06:57:40 crc kubenswrapper[4681]: I1123 06:57:40.306626 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-q5dl6" event={"ID":"43e7a8e3-e4fc-4b55-88dc-abcc98edb88a","Type":"ContainerStarted","Data":"a31afeb91285fac96f26acaded79a0d163bab5e685f75b0631a1f1fad79b1159"} Nov 23 06:57:40 crc kubenswrapper[4681]: I1123 06:57:40.317159 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"73a67420-240e-45aa-a05c-fccea0bd8bea","Type":"ContainerStarted","Data":"4b79286b30a2595393d349b2790e850e0014473874d5f216002a6c442cd65e01"} Nov 23 06:57:40 crc kubenswrapper[4681]: I1123 06:57:40.317202 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"73a67420-240e-45aa-a05c-fccea0bd8bea","Type":"ContainerStarted","Data":"7d3930999d1727c906686c46d56f4189d2043eb6b8cf65f2eaf4e5d3a9b62b48"} Nov 23 06:57:40 crc kubenswrapper[4681]: I1123 06:57:40.327324 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84d4c64565-zpxxw" event={"ID":"33139b07-8e6f-45bd-b1d3-e1c16ac57d43","Type":"ContainerStarted","Data":"16ce953120ecbca9ab6ed03451f6b3c4b1e53109aa857b1e353c252bb15d9d0f"} Nov 23 06:57:40 crc 
kubenswrapper[4681]: I1123 06:57:40.328206 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84d4c64565-zpxxw" Nov 23 06:57:40 crc kubenswrapper[4681]: I1123 06:57:40.354260 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-q5dl6" podStartSLOduration=11.441247978 podStartE2EDuration="15.354239377s" podCreationTimestamp="2025-11-23 06:57:25 +0000 UTC" firstStartedPulling="2025-11-23 06:57:35.704496101 +0000 UTC m=+792.774005338" lastFinishedPulling="2025-11-23 06:57:39.617487501 +0000 UTC m=+796.686996737" observedRunningTime="2025-11-23 06:57:40.352603519 +0000 UTC m=+797.422112756" watchObservedRunningTime="2025-11-23 06:57:40.354239377 +0000 UTC m=+797.423748614" Nov 23 06:57:40 crc kubenswrapper[4681]: I1123 06:57:40.380858 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9ns6l" podStartSLOduration=27.017913751 podStartE2EDuration="30.380842919s" podCreationTimestamp="2025-11-23 06:57:10 +0000 UTC" firstStartedPulling="2025-11-23 06:57:35.987041095 +0000 UTC m=+793.056550332" lastFinishedPulling="2025-11-23 06:57:39.349970264 +0000 UTC m=+796.419479500" observedRunningTime="2025-11-23 06:57:40.37561161 +0000 UTC m=+797.445120847" watchObservedRunningTime="2025-11-23 06:57:40.380842919 +0000 UTC m=+797.450352156" Nov 23 06:57:40 crc kubenswrapper[4681]: I1123 06:57:40.442826 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=23.290027626 podStartE2EDuration="26.442796763s" podCreationTimestamp="2025-11-23 06:57:14 +0000 UTC" firstStartedPulling="2025-11-23 06:57:36.410587218 +0000 UTC m=+793.480096454" lastFinishedPulling="2025-11-23 06:57:39.563356354 +0000 UTC m=+796.632865591" observedRunningTime="2025-11-23 06:57:40.434240169 +0000 UTC m=+797.503749407" watchObservedRunningTime="2025-11-23 06:57:40.442796763 +0000 UTC m=+797.512305999" Nov 23 06:57:40 crc kubenswrapper[4681]: I1123 06:57:40.454602 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=10.300572302 podStartE2EDuration="29.454578424s" podCreationTimestamp="2025-11-23 06:57:11 +0000 UTC" firstStartedPulling="2025-11-23 06:57:20.504653477 +0000 UTC m=+777.574162714" lastFinishedPulling="2025-11-23 06:57:39.658659598 +0000 UTC m=+796.728168836" observedRunningTime="2025-11-23 06:57:40.412075149 +0000 UTC m=+797.481584387" watchObservedRunningTime="2025-11-23 06:57:40.454578424 +0000 UTC m=+797.524087661" Nov 23 06:57:40 crc kubenswrapper[4681]: I1123 06:57:40.458169 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84d4c64565-zpxxw" podStartSLOduration=14.784008239 podStartE2EDuration="15.458157574s" podCreationTimestamp="2025-11-23 06:57:25 +0000 UTC" firstStartedPulling="2025-11-23 06:57:36.385641424 +0000 UTC m=+793.455150661" lastFinishedPulling="2025-11-23 06:57:37.05979076 +0000 UTC m=+794.129299996" observedRunningTime="2025-11-23 06:57:40.451772822 +0000 UTC m=+797.521282060" watchObservedRunningTime="2025-11-23 06:57:40.458157574 +0000 UTC m=+797.527666811" Nov 23 06:57:40 crc kubenswrapper[4681]: I1123 06:57:40.811963 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f47fdfb89-9n662"] Nov 23 06:57:40 crc kubenswrapper[4681]: I1123 06:57:40.877636 4681 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-5b8455895f-75wk5"] Nov 23 06:57:40 crc kubenswrapper[4681]: I1123 06:57:40.879134 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b8455895f-75wk5" Nov 23 06:57:40 crc kubenswrapper[4681]: I1123 06:57:40.882638 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 23 06:57:40 crc kubenswrapper[4681]: I1123 06:57:40.901572 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b8455895f-75wk5"] Nov 23 06:57:41 crc kubenswrapper[4681]: I1123 06:57:41.056907 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d0735b8-78b7-4885-b636-728b94aa282f-dns-svc\") pod \"dnsmasq-dns-5b8455895f-75wk5\" (UID: \"2d0735b8-78b7-4885-b636-728b94aa282f\") " pod="openstack/dnsmasq-dns-5b8455895f-75wk5" Nov 23 06:57:41 crc kubenswrapper[4681]: I1123 06:57:41.057039 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d0735b8-78b7-4885-b636-728b94aa282f-ovsdbserver-sb\") pod \"dnsmasq-dns-5b8455895f-75wk5\" (UID: \"2d0735b8-78b7-4885-b636-728b94aa282f\") " pod="openstack/dnsmasq-dns-5b8455895f-75wk5" Nov 23 06:57:41 crc kubenswrapper[4681]: I1123 06:57:41.057113 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d0735b8-78b7-4885-b636-728b94aa282f-ovsdbserver-nb\") pod \"dnsmasq-dns-5b8455895f-75wk5\" (UID: \"2d0735b8-78b7-4885-b636-728b94aa282f\") " pod="openstack/dnsmasq-dns-5b8455895f-75wk5" Nov 23 06:57:41 crc kubenswrapper[4681]: I1123 06:57:41.057232 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d0735b8-78b7-4885-b636-728b94aa282f-config\") pod \"dnsmasq-dns-5b8455895f-75wk5\" (UID: \"2d0735b8-78b7-4885-b636-728b94aa282f\") " pod="openstack/dnsmasq-dns-5b8455895f-75wk5" Nov 23 06:57:41 crc kubenswrapper[4681]: I1123 06:57:41.057284 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5bfs\" (UniqueName: \"kubernetes.io/projected/2d0735b8-78b7-4885-b636-728b94aa282f-kube-api-access-d5bfs\") pod \"dnsmasq-dns-5b8455895f-75wk5\" (UID: \"2d0735b8-78b7-4885-b636-728b94aa282f\") " pod="openstack/dnsmasq-dns-5b8455895f-75wk5" Nov 23 06:57:41 crc kubenswrapper[4681]: I1123 06:57:41.160285 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d0735b8-78b7-4885-b636-728b94aa282f-dns-svc\") pod \"dnsmasq-dns-5b8455895f-75wk5\" (UID: \"2d0735b8-78b7-4885-b636-728b94aa282f\") " pod="openstack/dnsmasq-dns-5b8455895f-75wk5" Nov 23 06:57:41 crc kubenswrapper[4681]: I1123 06:57:41.160425 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d0735b8-78b7-4885-b636-728b94aa282f-ovsdbserver-sb\") pod \"dnsmasq-dns-5b8455895f-75wk5\" (UID: \"2d0735b8-78b7-4885-b636-728b94aa282f\") " pod="openstack/dnsmasq-dns-5b8455895f-75wk5" Nov 23 06:57:41 crc kubenswrapper[4681]: I1123 06:57:41.167766 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/2d0735b8-78b7-4885-b636-728b94aa282f-dns-svc\") pod \"dnsmasq-dns-5b8455895f-75wk5\" (UID: \"2d0735b8-78b7-4885-b636-728b94aa282f\") " pod="openstack/dnsmasq-dns-5b8455895f-75wk5" Nov 23 06:57:41 crc kubenswrapper[4681]: I1123 06:57:41.167895 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d0735b8-78b7-4885-b636-728b94aa282f-ovsdbserver-nb\") pod \"dnsmasq-dns-5b8455895f-75wk5\" (UID: \"2d0735b8-78b7-4885-b636-728b94aa282f\") " pod="openstack/dnsmasq-dns-5b8455895f-75wk5" Nov 23 06:57:41 crc kubenswrapper[4681]: I1123 06:57:41.168119 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d0735b8-78b7-4885-b636-728b94aa282f-config\") pod \"dnsmasq-dns-5b8455895f-75wk5\" (UID: \"2d0735b8-78b7-4885-b636-728b94aa282f\") " pod="openstack/dnsmasq-dns-5b8455895f-75wk5" Nov 23 06:57:41 crc kubenswrapper[4681]: I1123 06:57:41.168182 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5bfs\" (UniqueName: \"kubernetes.io/projected/2d0735b8-78b7-4885-b636-728b94aa282f-kube-api-access-d5bfs\") pod \"dnsmasq-dns-5b8455895f-75wk5\" (UID: \"2d0735b8-78b7-4885-b636-728b94aa282f\") " pod="openstack/dnsmasq-dns-5b8455895f-75wk5" Nov 23 06:57:41 crc kubenswrapper[4681]: I1123 06:57:41.168712 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d0735b8-78b7-4885-b636-728b94aa282f-ovsdbserver-sb\") pod \"dnsmasq-dns-5b8455895f-75wk5\" (UID: \"2d0735b8-78b7-4885-b636-728b94aa282f\") " pod="openstack/dnsmasq-dns-5b8455895f-75wk5" Nov 23 06:57:41 crc kubenswrapper[4681]: I1123 06:57:41.169259 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d0735b8-78b7-4885-b636-728b94aa282f-ovsdbserver-nb\") pod \"dnsmasq-dns-5b8455895f-75wk5\" (UID: \"2d0735b8-78b7-4885-b636-728b94aa282f\") " pod="openstack/dnsmasq-dns-5b8455895f-75wk5" Nov 23 06:57:41 crc kubenswrapper[4681]: I1123 06:57:41.169349 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 23 06:57:41 crc kubenswrapper[4681]: I1123 06:57:41.169521 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d0735b8-78b7-4885-b636-728b94aa282f-config\") pod \"dnsmasq-dns-5b8455895f-75wk5\" (UID: \"2d0735b8-78b7-4885-b636-728b94aa282f\") " pod="openstack/dnsmasq-dns-5b8455895f-75wk5" Nov 23 06:57:41 crc kubenswrapper[4681]: I1123 06:57:41.197615 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5bfs\" (UniqueName: \"kubernetes.io/projected/2d0735b8-78b7-4885-b636-728b94aa282f-kube-api-access-d5bfs\") pod \"dnsmasq-dns-5b8455895f-75wk5\" (UID: \"2d0735b8-78b7-4885-b636-728b94aa282f\") " pod="openstack/dnsmasq-dns-5b8455895f-75wk5" Nov 23 06:57:41 crc kubenswrapper[4681]: I1123 06:57:41.224108 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b8455895f-75wk5" Nov 23 06:57:41 crc kubenswrapper[4681]: I1123 06:57:41.346928 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9ns6l" Nov 23 06:57:41 crc kubenswrapper[4681]: I1123 06:57:41.348650 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9ns6l" Nov 23 06:57:41 crc kubenswrapper[4681]: I1123 06:57:41.376717 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"82b692a2-d830-4d67-8f4f-412ea64732f0","Type":"ContainerStarted","Data":"48417ce4f8ef9fc06ad75cf396c2a205161e696ddaeac4f772a362da43b689a6"} Nov 23 06:57:41 crc kubenswrapper[4681]: I1123 06:57:41.384571 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pxjhh" event={"ID":"06c9baf9-fa51-4d38-a5ce-15bc36e7e610","Type":"ContainerStarted","Data":"d109bda7f044356e20f8c3e499e0bd87436935f903d1824a3a2e28daa402ae1a"} Nov 23 06:57:41 crc kubenswrapper[4681]: I1123 06:57:41.425779 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=8.125972941 podStartE2EDuration="36.425762952s" podCreationTimestamp="2025-11-23 06:57:05 +0000 UTC" firstStartedPulling="2025-11-23 06:57:07.476083613 +0000 UTC m=+764.545592850" lastFinishedPulling="2025-11-23 06:57:35.775873624 +0000 UTC m=+792.845382861" observedRunningTime="2025-11-23 06:57:41.422617965 +0000 UTC m=+798.492127202" watchObservedRunningTime="2025-11-23 06:57:41.425762952 +0000 UTC m=+798.495272188" Nov 23 06:57:41 crc kubenswrapper[4681]: I1123 06:57:41.433612 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f47fdfb89-9n662" Nov 23 06:57:41 crc kubenswrapper[4681]: I1123 06:57:41.470599 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pxjhh" podStartSLOduration=8.848985877 podStartE2EDuration="12.470583801s" podCreationTimestamp="2025-11-23 06:57:29 +0000 UTC" firstStartedPulling="2025-11-23 06:57:37.177569846 +0000 UTC m=+794.247079084" lastFinishedPulling="2025-11-23 06:57:40.799167771 +0000 UTC m=+797.868677008" observedRunningTime="2025-11-23 06:57:41.452726168 +0000 UTC m=+798.522235406" watchObservedRunningTime="2025-11-23 06:57:41.470583801 +0000 UTC m=+798.540093039" Nov 23 06:57:41 crc kubenswrapper[4681]: I1123 06:57:41.580311 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc08105b-c173-411b-973a-02b4d771b928-dns-svc\") pod \"fc08105b-c173-411b-973a-02b4d771b928\" (UID: \"fc08105b-c173-411b-973a-02b4d771b928\") " Nov 23 06:57:41 crc kubenswrapper[4681]: I1123 06:57:41.580589 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mbvs\" (UniqueName: \"kubernetes.io/projected/fc08105b-c173-411b-973a-02b4d771b928-kube-api-access-6mbvs\") pod \"fc08105b-c173-411b-973a-02b4d771b928\" (UID: \"fc08105b-c173-411b-973a-02b4d771b928\") " Nov 23 06:57:41 crc kubenswrapper[4681]: I1123 06:57:41.580690 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc08105b-c173-411b-973a-02b4d771b928-config\") pod \"fc08105b-c173-411b-973a-02b4d771b928\" (UID: \"fc08105b-c173-411b-973a-02b4d771b928\") " Nov 23 06:57:41 crc kubenswrapper[4681]: I1123 06:57:41.580940 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc08105b-c173-411b-973a-02b4d771b928-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fc08105b-c173-411b-973a-02b4d771b928" (UID: "fc08105b-c173-411b-973a-02b4d771b928"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:57:41 crc kubenswrapper[4681]: I1123 06:57:41.581281 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc08105b-c173-411b-973a-02b4d771b928-config" (OuterVolumeSpecName: "config") pod "fc08105b-c173-411b-973a-02b4d771b928" (UID: "fc08105b-c173-411b-973a-02b4d771b928"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:57:41 crc kubenswrapper[4681]: I1123 06:57:41.585993 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc08105b-c173-411b-973a-02b4d771b928-kube-api-access-6mbvs" (OuterVolumeSpecName: "kube-api-access-6mbvs") pod "fc08105b-c173-411b-973a-02b4d771b928" (UID: "fc08105b-c173-411b-973a-02b4d771b928"). InnerVolumeSpecName "kube-api-access-6mbvs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:57:41 crc kubenswrapper[4681]: I1123 06:57:41.682852 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc08105b-c173-411b-973a-02b4d771b928-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:41 crc kubenswrapper[4681]: I1123 06:57:41.682881 4681 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc08105b-c173-411b-973a-02b4d771b928-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:41 crc kubenswrapper[4681]: I1123 06:57:41.682893 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mbvs\" (UniqueName: \"kubernetes.io/projected/fc08105b-c173-411b-973a-02b4d771b928-kube-api-access-6mbvs\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:41 crc kubenswrapper[4681]: I1123 06:57:41.931659 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b8455895f-75wk5"] Nov 23 06:57:41 crc kubenswrapper[4681]: I1123 06:57:41.978702 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 23 06:57:42 crc kubenswrapper[4681]: I1123 06:57:42.394563 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"060e8340-b39a-4aec-9d9a-e6b8dc616c8b","Type":"ContainerStarted","Data":"1f734d4d7de1d073a5eefd531219a26323f7642289a9d2d5abba6c19052d35c1"} Nov 23 06:57:42 crc kubenswrapper[4681]: I1123 06:57:42.396769 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f47fdfb89-9n662" event={"ID":"fc08105b-c173-411b-973a-02b4d771b928","Type":"ContainerDied","Data":"6f0fed82cc0e9e3bf19ba22fc6adbafb7b0fd4765bd40b53371241a03c15a3bf"} Nov 23 06:57:42 crc kubenswrapper[4681]: I1123 06:57:42.396781 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f47fdfb89-9n662" Nov 23 06:57:42 crc kubenswrapper[4681]: I1123 06:57:42.398272 4681 generic.go:334] "Generic (PLEG): container finished" podID="2d0735b8-78b7-4885-b636-728b94aa282f" containerID="a7b4cda55d9f409dbf15c535735ebeb55adfa91b120aac3a66b7d41d45c8089d" exitCode=0 Nov 23 06:57:42 crc kubenswrapper[4681]: I1123 06:57:42.398451 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b8455895f-75wk5" event={"ID":"2d0735b8-78b7-4885-b636-728b94aa282f","Type":"ContainerDied","Data":"a7b4cda55d9f409dbf15c535735ebeb55adfa91b120aac3a66b7d41d45c8089d"} Nov 23 06:57:42 crc kubenswrapper[4681]: I1123 06:57:42.398554 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b8455895f-75wk5" event={"ID":"2d0735b8-78b7-4885-b636-728b94aa282f","Type":"ContainerStarted","Data":"a09dc911ce7526d263b5bbd822ed2ebec53a6e9e988e491de297021ea23e2f86"} Nov 23 06:57:42 crc kubenswrapper[4681]: I1123 06:57:42.439555 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-9ns6l" podUID="71234289-c188-4210-959c-41708f14cc66" containerName="registry-server" probeResult="failure" output=< Nov 23 06:57:42 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Nov 23 06:57:42 crc kubenswrapper[4681]: > Nov 23 06:57:42 crc kubenswrapper[4681]: I1123 06:57:42.515944 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f47fdfb89-9n662"] Nov 23 06:57:42 crc kubenswrapper[4681]: I1123 06:57:42.530852 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f47fdfb89-9n662"] Nov 23 06:57:43 crc kubenswrapper[4681]: I1123 06:57:43.165588 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 23 06:57:43 crc kubenswrapper[4681]: I1123 06:57:43.202982 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 23 06:57:43 crc kubenswrapper[4681]: I1123 06:57:43.260968 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc08105b-c173-411b-973a-02b4d771b928" path="/var/lib/kubelet/pods/fc08105b-c173-411b-973a-02b4d771b928/volumes" Nov 23 06:57:43 crc kubenswrapper[4681]: I1123 06:57:43.267707 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 23 06:57:43 crc kubenswrapper[4681]: I1123 06:57:43.268662 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 23 06:57:43 crc kubenswrapper[4681]: I1123 06:57:43.299011 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 23 06:57:43 crc kubenswrapper[4681]: I1123 06:57:43.432744 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b8455895f-75wk5" event={"ID":"2d0735b8-78b7-4885-b636-728b94aa282f","Type":"ContainerStarted","Data":"5b3a72a02f157293febe0b9bcf7cdb9cbaf814b0fc03d47b239b595dc5ce028b"} Nov 23 06:57:43 crc kubenswrapper[4681]: I1123 06:57:43.451359 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b8455895f-75wk5" podStartSLOduration=3.451344744 podStartE2EDuration="3.451344744s" podCreationTimestamp="2025-11-23 06:57:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 
06:57:43.44942304 +0000 UTC m=+800.518932278" watchObservedRunningTime="2025-11-23 06:57:43.451344744 +0000 UTC m=+800.520853981" Nov 23 06:57:43 crc kubenswrapper[4681]: I1123 06:57:43.475501 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 23 06:57:43 crc kubenswrapper[4681]: I1123 06:57:43.535628 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bcv8z"] Nov 23 06:57:43 crc kubenswrapper[4681]: I1123 06:57:43.537379 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bcv8z" Nov 23 06:57:43 crc kubenswrapper[4681]: I1123 06:57:43.556534 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bcv8z"] Nov 23 06:57:43 crc kubenswrapper[4681]: I1123 06:57:43.741944 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjkwl\" (UniqueName: \"kubernetes.io/projected/e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1-kube-api-access-zjkwl\") pod \"community-operators-bcv8z\" (UID: \"e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1\") " pod="openshift-marketplace/community-operators-bcv8z" Nov 23 06:57:43 crc kubenswrapper[4681]: I1123 06:57:43.741996 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1-utilities\") pod \"community-operators-bcv8z\" (UID: \"e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1\") " pod="openshift-marketplace/community-operators-bcv8z" Nov 23 06:57:43 crc kubenswrapper[4681]: I1123 06:57:43.742942 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1-catalog-content\") pod \"community-operators-bcv8z\" (UID: \"e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1\") " pod="openshift-marketplace/community-operators-bcv8z" Nov 23 06:57:43 crc kubenswrapper[4681]: I1123 06:57:43.844585 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjkwl\" (UniqueName: \"kubernetes.io/projected/e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1-kube-api-access-zjkwl\") pod \"community-operators-bcv8z\" (UID: \"e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1\") " pod="openshift-marketplace/community-operators-bcv8z" Nov 23 06:57:43 crc kubenswrapper[4681]: I1123 06:57:43.844626 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1-utilities\") pod \"community-operators-bcv8z\" (UID: \"e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1\") " pod="openshift-marketplace/community-operators-bcv8z" Nov 23 06:57:43 crc kubenswrapper[4681]: I1123 06:57:43.844707 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1-catalog-content\") pod \"community-operators-bcv8z\" (UID: \"e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1\") " pod="openshift-marketplace/community-operators-bcv8z" Nov 23 06:57:43 crc kubenswrapper[4681]: I1123 06:57:43.845275 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1-catalog-content\") pod 
\"community-operators-bcv8z\" (UID: \"e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1\") " pod="openshift-marketplace/community-operators-bcv8z" Nov 23 06:57:43 crc kubenswrapper[4681]: I1123 06:57:43.846323 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1-utilities\") pod \"community-operators-bcv8z\" (UID: \"e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1\") " pod="openshift-marketplace/community-operators-bcv8z" Nov 23 06:57:43 crc kubenswrapper[4681]: I1123 06:57:43.879344 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjkwl\" (UniqueName: \"kubernetes.io/projected/e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1-kube-api-access-zjkwl\") pod \"community-operators-bcv8z\" (UID: \"e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1\") " pod="openshift-marketplace/community-operators-bcv8z" Nov 23 06:57:44 crc kubenswrapper[4681]: I1123 06:57:44.159556 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bcv8z" Nov 23 06:57:44 crc kubenswrapper[4681]: I1123 06:57:44.445609 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b8455895f-75wk5" Nov 23 06:57:44 crc kubenswrapper[4681]: I1123 06:57:44.481010 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 23 06:57:44 crc kubenswrapper[4681]: I1123 06:57:44.609371 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 23 06:57:44 crc kubenswrapper[4681]: I1123 06:57:44.610841 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 23 06:57:44 crc kubenswrapper[4681]: I1123 06:57:44.614948 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 23 06:57:44 crc kubenswrapper[4681]: I1123 06:57:44.615033 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 23 06:57:44 crc kubenswrapper[4681]: I1123 06:57:44.615128 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 23 06:57:44 crc kubenswrapper[4681]: I1123 06:57:44.615036 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-2h5f7" Nov 23 06:57:44 crc kubenswrapper[4681]: I1123 06:57:44.621047 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 23 06:57:44 crc kubenswrapper[4681]: I1123 06:57:44.653164 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bcv8z"] Nov 23 06:57:44 crc kubenswrapper[4681]: I1123 06:57:44.776346 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7fcc162-370b-4e84-9b15-b3632c7c2fdb-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d7fcc162-370b-4e84-9b15-b3632c7c2fdb\") " pod="openstack/ovn-northd-0" Nov 23 06:57:44 crc kubenswrapper[4681]: I1123 06:57:44.776802 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d7fcc162-370b-4e84-9b15-b3632c7c2fdb-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d7fcc162-370b-4e84-9b15-b3632c7c2fdb\") " pod="openstack/ovn-northd-0" Nov 23 06:57:44 crc 
kubenswrapper[4681]: I1123 06:57:44.776882 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7fcc162-370b-4e84-9b15-b3632c7c2fdb-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d7fcc162-370b-4e84-9b15-b3632c7c2fdb\") " pod="openstack/ovn-northd-0" Nov 23 06:57:44 crc kubenswrapper[4681]: I1123 06:57:44.777050 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7fcc162-370b-4e84-9b15-b3632c7c2fdb-scripts\") pod \"ovn-northd-0\" (UID: \"d7fcc162-370b-4e84-9b15-b3632c7c2fdb\") " pod="openstack/ovn-northd-0" Nov 23 06:57:44 crc kubenswrapper[4681]: I1123 06:57:44.777075 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7fcc162-370b-4e84-9b15-b3632c7c2fdb-config\") pod \"ovn-northd-0\" (UID: \"d7fcc162-370b-4e84-9b15-b3632c7c2fdb\") " pod="openstack/ovn-northd-0" Nov 23 06:57:44 crc kubenswrapper[4681]: I1123 06:57:44.777170 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7fcc162-370b-4e84-9b15-b3632c7c2fdb-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d7fcc162-370b-4e84-9b15-b3632c7c2fdb\") " pod="openstack/ovn-northd-0" Nov 23 06:57:44 crc kubenswrapper[4681]: I1123 06:57:44.777293 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gndkx\" (UniqueName: \"kubernetes.io/projected/d7fcc162-370b-4e84-9b15-b3632c7c2fdb-kube-api-access-gndkx\") pod \"ovn-northd-0\" (UID: \"d7fcc162-370b-4e84-9b15-b3632c7c2fdb\") " pod="openstack/ovn-northd-0" Nov 23 06:57:44 crc kubenswrapper[4681]: I1123 06:57:44.879490 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7fcc162-370b-4e84-9b15-b3632c7c2fdb-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d7fcc162-370b-4e84-9b15-b3632c7c2fdb\") " pod="openstack/ovn-northd-0" Nov 23 06:57:44 crc kubenswrapper[4681]: I1123 06:57:44.879597 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d7fcc162-370b-4e84-9b15-b3632c7c2fdb-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d7fcc162-370b-4e84-9b15-b3632c7c2fdb\") " pod="openstack/ovn-northd-0" Nov 23 06:57:44 crc kubenswrapper[4681]: I1123 06:57:44.879677 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7fcc162-370b-4e84-9b15-b3632c7c2fdb-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d7fcc162-370b-4e84-9b15-b3632c7c2fdb\") " pod="openstack/ovn-northd-0" Nov 23 06:57:44 crc kubenswrapper[4681]: I1123 06:57:44.879814 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7fcc162-370b-4e84-9b15-b3632c7c2fdb-scripts\") pod \"ovn-northd-0\" (UID: \"d7fcc162-370b-4e84-9b15-b3632c7c2fdb\") " pod="openstack/ovn-northd-0" Nov 23 06:57:44 crc kubenswrapper[4681]: I1123 06:57:44.879840 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7fcc162-370b-4e84-9b15-b3632c7c2fdb-config\") pod \"ovn-northd-0\" 
(UID: \"d7fcc162-370b-4e84-9b15-b3632c7c2fdb\") " pod="openstack/ovn-northd-0" Nov 23 06:57:44 crc kubenswrapper[4681]: I1123 06:57:44.879893 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7fcc162-370b-4e84-9b15-b3632c7c2fdb-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d7fcc162-370b-4e84-9b15-b3632c7c2fdb\") " pod="openstack/ovn-northd-0" Nov 23 06:57:44 crc kubenswrapper[4681]: I1123 06:57:44.880004 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gndkx\" (UniqueName: \"kubernetes.io/projected/d7fcc162-370b-4e84-9b15-b3632c7c2fdb-kube-api-access-gndkx\") pod \"ovn-northd-0\" (UID: \"d7fcc162-370b-4e84-9b15-b3632c7c2fdb\") " pod="openstack/ovn-northd-0" Nov 23 06:57:44 crc kubenswrapper[4681]: I1123 06:57:44.880580 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d7fcc162-370b-4e84-9b15-b3632c7c2fdb-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d7fcc162-370b-4e84-9b15-b3632c7c2fdb\") " pod="openstack/ovn-northd-0" Nov 23 06:57:44 crc kubenswrapper[4681]: I1123 06:57:44.881338 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7fcc162-370b-4e84-9b15-b3632c7c2fdb-scripts\") pod \"ovn-northd-0\" (UID: \"d7fcc162-370b-4e84-9b15-b3632c7c2fdb\") " pod="openstack/ovn-northd-0" Nov 23 06:57:44 crc kubenswrapper[4681]: I1123 06:57:44.881355 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7fcc162-370b-4e84-9b15-b3632c7c2fdb-config\") pod \"ovn-northd-0\" (UID: \"d7fcc162-370b-4e84-9b15-b3632c7c2fdb\") " pod="openstack/ovn-northd-0" Nov 23 06:57:44 crc kubenswrapper[4681]: I1123 06:57:44.894751 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7fcc162-370b-4e84-9b15-b3632c7c2fdb-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d7fcc162-370b-4e84-9b15-b3632c7c2fdb\") " pod="openstack/ovn-northd-0" Nov 23 06:57:44 crc kubenswrapper[4681]: I1123 06:57:44.897723 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gndkx\" (UniqueName: \"kubernetes.io/projected/d7fcc162-370b-4e84-9b15-b3632c7c2fdb-kube-api-access-gndkx\") pod \"ovn-northd-0\" (UID: \"d7fcc162-370b-4e84-9b15-b3632c7c2fdb\") " pod="openstack/ovn-northd-0" Nov 23 06:57:44 crc kubenswrapper[4681]: I1123 06:57:44.899535 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7fcc162-370b-4e84-9b15-b3632c7c2fdb-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d7fcc162-370b-4e84-9b15-b3632c7c2fdb\") " pod="openstack/ovn-northd-0" Nov 23 06:57:44 crc kubenswrapper[4681]: I1123 06:57:44.902875 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7fcc162-370b-4e84-9b15-b3632c7c2fdb-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d7fcc162-370b-4e84-9b15-b3632c7c2fdb\") " pod="openstack/ovn-northd-0" Nov 23 06:57:44 crc kubenswrapper[4681]: I1123 06:57:44.929786 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 23 06:57:45 crc kubenswrapper[4681]: I1123 06:57:45.361763 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 23 06:57:45 crc kubenswrapper[4681]: I1123 06:57:45.452357 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d7fcc162-370b-4e84-9b15-b3632c7c2fdb","Type":"ContainerStarted","Data":"635f23eee9f557d8553bc3f5c1a9a310f13d0bd68db06bddce8e40ee410b6800"} Nov 23 06:57:45 crc kubenswrapper[4681]: I1123 06:57:45.454718 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bcv8z" event={"ID":"e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1","Type":"ContainerStarted","Data":"1957a0b6cb2c0bfa21bfb90abffe1d48af646b92a6c24e89428c87451674c32b"} Nov 23 06:57:45 crc kubenswrapper[4681]: I1123 06:57:45.833816 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84d4c64565-zpxxw" Nov 23 06:57:46 crc kubenswrapper[4681]: I1123 06:57:46.500354 4681 generic.go:334] "Generic (PLEG): container finished" podID="060e8340-b39a-4aec-9d9a-e6b8dc616c8b" containerID="1f734d4d7de1d073a5eefd531219a26323f7642289a9d2d5abba6c19052d35c1" exitCode=0 Nov 23 06:57:46 crc kubenswrapper[4681]: I1123 06:57:46.500554 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"060e8340-b39a-4aec-9d9a-e6b8dc616c8b","Type":"ContainerDied","Data":"1f734d4d7de1d073a5eefd531219a26323f7642289a9d2d5abba6c19052d35c1"} Nov 23 06:57:46 crc kubenswrapper[4681]: I1123 06:57:46.648527 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 23 06:57:46 crc kubenswrapper[4681]: I1123 06:57:46.649605 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 23 06:57:46 crc kubenswrapper[4681]: I1123 06:57:46.813556 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 23 06:57:47 crc kubenswrapper[4681]: I1123 06:57:47.514741 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"060e8340-b39a-4aec-9d9a-e6b8dc616c8b","Type":"ContainerStarted","Data":"cbf8d30b7cc2f1ca4f0fe0f515d509e6d525fb04b5efa5f277ec3b5f658476de"} Nov 23 06:57:47 crc kubenswrapper[4681]: I1123 06:57:47.518289 4681 generic.go:334] "Generic (PLEG): container finished" podID="e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1" containerID="aa15dfbaf721e2428c997f1833761d7bb288aeba5587596e8f63cd65aa44fcdf" exitCode=0 Nov 23 06:57:47 crc kubenswrapper[4681]: I1123 06:57:47.518641 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bcv8z" event={"ID":"e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1","Type":"ContainerDied","Data":"aa15dfbaf721e2428c997f1833761d7bb288aeba5587596e8f63cd65aa44fcdf"} Nov 23 06:57:47 crc kubenswrapper[4681]: I1123 06:57:47.537710 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371992.31709 podStartE2EDuration="44.537687329s" podCreationTimestamp="2025-11-23 06:57:03 +0000 UTC" firstStartedPulling="2025-11-23 06:57:05.870327333 +0000 UTC m=+762.939836569" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:57:47.536787542 +0000 UTC m=+804.606296779" watchObservedRunningTime="2025-11-23 06:57:47.537687329 +0000 UTC 
m=+804.607196566" Nov 23 06:57:47 crc kubenswrapper[4681]: I1123 06:57:47.667724 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 23 06:57:48 crc kubenswrapper[4681]: I1123 06:57:48.529978 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6e2ff794-284c-406f-a815-9efec112c044","Type":"ContainerStarted","Data":"64896ff51779c881bb9362fcb20885bfa0830579b3c1525ff8fb8d8cb254da13"} Nov 23 06:57:48 crc kubenswrapper[4681]: I1123 06:57:48.533279 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bcv8z" event={"ID":"e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1","Type":"ContainerStarted","Data":"dde4d8ca8ce198d42da5cc68a21a1cdb06490d8d78b4c6dab266e1e18c291b79"} Nov 23 06:57:48 crc kubenswrapper[4681]: I1123 06:57:48.542159 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d7fcc162-370b-4e84-9b15-b3632c7c2fdb","Type":"ContainerStarted","Data":"d3ddb513196b0ae853b5316ac368d4f738fa3af1c65d7fda4be71b6190d31927"} Nov 23 06:57:48 crc kubenswrapper[4681]: I1123 06:57:48.542192 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d7fcc162-370b-4e84-9b15-b3632c7c2fdb","Type":"ContainerStarted","Data":"2a8bec12e45b666d78c5ad037cfb9dc5c5dc6e985f4a908823e4cd0bc124cede"} Nov 23 06:57:48 crc kubenswrapper[4681]: I1123 06:57:48.542254 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 23 06:57:48 crc kubenswrapper[4681]: I1123 06:57:48.543921 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7e93be3c-dcb6-4105-868c-645d5c8c7bd0","Type":"ContainerStarted","Data":"26d05d10cbbc451df6804f6cc6bf5b505854f245655b61d41a993b45c5b09f20"} Nov 23 06:57:48 crc kubenswrapper[4681]: I1123 06:57:48.624114 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.926103436 podStartE2EDuration="4.624091235s" podCreationTimestamp="2025-11-23 06:57:44 +0000 UTC" firstStartedPulling="2025-11-23 06:57:45.352186187 +0000 UTC m=+802.421695423" lastFinishedPulling="2025-11-23 06:57:48.050173985 +0000 UTC m=+805.119683222" observedRunningTime="2025-11-23 06:57:48.621660558 +0000 UTC m=+805.691169794" watchObservedRunningTime="2025-11-23 06:57:48.624091235 +0000 UTC m=+805.693600461" Nov 23 06:57:48 crc kubenswrapper[4681]: I1123 06:57:48.743299 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 23 06:57:48 crc kubenswrapper[4681]: I1123 06:57:48.796788 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b8455895f-75wk5"] Nov 23 06:57:48 crc kubenswrapper[4681]: I1123 06:57:48.796980 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b8455895f-75wk5" podUID="2d0735b8-78b7-4885-b636-728b94aa282f" containerName="dnsmasq-dns" containerID="cri-o://5b3a72a02f157293febe0b9bcf7cdb9cbaf814b0fc03d47b239b595dc5ce028b" gracePeriod=10 Nov 23 06:57:48 crc kubenswrapper[4681]: I1123 06:57:48.798568 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b8455895f-75wk5" Nov 23 06:57:48 crc kubenswrapper[4681]: I1123 06:57:48.814878 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7dfd8c6765-5kmzt"] Nov 23 06:57:48 
crc kubenswrapper[4681]: I1123 06:57:48.816123 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dfd8c6765-5kmzt" Nov 23 06:57:48 crc kubenswrapper[4681]: I1123 06:57:48.825533 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7dfd8c6765-5kmzt"] Nov 23 06:57:48 crc kubenswrapper[4681]: I1123 06:57:48.885557 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71e0935b-e717-4e96-ae02-fb6bcb85bae5-ovsdbserver-nb\") pod \"dnsmasq-dns-7dfd8c6765-5kmzt\" (UID: \"71e0935b-e717-4e96-ae02-fb6bcb85bae5\") " pod="openstack/dnsmasq-dns-7dfd8c6765-5kmzt" Nov 23 06:57:48 crc kubenswrapper[4681]: I1123 06:57:48.885631 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71e0935b-e717-4e96-ae02-fb6bcb85bae5-dns-svc\") pod \"dnsmasq-dns-7dfd8c6765-5kmzt\" (UID: \"71e0935b-e717-4e96-ae02-fb6bcb85bae5\") " pod="openstack/dnsmasq-dns-7dfd8c6765-5kmzt" Nov 23 06:57:48 crc kubenswrapper[4681]: I1123 06:57:48.885702 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71e0935b-e717-4e96-ae02-fb6bcb85bae5-config\") pod \"dnsmasq-dns-7dfd8c6765-5kmzt\" (UID: \"71e0935b-e717-4e96-ae02-fb6bcb85bae5\") " pod="openstack/dnsmasq-dns-7dfd8c6765-5kmzt" Nov 23 06:57:48 crc kubenswrapper[4681]: I1123 06:57:48.885763 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71e0935b-e717-4e96-ae02-fb6bcb85bae5-ovsdbserver-sb\") pod \"dnsmasq-dns-7dfd8c6765-5kmzt\" (UID: \"71e0935b-e717-4e96-ae02-fb6bcb85bae5\") " pod="openstack/dnsmasq-dns-7dfd8c6765-5kmzt" Nov 23 06:57:48 crc kubenswrapper[4681]: I1123 06:57:48.885803 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzttm\" (UniqueName: \"kubernetes.io/projected/71e0935b-e717-4e96-ae02-fb6bcb85bae5-kube-api-access-dzttm\") pod \"dnsmasq-dns-7dfd8c6765-5kmzt\" (UID: \"71e0935b-e717-4e96-ae02-fb6bcb85bae5\") " pod="openstack/dnsmasq-dns-7dfd8c6765-5kmzt" Nov 23 06:57:48 crc kubenswrapper[4681]: I1123 06:57:48.986934 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71e0935b-e717-4e96-ae02-fb6bcb85bae5-config\") pod \"dnsmasq-dns-7dfd8c6765-5kmzt\" (UID: \"71e0935b-e717-4e96-ae02-fb6bcb85bae5\") " pod="openstack/dnsmasq-dns-7dfd8c6765-5kmzt" Nov 23 06:57:48 crc kubenswrapper[4681]: I1123 06:57:48.987296 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71e0935b-e717-4e96-ae02-fb6bcb85bae5-ovsdbserver-sb\") pod \"dnsmasq-dns-7dfd8c6765-5kmzt\" (UID: \"71e0935b-e717-4e96-ae02-fb6bcb85bae5\") " pod="openstack/dnsmasq-dns-7dfd8c6765-5kmzt" Nov 23 06:57:48 crc kubenswrapper[4681]: I1123 06:57:48.987335 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzttm\" (UniqueName: \"kubernetes.io/projected/71e0935b-e717-4e96-ae02-fb6bcb85bae5-kube-api-access-dzttm\") pod \"dnsmasq-dns-7dfd8c6765-5kmzt\" (UID: \"71e0935b-e717-4e96-ae02-fb6bcb85bae5\") " pod="openstack/dnsmasq-dns-7dfd8c6765-5kmzt" Nov 23 06:57:48 crc 
kubenswrapper[4681]: I1123 06:57:48.987475 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71e0935b-e717-4e96-ae02-fb6bcb85bae5-ovsdbserver-nb\") pod \"dnsmasq-dns-7dfd8c6765-5kmzt\" (UID: \"71e0935b-e717-4e96-ae02-fb6bcb85bae5\") " pod="openstack/dnsmasq-dns-7dfd8c6765-5kmzt" Nov 23 06:57:48 crc kubenswrapper[4681]: I1123 06:57:48.987536 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71e0935b-e717-4e96-ae02-fb6bcb85bae5-dns-svc\") pod \"dnsmasq-dns-7dfd8c6765-5kmzt\" (UID: \"71e0935b-e717-4e96-ae02-fb6bcb85bae5\") " pod="openstack/dnsmasq-dns-7dfd8c6765-5kmzt" Nov 23 06:57:48 crc kubenswrapper[4681]: I1123 06:57:48.987871 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71e0935b-e717-4e96-ae02-fb6bcb85bae5-config\") pod \"dnsmasq-dns-7dfd8c6765-5kmzt\" (UID: \"71e0935b-e717-4e96-ae02-fb6bcb85bae5\") " pod="openstack/dnsmasq-dns-7dfd8c6765-5kmzt" Nov 23 06:57:48 crc kubenswrapper[4681]: I1123 06:57:48.988218 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71e0935b-e717-4e96-ae02-fb6bcb85bae5-ovsdbserver-sb\") pod \"dnsmasq-dns-7dfd8c6765-5kmzt\" (UID: \"71e0935b-e717-4e96-ae02-fb6bcb85bae5\") " pod="openstack/dnsmasq-dns-7dfd8c6765-5kmzt" Nov 23 06:57:48 crc kubenswrapper[4681]: I1123 06:57:48.988430 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71e0935b-e717-4e96-ae02-fb6bcb85bae5-ovsdbserver-nb\") pod \"dnsmasq-dns-7dfd8c6765-5kmzt\" (UID: \"71e0935b-e717-4e96-ae02-fb6bcb85bae5\") " pod="openstack/dnsmasq-dns-7dfd8c6765-5kmzt" Nov 23 06:57:48 crc kubenswrapper[4681]: I1123 06:57:48.988696 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71e0935b-e717-4e96-ae02-fb6bcb85bae5-dns-svc\") pod \"dnsmasq-dns-7dfd8c6765-5kmzt\" (UID: \"71e0935b-e717-4e96-ae02-fb6bcb85bae5\") " pod="openstack/dnsmasq-dns-7dfd8c6765-5kmzt" Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.020530 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzttm\" (UniqueName: \"kubernetes.io/projected/71e0935b-e717-4e96-ae02-fb6bcb85bae5-kube-api-access-dzttm\") pod \"dnsmasq-dns-7dfd8c6765-5kmzt\" (UID: \"71e0935b-e717-4e96-ae02-fb6bcb85bae5\") " pod="openstack/dnsmasq-dns-7dfd8c6765-5kmzt" Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.135318 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dfd8c6765-5kmzt" Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.546048 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b8455895f-75wk5" Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.555317 4681 generic.go:334] "Generic (PLEG): container finished" podID="2d0735b8-78b7-4885-b636-728b94aa282f" containerID="5b3a72a02f157293febe0b9bcf7cdb9cbaf814b0fc03d47b239b595dc5ce028b" exitCode=0 Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.555367 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b8455895f-75wk5" event={"ID":"2d0735b8-78b7-4885-b636-728b94aa282f","Type":"ContainerDied","Data":"5b3a72a02f157293febe0b9bcf7cdb9cbaf814b0fc03d47b239b595dc5ce028b"} Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.555392 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b8455895f-75wk5" event={"ID":"2d0735b8-78b7-4885-b636-728b94aa282f","Type":"ContainerDied","Data":"a09dc911ce7526d263b5bbd822ed2ebec53a6e9e988e491de297021ea23e2f86"} Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.555409 4681 scope.go:117] "RemoveContainer" containerID="5b3a72a02f157293febe0b9bcf7cdb9cbaf814b0fc03d47b239b595dc5ce028b" Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.555506 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b8455895f-75wk5" Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.559295 4681 generic.go:334] "Generic (PLEG): container finished" podID="e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1" containerID="dde4d8ca8ce198d42da5cc68a21a1cdb06490d8d78b4c6dab266e1e18c291b79" exitCode=0 Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.559574 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bcv8z" event={"ID":"e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1","Type":"ContainerDied","Data":"dde4d8ca8ce198d42da5cc68a21a1cdb06490d8d78b4c6dab266e1e18c291b79"} Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.599493 4681 scope.go:117] "RemoveContainer" containerID="a7b4cda55d9f409dbf15c535735ebeb55adfa91b120aac3a66b7d41d45c8089d" Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.624617 4681 scope.go:117] "RemoveContainer" containerID="5b3a72a02f157293febe0b9bcf7cdb9cbaf814b0fc03d47b239b595dc5ce028b" Nov 23 06:57:49 crc kubenswrapper[4681]: E1123 06:57:49.633578 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b3a72a02f157293febe0b9bcf7cdb9cbaf814b0fc03d47b239b595dc5ce028b\": container with ID starting with 5b3a72a02f157293febe0b9bcf7cdb9cbaf814b0fc03d47b239b595dc5ce028b not found: ID does not exist" containerID="5b3a72a02f157293febe0b9bcf7cdb9cbaf814b0fc03d47b239b595dc5ce028b" Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.633632 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b3a72a02f157293febe0b9bcf7cdb9cbaf814b0fc03d47b239b595dc5ce028b"} err="failed to get container status \"5b3a72a02f157293febe0b9bcf7cdb9cbaf814b0fc03d47b239b595dc5ce028b\": rpc error: code = NotFound desc = could not find container \"5b3a72a02f157293febe0b9bcf7cdb9cbaf814b0fc03d47b239b595dc5ce028b\": container with ID starting with 5b3a72a02f157293febe0b9bcf7cdb9cbaf814b0fc03d47b239b595dc5ce028b not found: ID does not exist" Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.633659 4681 scope.go:117] "RemoveContainer" containerID="a7b4cda55d9f409dbf15c535735ebeb55adfa91b120aac3a66b7d41d45c8089d" Nov 23 06:57:49 crc kubenswrapper[4681]: E1123 06:57:49.637592 
4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7b4cda55d9f409dbf15c535735ebeb55adfa91b120aac3a66b7d41d45c8089d\": container with ID starting with a7b4cda55d9f409dbf15c535735ebeb55adfa91b120aac3a66b7d41d45c8089d not found: ID does not exist" containerID="a7b4cda55d9f409dbf15c535735ebeb55adfa91b120aac3a66b7d41d45c8089d" Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.637636 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7b4cda55d9f409dbf15c535735ebeb55adfa91b120aac3a66b7d41d45c8089d"} err="failed to get container status \"a7b4cda55d9f409dbf15c535735ebeb55adfa91b120aac3a66b7d41d45c8089d\": rpc error: code = NotFound desc = could not find container \"a7b4cda55d9f409dbf15c535735ebeb55adfa91b120aac3a66b7d41d45c8089d\": container with ID starting with a7b4cda55d9f409dbf15c535735ebeb55adfa91b120aac3a66b7d41d45c8089d not found: ID does not exist" Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.708269 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5bfs\" (UniqueName: \"kubernetes.io/projected/2d0735b8-78b7-4885-b636-728b94aa282f-kube-api-access-d5bfs\") pod \"2d0735b8-78b7-4885-b636-728b94aa282f\" (UID: \"2d0735b8-78b7-4885-b636-728b94aa282f\") " Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.708298 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d0735b8-78b7-4885-b636-728b94aa282f-dns-svc\") pod \"2d0735b8-78b7-4885-b636-728b94aa282f\" (UID: \"2d0735b8-78b7-4885-b636-728b94aa282f\") " Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.708335 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d0735b8-78b7-4885-b636-728b94aa282f-config\") pod \"2d0735b8-78b7-4885-b636-728b94aa282f\" (UID: \"2d0735b8-78b7-4885-b636-728b94aa282f\") " Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.708447 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d0735b8-78b7-4885-b636-728b94aa282f-ovsdbserver-sb\") pod \"2d0735b8-78b7-4885-b636-728b94aa282f\" (UID: \"2d0735b8-78b7-4885-b636-728b94aa282f\") " Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.708539 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d0735b8-78b7-4885-b636-728b94aa282f-ovsdbserver-nb\") pod \"2d0735b8-78b7-4885-b636-728b94aa282f\" (UID: \"2d0735b8-78b7-4885-b636-728b94aa282f\") " Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.750778 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d0735b8-78b7-4885-b636-728b94aa282f-kube-api-access-d5bfs" (OuterVolumeSpecName: "kube-api-access-d5bfs") pod "2d0735b8-78b7-4885-b636-728b94aa282f" (UID: "2d0735b8-78b7-4885-b636-728b94aa282f"). InnerVolumeSpecName "kube-api-access-d5bfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.753908 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d0735b8-78b7-4885-b636-728b94aa282f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2d0735b8-78b7-4885-b636-728b94aa282f" (UID: "2d0735b8-78b7-4885-b636-728b94aa282f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.758108 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d0735b8-78b7-4885-b636-728b94aa282f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2d0735b8-78b7-4885-b636-728b94aa282f" (UID: "2d0735b8-78b7-4885-b636-728b94aa282f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.777278 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d0735b8-78b7-4885-b636-728b94aa282f-config" (OuterVolumeSpecName: "config") pod "2d0735b8-78b7-4885-b636-728b94aa282f" (UID: "2d0735b8-78b7-4885-b636-728b94aa282f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.780064 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d0735b8-78b7-4885-b636-728b94aa282f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2d0735b8-78b7-4885-b636-728b94aa282f" (UID: "2d0735b8-78b7-4885-b636-728b94aa282f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.813028 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d0735b8-78b7-4885-b636-728b94aa282f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.813057 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d0735b8-78b7-4885-b636-728b94aa282f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.813068 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5bfs\" (UniqueName: \"kubernetes.io/projected/2d0735b8-78b7-4885-b636-728b94aa282f-kube-api-access-d5bfs\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.813081 4681 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d0735b8-78b7-4885-b636-728b94aa282f-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.813090 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d0735b8-78b7-4885-b636-728b94aa282f-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.885473 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b8455895f-75wk5"] Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.902133 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b8455895f-75wk5"] Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.916538 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-7dfd8c6765-5kmzt"] Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.930619 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pxjhh" Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.930652 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pxjhh" Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.971642 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 23 06:57:49 crc kubenswrapper[4681]: E1123 06:57:49.972063 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d0735b8-78b7-4885-b636-728b94aa282f" containerName="init" Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.972084 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d0735b8-78b7-4885-b636-728b94aa282f" containerName="init" Nov 23 06:57:49 crc kubenswrapper[4681]: E1123 06:57:49.972105 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d0735b8-78b7-4885-b636-728b94aa282f" containerName="dnsmasq-dns" Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.972112 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d0735b8-78b7-4885-b636-728b94aa282f" containerName="dnsmasq-dns" Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.972313 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d0735b8-78b7-4885-b636-728b94aa282f" containerName="dnsmasq-dns" Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.976722 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pxjhh" Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.976959 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.981233 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.981407 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.982083 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 23 06:57:49 crc kubenswrapper[4681]: I1123 06:57:49.982212 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-c99jf" Nov 23 06:57:50 crc kubenswrapper[4681]: I1123 06:57:50.021520 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 23 06:57:50 crc kubenswrapper[4681]: I1123 06:57:50.123106 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqqwf\" (UniqueName: \"kubernetes.io/projected/a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3-kube-api-access-nqqwf\") pod \"swift-storage-0\" (UID: \"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3\") " pod="openstack/swift-storage-0" Nov 23 06:57:50 crc kubenswrapper[4681]: I1123 06:57:50.123371 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3\") " pod="openstack/swift-storage-0" Nov 23 06:57:50 crc kubenswrapper[4681]: I1123 06:57:50.123456 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3-lock\") pod \"swift-storage-0\" (UID: \"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3\") " pod="openstack/swift-storage-0" Nov 23 06:57:50 crc kubenswrapper[4681]: I1123 06:57:50.123773 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3-etc-swift\") pod \"swift-storage-0\" (UID: \"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3\") " pod="openstack/swift-storage-0" Nov 23 06:57:50 crc kubenswrapper[4681]: I1123 06:57:50.123848 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3-cache\") pod \"swift-storage-0\" (UID: \"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3\") " pod="openstack/swift-storage-0" Nov 23 06:57:50 crc kubenswrapper[4681]: I1123 06:57:50.226412 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqqwf\" (UniqueName: \"kubernetes.io/projected/a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3-kube-api-access-nqqwf\") pod \"swift-storage-0\" (UID: \"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3\") " pod="openstack/swift-storage-0" Nov 23 06:57:50 crc kubenswrapper[4681]: I1123 06:57:50.226473 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3\") " pod="openstack/swift-storage-0" Nov 23 06:57:50 crc kubenswrapper[4681]: I1123 06:57:50.226542 4681 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3-lock\") pod \"swift-storage-0\" (UID: \"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3\") " pod="openstack/swift-storage-0" Nov 23 06:57:50 crc kubenswrapper[4681]: I1123 06:57:50.226669 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3-etc-swift\") pod \"swift-storage-0\" (UID: \"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3\") " pod="openstack/swift-storage-0" Nov 23 06:57:50 crc kubenswrapper[4681]: I1123 06:57:50.226709 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3-cache\") pod \"swift-storage-0\" (UID: \"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3\") " pod="openstack/swift-storage-0" Nov 23 06:57:50 crc kubenswrapper[4681]: I1123 06:57:50.227190 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3-cache\") pod \"swift-storage-0\" (UID: \"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3\") " pod="openstack/swift-storage-0" Nov 23 06:57:50 crc kubenswrapper[4681]: I1123 06:57:50.227803 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/swift-storage-0" Nov 23 06:57:50 crc kubenswrapper[4681]: E1123 06:57:50.228589 4681 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 23 06:57:50 crc kubenswrapper[4681]: E1123 06:57:50.228613 4681 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 23 06:57:50 crc kubenswrapper[4681]: E1123 06:57:50.228665 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3-etc-swift podName:a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3 nodeName:}" failed. No retries permitted until 2025-11-23 06:57:50.728649003 +0000 UTC m=+807.798158240 (durationBeforeRetry 500ms). 
Nov 23 06:57:50 crc kubenswrapper[4681]: I1123 06:57:50.228778 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3-lock\") pod \"swift-storage-0\" (UID: \"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3\") " pod="openstack/swift-storage-0"
Nov 23 06:57:50 crc kubenswrapper[4681]: I1123 06:57:50.249555 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3\") " pod="openstack/swift-storage-0"
Nov 23 06:57:50 crc kubenswrapper[4681]: I1123 06:57:50.257912 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqqwf\" (UniqueName: \"kubernetes.io/projected/a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3-kube-api-access-nqqwf\") pod \"swift-storage-0\" (UID: \"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3\") " pod="openstack/swift-storage-0"
Nov 23 06:57:50 crc kubenswrapper[4681]: I1123 06:57:50.569714 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dfd8c6765-5kmzt" event={"ID":"71e0935b-e717-4e96-ae02-fb6bcb85bae5","Type":"ContainerStarted","Data":"10e7fab16dc9034de1c0e893529aad3e4f725fc17d9a2753bb10d396011ee7ff"}
Nov 23 06:57:50 crc kubenswrapper[4681]: I1123 06:57:50.612490 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pxjhh"
Nov 23 06:57:50 crc kubenswrapper[4681]: I1123 06:57:50.737075 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3-etc-swift\") pod \"swift-storage-0\" (UID: \"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3\") " pod="openstack/swift-storage-0"
Nov 23 06:57:50 crc kubenswrapper[4681]: E1123 06:57:50.737441 4681 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 23 06:57:50 crc kubenswrapper[4681]: E1123 06:57:50.737486 4681 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 23 06:57:50 crc kubenswrapper[4681]: E1123 06:57:50.737723 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3-etc-swift podName:a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3 nodeName:}" failed. No retries permitted until 2025-11-23 06:57:51.737694368 +0000 UTC m=+808.807203595 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3-etc-swift") pod "swift-storage-0" (UID: "a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3") : configmap "swift-ring-files" not found
Nov 23 06:57:51 crc kubenswrapper[4681]: I1123 06:57:51.274116 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d0735b8-78b7-4885-b636-728b94aa282f" path="/var/lib/kubelet/pods/2d0735b8-78b7-4885-b636-728b94aa282f/volumes"
Nov 23 06:57:51 crc kubenswrapper[4681]: I1123 06:57:51.388866 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9ns6l"
Nov 23 06:57:51 crc kubenswrapper[4681]: I1123 06:57:51.430749 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9ns6l"
Nov 23 06:57:51 crc kubenswrapper[4681]: I1123 06:57:51.580121 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bcv8z" event={"ID":"e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1","Type":"ContainerStarted","Data":"7a1172c89432ff2564b12deae6bd04561468bbbf8561d39451bfefc8840bb2fc"}
Nov 23 06:57:51 crc kubenswrapper[4681]: I1123 06:57:51.581996 4681 generic.go:334] "Generic (PLEG): container finished" podID="71e0935b-e717-4e96-ae02-fb6bcb85bae5" containerID="2f26d9c0bf47228b1160e9514be0c5bb88e90e0ed12d162c4c7f5b4f7eece67a" exitCode=0
Nov 23 06:57:51 crc kubenswrapper[4681]: I1123 06:57:51.583034 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dfd8c6765-5kmzt" event={"ID":"71e0935b-e717-4e96-ae02-fb6bcb85bae5","Type":"ContainerDied","Data":"2f26d9c0bf47228b1160e9514be0c5bb88e90e0ed12d162c4c7f5b4f7eece67a"}
Nov 23 06:57:51 crc kubenswrapper[4681]: I1123 06:57:51.612782 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bcv8z" podStartSLOduration=4.803933382 podStartE2EDuration="8.612748621s" podCreationTimestamp="2025-11-23 06:57:43 +0000 UTC" firstStartedPulling="2025-11-23 06:57:47.520584641 +0000 UTC m=+804.590093868" lastFinishedPulling="2025-11-23 06:57:51.32939988 +0000 UTC m=+808.398909107" observedRunningTime="2025-11-23 06:57:51.606245177 +0000 UTC m=+808.675754414" watchObservedRunningTime="2025-11-23 06:57:51.612748621 +0000 UTC m=+808.682257858"
Nov 23 06:57:51 crc kubenswrapper[4681]: I1123 06:57:51.755788 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3-etc-swift\") pod \"swift-storage-0\" (UID: \"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3\") " pod="openstack/swift-storage-0"
Nov 23 06:57:51 crc kubenswrapper[4681]: E1123 06:57:51.756025 4681 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 23 06:57:51 crc kubenswrapper[4681]: E1123 06:57:51.756053 4681 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 23 06:57:51 crc kubenswrapper[4681]: E1123 06:57:51.756105 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3-etc-swift podName:a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3 nodeName:}" failed. No retries permitted until 2025-11-23 06:57:53.756086712 +0000 UTC m=+810.825595948 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3-etc-swift") pod "swift-storage-0" (UID: "a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3") : configmap "swift-ring-files" not found
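
A note on reading the "Observed pod startup duration" entry for community-operators-bcv8z above: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (06:57:51.612748621 - 06:57:43 = 8.612748621s), and podStartSLOduration further subtracts the image-pull window, lastFinishedPulling minus firstStartedPulling (06:57:51.32939988 - 06:57:47.520584641 = 3.808815239s), giving 8.612748621 - 3.808815239 = 4.803933382s. The SLO figure therefore excludes time spent pulling images, which is why it is smaller than the end-to-end figure here but identical to it for pods whose pull timestamps are the zero time, as in the dnsmasq-dns entry below.
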
Nov 23 06:57:51 crc kubenswrapper[4681]: I1123 06:57:51.933922 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pxjhh"]
Nov 23 06:57:52 crc kubenswrapper[4681]: I1123 06:57:52.591652 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pxjhh" podUID="06c9baf9-fa51-4d38-a5ce-15bc36e7e610" containerName="registry-server" containerID="cri-o://d109bda7f044356e20f8c3e499e0bd87436935f903d1824a3a2e28daa402ae1a" gracePeriod=2
Nov 23 06:57:52 crc kubenswrapper[4681]: I1123 06:57:52.592899 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dfd8c6765-5kmzt" event={"ID":"71e0935b-e717-4e96-ae02-fb6bcb85bae5","Type":"ContainerStarted","Data":"54b063d5687b2b79c2d3555edf1df8d56054061b0344cad12fc8e7d08350a575"}
Nov 23 06:57:52 crc kubenswrapper[4681]: I1123 06:57:52.592936 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7dfd8c6765-5kmzt"
Nov 23 06:57:52 crc kubenswrapper[4681]: I1123 06:57:52.607589 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7dfd8c6765-5kmzt" podStartSLOduration=4.6075786359999995 podStartE2EDuration="4.607578636s" podCreationTimestamp="2025-11-23 06:57:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:57:52.60658924 +0000 UTC m=+809.676098477" watchObservedRunningTime="2025-11-23 06:57:52.607578636 +0000 UTC m=+809.677087862"
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.048965 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pxjhh"
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.189188 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rc2c\" (UniqueName: \"kubernetes.io/projected/06c9baf9-fa51-4d38-a5ce-15bc36e7e610-kube-api-access-6rc2c\") pod \"06c9baf9-fa51-4d38-a5ce-15bc36e7e610\" (UID: \"06c9baf9-fa51-4d38-a5ce-15bc36e7e610\") "
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.189334 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06c9baf9-fa51-4d38-a5ce-15bc36e7e610-utilities\") pod \"06c9baf9-fa51-4d38-a5ce-15bc36e7e610\" (UID: \"06c9baf9-fa51-4d38-a5ce-15bc36e7e610\") "
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.189485 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06c9baf9-fa51-4d38-a5ce-15bc36e7e610-catalog-content\") pod \"06c9baf9-fa51-4d38-a5ce-15bc36e7e610\" (UID: \"06c9baf9-fa51-4d38-a5ce-15bc36e7e610\") "
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.190649 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06c9baf9-fa51-4d38-a5ce-15bc36e7e610-utilities" (OuterVolumeSpecName: "utilities") pod "06c9baf9-fa51-4d38-a5ce-15bc36e7e610" (UID: "06c9baf9-fa51-4d38-a5ce-15bc36e7e610"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.204201 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06c9baf9-fa51-4d38-a5ce-15bc36e7e610-kube-api-access-6rc2c" (OuterVolumeSpecName: "kube-api-access-6rc2c") pod "06c9baf9-fa51-4d38-a5ce-15bc36e7e610" (UID: "06c9baf9-fa51-4d38-a5ce-15bc36e7e610"). InnerVolumeSpecName "kube-api-access-6rc2c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.246548 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06c9baf9-fa51-4d38-a5ce-15bc36e7e610-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06c9baf9-fa51-4d38-a5ce-15bc36e7e610" (UID: "06c9baf9-fa51-4d38-a5ce-15bc36e7e610"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.294489 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06c9baf9-fa51-4d38-a5ce-15bc36e7e610-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.294522 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06c9baf9-fa51-4d38-a5ce-15bc36e7e610-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.294535 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rc2c\" (UniqueName: \"kubernetes.io/projected/06c9baf9-fa51-4d38-a5ce-15bc36e7e610-kube-api-access-6rc2c\") on node \"crc\" DevicePath \"\""
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.603629 4681 generic.go:334] "Generic (PLEG): container finished" podID="06c9baf9-fa51-4d38-a5ce-15bc36e7e610" containerID="d109bda7f044356e20f8c3e499e0bd87436935f903d1824a3a2e28daa402ae1a" exitCode=0
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.603778 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pxjhh"
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.603763 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pxjhh" event={"ID":"06c9baf9-fa51-4d38-a5ce-15bc36e7e610","Type":"ContainerDied","Data":"d109bda7f044356e20f8c3e499e0bd87436935f903d1824a3a2e28daa402ae1a"}
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.603861 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pxjhh" event={"ID":"06c9baf9-fa51-4d38-a5ce-15bc36e7e610","Type":"ContainerDied","Data":"b90cd5ffefc05de3963e706c10fee9f92a94c776b4396e6ec68bbf60c4b773c3"}
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.603901 4681 scope.go:117] "RemoveContainer" containerID="d109bda7f044356e20f8c3e499e0bd87436935f903d1824a3a2e28daa402ae1a"
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.634675 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pxjhh"]
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.640431 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pxjhh"]
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.641681 4681 scope.go:117] "RemoveContainer" containerID="21b04072853726a60212f452c9746f913bff32b270c52448280429400c4c55a2"
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.687621 4681 scope.go:117] "RemoveContainer" containerID="18011c13d1e4276ea1b1341512cb3baf8f3c0fb77dfe2bbab4751401a4ee7437"
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.713078 4681 scope.go:117] "RemoveContainer" containerID="d109bda7f044356e20f8c3e499e0bd87436935f903d1824a3a2e28daa402ae1a"
Nov 23 06:57:53 crc kubenswrapper[4681]: E1123 06:57:53.713576 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d109bda7f044356e20f8c3e499e0bd87436935f903d1824a3a2e28daa402ae1a\": container with ID starting with d109bda7f044356e20f8c3e499e0bd87436935f903d1824a3a2e28daa402ae1a not found: ID does not exist" containerID="d109bda7f044356e20f8c3e499e0bd87436935f903d1824a3a2e28daa402ae1a"
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.713617 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d109bda7f044356e20f8c3e499e0bd87436935f903d1824a3a2e28daa402ae1a"} err="failed to get container status \"d109bda7f044356e20f8c3e499e0bd87436935f903d1824a3a2e28daa402ae1a\": rpc error: code = NotFound desc = could not find container \"d109bda7f044356e20f8c3e499e0bd87436935f903d1824a3a2e28daa402ae1a\": container with ID starting with d109bda7f044356e20f8c3e499e0bd87436935f903d1824a3a2e28daa402ae1a not found: ID does not exist"
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.713643 4681 scope.go:117] "RemoveContainer" containerID="21b04072853726a60212f452c9746f913bff32b270c52448280429400c4c55a2"
Nov 23 06:57:53 crc kubenswrapper[4681]: E1123 06:57:53.713989 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21b04072853726a60212f452c9746f913bff32b270c52448280429400c4c55a2\": container with ID starting with 21b04072853726a60212f452c9746f913bff32b270c52448280429400c4c55a2 not found: ID does not exist" containerID="21b04072853726a60212f452c9746f913bff32b270c52448280429400c4c55a2"
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.714016 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21b04072853726a60212f452c9746f913bff32b270c52448280429400c4c55a2"} err="failed to get container status \"21b04072853726a60212f452c9746f913bff32b270c52448280429400c4c55a2\": rpc error: code = NotFound desc = could not find container \"21b04072853726a60212f452c9746f913bff32b270c52448280429400c4c55a2\": container with ID starting with 21b04072853726a60212f452c9746f913bff32b270c52448280429400c4c55a2 not found: ID does not exist"
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.714033 4681 scope.go:117] "RemoveContainer" containerID="18011c13d1e4276ea1b1341512cb3baf8f3c0fb77dfe2bbab4751401a4ee7437"
Nov 23 06:57:53 crc kubenswrapper[4681]: E1123 06:57:53.714333 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18011c13d1e4276ea1b1341512cb3baf8f3c0fb77dfe2bbab4751401a4ee7437\": container with ID starting with 18011c13d1e4276ea1b1341512cb3baf8f3c0fb77dfe2bbab4751401a4ee7437 not found: ID does not exist" containerID="18011c13d1e4276ea1b1341512cb3baf8f3c0fb77dfe2bbab4751401a4ee7437"
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.714374 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18011c13d1e4276ea1b1341512cb3baf8f3c0fb77dfe2bbab4751401a4ee7437"} err="failed to get container status \"18011c13d1e4276ea1b1341512cb3baf8f3c0fb77dfe2bbab4751401a4ee7437\": rpc error: code = NotFound desc = could not find container \"18011c13d1e4276ea1b1341512cb3baf8f3c0fb77dfe2bbab4751401a4ee7437\": container with ID starting with 18011c13d1e4276ea1b1341512cb3baf8f3c0fb77dfe2bbab4751401a4ee7437 not found: ID does not exist"
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.743075 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9ns6l"]
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.743531 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9ns6l" podUID="71234289-c188-4210-959c-41708f14cc66" containerName="registry-server" containerID="cri-o://537433c72bbbd44217b9899e24938bf175fa1d4cada3ddfe20f271b36eba6df1" gracePeriod=2
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.807738 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3-etc-swift\") pod \"swift-storage-0\" (UID: \"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3\") " pod="openstack/swift-storage-0"
Nov 23 06:57:53 crc kubenswrapper[4681]: E1123 06:57:53.807936 4681 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 23 06:57:53 crc kubenswrapper[4681]: E1123 06:57:53.807960 4681 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 23 06:57:53 crc kubenswrapper[4681]: E1123 06:57:53.808028 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3-etc-swift podName:a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3 nodeName:}" failed. No retries permitted until 2025-11-23 06:57:57.808006788 +0000 UTC m=+814.877516025 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3-etc-swift") pod "swift-storage-0" (UID: "a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3") : configmap "swift-ring-files" not found
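
The four nestedpendingoperations.go:348 failures for the etc-swift volume show the kubelet's per-volume retry backoff: durationBeforeRetry doubles from 500ms to 1s to 2s to 4s, and each failure stamps a "No retries permitted until ..." time before which the mount will not be reattempted. A minimal Go sketch of that doubling schedule (an illustration of the pattern visible in these logs, not kubelet's implementation; the cap value is an assumption):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Reproduces the schedule seen above: 500ms, 1s, 2s, 4s, ...
        // maxDelay is an assumed cap for illustration; the real cap lives
        // in kubelet's nestedpendingoperations code and is not shown here.
        const maxDelay = 2 * time.Minute
        delay := 500 * time.Millisecond
        next := time.Now()
        for attempt := 1; attempt <= 5; attempt++ {
            next = next.Add(delay)
            fmt.Printf("attempt %d: durationBeforeRetry %v, no retries permitted until %s\n",
                attempt, delay, next.Format(time.RFC3339Nano))
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }
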
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.848250 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-rmth5"]
Nov 23 06:57:53 crc kubenswrapper[4681]: E1123 06:57:53.848638 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06c9baf9-fa51-4d38-a5ce-15bc36e7e610" containerName="extract-content"
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.848661 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="06c9baf9-fa51-4d38-a5ce-15bc36e7e610" containerName="extract-content"
Nov 23 06:57:53 crc kubenswrapper[4681]: E1123 06:57:53.848673 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06c9baf9-fa51-4d38-a5ce-15bc36e7e610" containerName="extract-utilities"
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.848679 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="06c9baf9-fa51-4d38-a5ce-15bc36e7e610" containerName="extract-utilities"
Nov 23 06:57:53 crc kubenswrapper[4681]: E1123 06:57:53.848709 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06c9baf9-fa51-4d38-a5ce-15bc36e7e610" containerName="registry-server"
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.848715 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="06c9baf9-fa51-4d38-a5ce-15bc36e7e610" containerName="registry-server"
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.848878 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="06c9baf9-fa51-4d38-a5ce-15bc36e7e610" containerName="registry-server"
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.849440 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-rmth5"
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.851197 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.851200 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.852343 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.863726 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-rmth5"]
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.909992 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/037378b8-4f2b-4513-b4b3-c7f97aae12a9-scripts\") pod \"swift-ring-rebalance-rmth5\" (UID: \"037378b8-4f2b-4513-b4b3-c7f97aae12a9\") " pod="openstack/swift-ring-rebalance-rmth5"
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.910028 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/037378b8-4f2b-4513-b4b3-c7f97aae12a9-etc-swift\") pod \"swift-ring-rebalance-rmth5\" (UID: \"037378b8-4f2b-4513-b4b3-c7f97aae12a9\") " pod="openstack/swift-ring-rebalance-rmth5"
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.910056 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/037378b8-4f2b-4513-b4b3-c7f97aae12a9-swiftconf\") pod \"swift-ring-rebalance-rmth5\" (UID: \"037378b8-4f2b-4513-b4b3-c7f97aae12a9\") " pod="openstack/swift-ring-rebalance-rmth5"
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.910145 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/037378b8-4f2b-4513-b4b3-c7f97aae12a9-combined-ca-bundle\") pod \"swift-ring-rebalance-rmth5\" (UID: \"037378b8-4f2b-4513-b4b3-c7f97aae12a9\") " pod="openstack/swift-ring-rebalance-rmth5"
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.910215 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qt9t\" (UniqueName: \"kubernetes.io/projected/037378b8-4f2b-4513-b4b3-c7f97aae12a9-kube-api-access-2qt9t\") pod \"swift-ring-rebalance-rmth5\" (UID: \"037378b8-4f2b-4513-b4b3-c7f97aae12a9\") " pod="openstack/swift-ring-rebalance-rmth5"
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.910320 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/037378b8-4f2b-4513-b4b3-c7f97aae12a9-ring-data-devices\") pod \"swift-ring-rebalance-rmth5\" (UID: \"037378b8-4f2b-4513-b4b3-c7f97aae12a9\") " pod="openstack/swift-ring-rebalance-rmth5"
Nov 23 06:57:53 crc kubenswrapper[4681]: I1123 06:57:53.910374 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/037378b8-4f2b-4513-b4b3-c7f97aae12a9-dispersionconf\") pod \"swift-ring-rebalance-rmth5\" (UID: \"037378b8-4f2b-4513-b4b3-c7f97aae12a9\") " pod="openstack/swift-ring-rebalance-rmth5"
Nov 23 06:57:54 crc kubenswrapper[4681]: I1123 06:57:54.011572 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qt9t\" (UniqueName: \"kubernetes.io/projected/037378b8-4f2b-4513-b4b3-c7f97aae12a9-kube-api-access-2qt9t\") pod \"swift-ring-rebalance-rmth5\" (UID: \"037378b8-4f2b-4513-b4b3-c7f97aae12a9\") " pod="openstack/swift-ring-rebalance-rmth5"
Nov 23 06:57:54 crc kubenswrapper[4681]: I1123 06:57:54.011690 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/037378b8-4f2b-4513-b4b3-c7f97aae12a9-ring-data-devices\") pod \"swift-ring-rebalance-rmth5\" (UID: \"037378b8-4f2b-4513-b4b3-c7f97aae12a9\") " pod="openstack/swift-ring-rebalance-rmth5"
Nov 23 06:57:54 crc kubenswrapper[4681]: I1123 06:57:54.011741 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/037378b8-4f2b-4513-b4b3-c7f97aae12a9-dispersionconf\") pod \"swift-ring-rebalance-rmth5\" (UID: \"037378b8-4f2b-4513-b4b3-c7f97aae12a9\") " pod="openstack/swift-ring-rebalance-rmth5"
Nov 23 06:57:54 crc kubenswrapper[4681]: I1123 06:57:54.011802 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/037378b8-4f2b-4513-b4b3-c7f97aae12a9-scripts\") pod \"swift-ring-rebalance-rmth5\" (UID: \"037378b8-4f2b-4513-b4b3-c7f97aae12a9\") " pod="openstack/swift-ring-rebalance-rmth5"
Nov 23 06:57:54 crc kubenswrapper[4681]: I1123 06:57:54.011828 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/037378b8-4f2b-4513-b4b3-c7f97aae12a9-etc-swift\") pod \"swift-ring-rebalance-rmth5\" (UID: \"037378b8-4f2b-4513-b4b3-c7f97aae12a9\") " pod="openstack/swift-ring-rebalance-rmth5"
Nov 23 06:57:54 crc kubenswrapper[4681]: I1123 06:57:54.011868 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/037378b8-4f2b-4513-b4b3-c7f97aae12a9-swiftconf\") pod \"swift-ring-rebalance-rmth5\" (UID: \"037378b8-4f2b-4513-b4b3-c7f97aae12a9\") " pod="openstack/swift-ring-rebalance-rmth5"
Nov 23 06:57:54 crc kubenswrapper[4681]: I1123 06:57:54.011946 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/037378b8-4f2b-4513-b4b3-c7f97aae12a9-combined-ca-bundle\") pod \"swift-ring-rebalance-rmth5\" (UID: \"037378b8-4f2b-4513-b4b3-c7f97aae12a9\") " pod="openstack/swift-ring-rebalance-rmth5"
Nov 23 06:57:54 crc kubenswrapper[4681]: I1123 06:57:54.012633 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/037378b8-4f2b-4513-b4b3-c7f97aae12a9-etc-swift\") pod \"swift-ring-rebalance-rmth5\" (UID: \"037378b8-4f2b-4513-b4b3-c7f97aae12a9\") " pod="openstack/swift-ring-rebalance-rmth5"
Nov 23 06:57:54 crc kubenswrapper[4681]: I1123 06:57:54.012800 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/037378b8-4f2b-4513-b4b3-c7f97aae12a9-ring-data-devices\") pod \"swift-ring-rebalance-rmth5\" (UID: \"037378b8-4f2b-4513-b4b3-c7f97aae12a9\") " pod="openstack/swift-ring-rebalance-rmth5"
Nov 23 06:57:54 crc kubenswrapper[4681]: I1123 06:57:54.012825 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/037378b8-4f2b-4513-b4b3-c7f97aae12a9-scripts\") pod \"swift-ring-rebalance-rmth5\" (UID: \"037378b8-4f2b-4513-b4b3-c7f97aae12a9\") " pod="openstack/swift-ring-rebalance-rmth5"
Nov 23 06:57:54 crc kubenswrapper[4681]: I1123 06:57:54.016634 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/037378b8-4f2b-4513-b4b3-c7f97aae12a9-swiftconf\") pod \"swift-ring-rebalance-rmth5\" (UID: \"037378b8-4f2b-4513-b4b3-c7f97aae12a9\") " pod="openstack/swift-ring-rebalance-rmth5"
Nov 23 06:57:54 crc kubenswrapper[4681]: I1123 06:57:54.016719 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/037378b8-4f2b-4513-b4b3-c7f97aae12a9-combined-ca-bundle\") pod \"swift-ring-rebalance-rmth5\" (UID: \"037378b8-4f2b-4513-b4b3-c7f97aae12a9\") " pod="openstack/swift-ring-rebalance-rmth5"
Nov 23 06:57:54 crc kubenswrapper[4681]: I1123 06:57:54.017663 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/037378b8-4f2b-4513-b4b3-c7f97aae12a9-dispersionconf\") pod \"swift-ring-rebalance-rmth5\" (UID: \"037378b8-4f2b-4513-b4b3-c7f97aae12a9\") " pod="openstack/swift-ring-rebalance-rmth5"
Nov 23 06:57:54 crc kubenswrapper[4681]: I1123 06:57:54.025999 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qt9t\" (UniqueName: \"kubernetes.io/projected/037378b8-4f2b-4513-b4b3-c7f97aae12a9-kube-api-access-2qt9t\") pod \"swift-ring-rebalance-rmth5\" (UID: \"037378b8-4f2b-4513-b4b3-c7f97aae12a9\") " pod="openstack/swift-ring-rebalance-rmth5"
Nov 23 06:57:54 crc kubenswrapper[4681]: I1123 06:57:54.160784 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bcv8z"
Nov 23 06:57:54 crc kubenswrapper[4681]: I1123 06:57:54.160862 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bcv8z"
Nov 23 06:57:54 crc kubenswrapper[4681]: I1123 06:57:54.189013 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-rmth5"
Nov 23 06:57:54 crc kubenswrapper[4681]: I1123 06:57:54.213803 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bcv8z"
Nov 23 06:57:54 crc kubenswrapper[4681]: W1123 06:57:54.614423 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod037378b8_4f2b_4513_b4b3_c7f97aae12a9.slice/crio-8e843895be7bda8924acb39fde68d5b3b99e083da428814d6455eb5c22547828 WatchSource:0}: Error finding container 8e843895be7bda8924acb39fde68d5b3b99e083da428814d6455eb5c22547828: Status 404 returned error can't find the container with id 8e843895be7bda8924acb39fde68d5b3b99e083da428814d6455eb5c22547828
Nov 23 06:57:54 crc kubenswrapper[4681]: I1123 06:57:54.614936 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-rmth5"]
Nov 23 06:57:54 crc kubenswrapper[4681]: I1123 06:57:54.615566 4681 generic.go:334] "Generic (PLEG): container finished" podID="71234289-c188-4210-959c-41708f14cc66" containerID="537433c72bbbd44217b9899e24938bf175fa1d4cada3ddfe20f271b36eba6df1" exitCode=0
Nov 23 06:57:54 crc kubenswrapper[4681]: I1123 06:57:54.615624 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ns6l" event={"ID":"71234289-c188-4210-959c-41708f14cc66","Type":"ContainerDied","Data":"537433c72bbbd44217b9899e24938bf175fa1d4cada3ddfe20f271b36eba6df1"}
Nov 23 06:57:54 crc kubenswrapper[4681]: I1123 06:57:54.651239 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9ns6l"
Nov 23 06:57:54 crc kubenswrapper[4681]: I1123 06:57:54.826376 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71234289-c188-4210-959c-41708f14cc66-utilities\") pod \"71234289-c188-4210-959c-41708f14cc66\" (UID: \"71234289-c188-4210-959c-41708f14cc66\") "
Nov 23 06:57:54 crc kubenswrapper[4681]: I1123 06:57:54.827074 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71234289-c188-4210-959c-41708f14cc66-utilities" (OuterVolumeSpecName: "utilities") pod "71234289-c188-4210-959c-41708f14cc66" (UID: "71234289-c188-4210-959c-41708f14cc66"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 06:57:54 crc kubenswrapper[4681]: I1123 06:57:54.827243 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn47x\" (UniqueName: \"kubernetes.io/projected/71234289-c188-4210-959c-41708f14cc66-kube-api-access-mn47x\") pod \"71234289-c188-4210-959c-41708f14cc66\" (UID: \"71234289-c188-4210-959c-41708f14cc66\") "
Nov 23 06:57:54 crc kubenswrapper[4681]: I1123 06:57:54.827768 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71234289-c188-4210-959c-41708f14cc66-catalog-content\") pod \"71234289-c188-4210-959c-41708f14cc66\" (UID: \"71234289-c188-4210-959c-41708f14cc66\") "
Nov 23 06:57:54 crc kubenswrapper[4681]: I1123 06:57:54.828630 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71234289-c188-4210-959c-41708f14cc66-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 06:57:54 crc kubenswrapper[4681]: I1123 06:57:54.833070 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71234289-c188-4210-959c-41708f14cc66-kube-api-access-mn47x" (OuterVolumeSpecName: "kube-api-access-mn47x") pod "71234289-c188-4210-959c-41708f14cc66" (UID: "71234289-c188-4210-959c-41708f14cc66"). InnerVolumeSpecName "kube-api-access-mn47x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:57:54 crc kubenswrapper[4681]: I1123 06:57:54.843324 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71234289-c188-4210-959c-41708f14cc66-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71234289-c188-4210-959c-41708f14cc66" (UID: "71234289-c188-4210-959c-41708f14cc66"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 06:57:54 crc kubenswrapper[4681]: I1123 06:57:54.929312 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71234289-c188-4210-959c-41708f14cc66-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 06:57:54 crc kubenswrapper[4681]: I1123 06:57:54.929721 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mn47x\" (UniqueName: \"kubernetes.io/projected/71234289-c188-4210-959c-41708f14cc66-kube-api-access-mn47x\") on node \"crc\" DevicePath \"\""
Nov 23 06:57:55 crc kubenswrapper[4681]: I1123 06:57:55.258900 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06c9baf9-fa51-4d38-a5ce-15bc36e7e610" path="/var/lib/kubelet/pods/06c9baf9-fa51-4d38-a5ce-15bc36e7e610/volumes"
Nov 23 06:57:55 crc kubenswrapper[4681]: I1123 06:57:55.282934 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Nov 23 06:57:55 crc kubenswrapper[4681]: I1123 06:57:55.282988 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Nov 23 06:57:55 crc kubenswrapper[4681]: I1123 06:57:55.421717 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Nov 23 06:57:55 crc kubenswrapper[4681]: I1123 06:57:55.628979 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ns6l" event={"ID":"71234289-c188-4210-959c-41708f14cc66","Type":"ContainerDied","Data":"e9124d54e29030402a1fc463df3e39e00d400fc6849cce6d01bba514fa49e602"}
Nov 23 06:57:55 crc kubenswrapper[4681]: I1123 06:57:55.629020 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9ns6l"
Nov 23 06:57:55 crc kubenswrapper[4681]: I1123 06:57:55.629044 4681 scope.go:117] "RemoveContainer" containerID="537433c72bbbd44217b9899e24938bf175fa1d4cada3ddfe20f271b36eba6df1"
Nov 23 06:57:55 crc kubenswrapper[4681]: I1123 06:57:55.631132 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-rmth5" event={"ID":"037378b8-4f2b-4513-b4b3-c7f97aae12a9","Type":"ContainerStarted","Data":"8e843895be7bda8924acb39fde68d5b3b99e083da428814d6455eb5c22547828"}
Nov 23 06:57:55 crc kubenswrapper[4681]: I1123 06:57:55.656563 4681 scope.go:117] "RemoveContainer" containerID="0f6045961fd7a79a2f8b4d80df32015e8edf71b5a2e8bff24fedf33cd91d210e"
Nov 23 06:57:55 crc kubenswrapper[4681]: I1123 06:57:55.660597 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9ns6l"]
Nov 23 06:57:55 crc kubenswrapper[4681]: I1123 06:57:55.667094 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9ns6l"]
Nov 23 06:57:55 crc kubenswrapper[4681]: I1123 06:57:55.680071 4681 scope.go:117] "RemoveContainer" containerID="eababc0403b9ea48068195f13e4a54f50bbac1012477be6c1757da52e6103a73"
Nov 23 06:57:55 crc kubenswrapper[4681]: I1123 06:57:55.727233 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Nov 23 06:57:56 crc kubenswrapper[4681]: I1123 06:57:56.475880 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-c109-account-create-wnfks"]
Nov 23 06:57:56 crc kubenswrapper[4681]: E1123 06:57:56.476542 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71234289-c188-4210-959c-41708f14cc66" containerName="extract-utilities"
Nov 23 06:57:56 crc kubenswrapper[4681]: I1123 06:57:56.476560 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="71234289-c188-4210-959c-41708f14cc66" containerName="extract-utilities"
Nov 23 06:57:56 crc kubenswrapper[4681]: E1123 06:57:56.476589 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71234289-c188-4210-959c-41708f14cc66" containerName="extract-content"
Nov 23 06:57:56 crc kubenswrapper[4681]: I1123 06:57:56.476596 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="71234289-c188-4210-959c-41708f14cc66" containerName="extract-content"
Nov 23 06:57:56 crc kubenswrapper[4681]: E1123 06:57:56.476606 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71234289-c188-4210-959c-41708f14cc66" containerName="registry-server"
Nov 23 06:57:56 crc kubenswrapper[4681]: I1123 06:57:56.476613 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="71234289-c188-4210-959c-41708f14cc66" containerName="registry-server"
Nov 23 06:57:56 crc kubenswrapper[4681]: I1123 06:57:56.476777 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="71234289-c188-4210-959c-41708f14cc66" containerName="registry-server"
Nov 23 06:57:56 crc kubenswrapper[4681]: I1123 06:57:56.477383 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c109-account-create-wnfks"
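
Connecting the two threads above: swift-storage-0 cannot mount etc-swift because the projected ConfigMap swift-ring-files does not exist yet, while the swift-ring-rebalance-rmth5 job whose container just started is, presumably, what will publish those ring files; once the ConfigMap appears, the next scheduled mount retry should succeed. A hedged client-go sketch for checking that ConfigMap from outside the cluster (the namespace and name come straight from these logs; the program itself is illustrative):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: run with a local kubeconfig (~/.kube/config).
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // The same lookup kubelet's projected-volume code keeps failing:
        // ConfigMap "swift-ring-files" in namespace "openstack".
        cm, err := cs.CoreV1().ConfigMaps("openstack").Get(context.TODO(), "swift-ring-files", metav1.GetOptions{})
        if err != nil {
            fmt.Println("still missing:", err)
            return
        }
        fmt.Printf("present with %d keys; etc-swift should mount on the next retry\n", len(cm.Data))
    }
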
Nov 23 06:57:56 crc kubenswrapper[4681]: I1123 06:57:56.479593 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Nov 23 06:57:56 crc kubenswrapper[4681]: I1123 06:57:56.490195 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c109-account-create-wnfks"]
Nov 23 06:57:56 crc kubenswrapper[4681]: I1123 06:57:56.536070 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-gbcw6"]
Nov 23 06:57:56 crc kubenswrapper[4681]: I1123 06:57:56.537650 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-gbcw6"
Nov 23 06:57:56 crc kubenswrapper[4681]: I1123 06:57:56.542694 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-gbcw6"]
Nov 23 06:57:56 crc kubenswrapper[4681]: I1123 06:57:56.563935 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kl4t\" (UniqueName: \"kubernetes.io/projected/763112b5-b200-4987-8b3a-e9b9fa181621-kube-api-access-5kl4t\") pod \"keystone-c109-account-create-wnfks\" (UID: \"763112b5-b200-4987-8b3a-e9b9fa181621\") " pod="openstack/keystone-c109-account-create-wnfks"
Nov 23 06:57:56 crc kubenswrapper[4681]: I1123 06:57:56.563969 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/763112b5-b200-4987-8b3a-e9b9fa181621-operator-scripts\") pod \"keystone-c109-account-create-wnfks\" (UID: \"763112b5-b200-4987-8b3a-e9b9fa181621\") " pod="openstack/keystone-c109-account-create-wnfks"
Nov 23 06:57:56 crc kubenswrapper[4681]: I1123 06:57:56.665995 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kl4t\" (UniqueName: \"kubernetes.io/projected/763112b5-b200-4987-8b3a-e9b9fa181621-kube-api-access-5kl4t\") pod \"keystone-c109-account-create-wnfks\" (UID: \"763112b5-b200-4987-8b3a-e9b9fa181621\") " pod="openstack/keystone-c109-account-create-wnfks"
Nov 23 06:57:56 crc kubenswrapper[4681]: I1123 06:57:56.666060 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/763112b5-b200-4987-8b3a-e9b9fa181621-operator-scripts\") pod \"keystone-c109-account-create-wnfks\" (UID: \"763112b5-b200-4987-8b3a-e9b9fa181621\") " pod="openstack/keystone-c109-account-create-wnfks"
Nov 23 06:57:56 crc kubenswrapper[4681]: I1123 06:57:56.666092 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34af6366-6fcc-451b-b6fc-72eb0af39eb1-operator-scripts\") pod \"keystone-db-create-gbcw6\" (UID: \"34af6366-6fcc-451b-b6fc-72eb0af39eb1\") " pod="openstack/keystone-db-create-gbcw6"
Nov 23 06:57:56 crc kubenswrapper[4681]: I1123 06:57:56.666228 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc4pm\" (UniqueName: \"kubernetes.io/projected/34af6366-6fcc-451b-b6fc-72eb0af39eb1-kube-api-access-cc4pm\") pod \"keystone-db-create-gbcw6\" (UID: \"34af6366-6fcc-451b-b6fc-72eb0af39eb1\") " pod="openstack/keystone-db-create-gbcw6"
Nov 23 06:57:56 crc kubenswrapper[4681]: I1123 06:57:56.666772 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/763112b5-b200-4987-8b3a-e9b9fa181621-operator-scripts\") pod \"keystone-c109-account-create-wnfks\" (UID: \"763112b5-b200-4987-8b3a-e9b9fa181621\") " pod="openstack/keystone-c109-account-create-wnfks"
Nov 23 06:57:56 crc kubenswrapper[4681]: I1123 06:57:56.682870 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kl4t\" (UniqueName: \"kubernetes.io/projected/763112b5-b200-4987-8b3a-e9b9fa181621-kube-api-access-5kl4t\") pod \"keystone-c109-account-create-wnfks\" (UID: \"763112b5-b200-4987-8b3a-e9b9fa181621\") " pod="openstack/keystone-c109-account-create-wnfks"
Nov 23 06:57:56 crc kubenswrapper[4681]: I1123 06:57:56.768122 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34af6366-6fcc-451b-b6fc-72eb0af39eb1-operator-scripts\") pod \"keystone-db-create-gbcw6\" (UID: \"34af6366-6fcc-451b-b6fc-72eb0af39eb1\") " pod="openstack/keystone-db-create-gbcw6"
Nov 23 06:57:56 crc kubenswrapper[4681]: I1123 06:57:56.768496 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc4pm\" (UniqueName: \"kubernetes.io/projected/34af6366-6fcc-451b-b6fc-72eb0af39eb1-kube-api-access-cc4pm\") pod \"keystone-db-create-gbcw6\" (UID: \"34af6366-6fcc-451b-b6fc-72eb0af39eb1\") " pod="openstack/keystone-db-create-gbcw6"
Nov 23 06:57:56 crc kubenswrapper[4681]: I1123 06:57:56.770233 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34af6366-6fcc-451b-b6fc-72eb0af39eb1-operator-scripts\") pod \"keystone-db-create-gbcw6\" (UID: \"34af6366-6fcc-451b-b6fc-72eb0af39eb1\") " pod="openstack/keystone-db-create-gbcw6"
Nov 23 06:57:56 crc kubenswrapper[4681]: I1123 06:57:56.786362 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc4pm\" (UniqueName: \"kubernetes.io/projected/34af6366-6fcc-451b-b6fc-72eb0af39eb1-kube-api-access-cc4pm\") pod \"keystone-db-create-gbcw6\" (UID: \"34af6366-6fcc-451b-b6fc-72eb0af39eb1\") " pod="openstack/keystone-db-create-gbcw6"
Nov 23 06:57:56 crc kubenswrapper[4681]: I1123 06:57:56.792405 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c109-account-create-wnfks"
Nov 23 06:57:56 crc kubenswrapper[4681]: I1123 06:57:56.834925 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-5c8nq"]
Nov 23 06:57:56 crc kubenswrapper[4681]: I1123 06:57:56.836236 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5c8nq"
Nov 23 06:57:56 crc kubenswrapper[4681]: I1123 06:57:56.851003 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-gbcw6"
Nov 23 06:57:56 crc kubenswrapper[4681]: I1123 06:57:56.854575 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-5c8nq"]
Nov 23 06:57:56 crc kubenswrapper[4681]: I1123 06:57:56.949321 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-d5d6-account-create-sb6mm"]
Nov 23 06:57:56 crc kubenswrapper[4681]: I1123 06:57:56.950711 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d5d6-account-create-sb6mm"
Nov 23 06:57:56 crc kubenswrapper[4681]: I1123 06:57:56.954840 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Nov 23 06:57:56 crc kubenswrapper[4681]: I1123 06:57:56.963159 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d5d6-account-create-sb6mm"]
Nov 23 06:57:56 crc kubenswrapper[4681]: I1123 06:57:56.977988 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q4gb\" (UniqueName: \"kubernetes.io/projected/0da07e75-f9da-4b3e-8941-3aac63809525-kube-api-access-5q4gb\") pod \"placement-db-create-5c8nq\" (UID: \"0da07e75-f9da-4b3e-8941-3aac63809525\") " pod="openstack/placement-db-create-5c8nq"
Nov 23 06:57:56 crc kubenswrapper[4681]: I1123 06:57:56.978254 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0da07e75-f9da-4b3e-8941-3aac63809525-operator-scripts\") pod \"placement-db-create-5c8nq\" (UID: \"0da07e75-f9da-4b3e-8941-3aac63809525\") " pod="openstack/placement-db-create-5c8nq"
Nov 23 06:57:57 crc kubenswrapper[4681]: I1123 06:57:57.080800 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q4gb\" (UniqueName: \"kubernetes.io/projected/0da07e75-f9da-4b3e-8941-3aac63809525-kube-api-access-5q4gb\") pod \"placement-db-create-5c8nq\" (UID: \"0da07e75-f9da-4b3e-8941-3aac63809525\") " pod="openstack/placement-db-create-5c8nq"
Nov 23 06:57:57 crc kubenswrapper[4681]: I1123 06:57:57.081352 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7rsp\" (UniqueName: \"kubernetes.io/projected/14c5884e-2bbd-45f5-9363-6f504638a689-kube-api-access-w7rsp\") pod \"placement-d5d6-account-create-sb6mm\" (UID: \"14c5884e-2bbd-45f5-9363-6f504638a689\") " pod="openstack/placement-d5d6-account-create-sb6mm"
Nov 23 06:57:57 crc kubenswrapper[4681]: I1123 06:57:57.081403 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14c5884e-2bbd-45f5-9363-6f504638a689-operator-scripts\") pod \"placement-d5d6-account-create-sb6mm\" (UID: \"14c5884e-2bbd-45f5-9363-6f504638a689\") " pod="openstack/placement-d5d6-account-create-sb6mm"
Nov 23 06:57:57 crc kubenswrapper[4681]: I1123 06:57:57.081533 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0da07e75-f9da-4b3e-8941-3aac63809525-operator-scripts\") pod \"placement-db-create-5c8nq\" (UID: \"0da07e75-f9da-4b3e-8941-3aac63809525\") " pod="openstack/placement-db-create-5c8nq"
Nov 23 06:57:57 crc kubenswrapper[4681]: I1123 06:57:57.082412 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0da07e75-f9da-4b3e-8941-3aac63809525-operator-scripts\") pod \"placement-db-create-5c8nq\" (UID: \"0da07e75-f9da-4b3e-8941-3aac63809525\") " pod="openstack/placement-db-create-5c8nq"
Nov 23 06:57:57 crc kubenswrapper[4681]: I1123 06:57:57.102624 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q4gb\" (UniqueName: \"kubernetes.io/projected/0da07e75-f9da-4b3e-8941-3aac63809525-kube-api-access-5q4gb\") pod \"placement-db-create-5c8nq\" (UID: \"0da07e75-f9da-4b3e-8941-3aac63809525\") " pod="openstack/placement-db-create-5c8nq"
Nov 23 06:57:57 crc kubenswrapper[4681]: I1123 06:57:57.181756 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5c8nq"
Nov 23 06:57:57 crc kubenswrapper[4681]: I1123 06:57:57.184103 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7rsp\" (UniqueName: \"kubernetes.io/projected/14c5884e-2bbd-45f5-9363-6f504638a689-kube-api-access-w7rsp\") pod \"placement-d5d6-account-create-sb6mm\" (UID: \"14c5884e-2bbd-45f5-9363-6f504638a689\") " pod="openstack/placement-d5d6-account-create-sb6mm"
Nov 23 06:57:57 crc kubenswrapper[4681]: I1123 06:57:57.184153 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14c5884e-2bbd-45f5-9363-6f504638a689-operator-scripts\") pod \"placement-d5d6-account-create-sb6mm\" (UID: \"14c5884e-2bbd-45f5-9363-6f504638a689\") " pod="openstack/placement-d5d6-account-create-sb6mm"
Nov 23 06:57:57 crc kubenswrapper[4681]: I1123 06:57:57.185361 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14c5884e-2bbd-45f5-9363-6f504638a689-operator-scripts\") pod \"placement-d5d6-account-create-sb6mm\" (UID: \"14c5884e-2bbd-45f5-9363-6f504638a689\") " pod="openstack/placement-d5d6-account-create-sb6mm"
Nov 23 06:57:57 crc kubenswrapper[4681]: I1123 06:57:57.197905 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-gbcw6"]
Nov 23 06:57:57 crc kubenswrapper[4681]: I1123 06:57:57.204336 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7rsp\" (UniqueName: \"kubernetes.io/projected/14c5884e-2bbd-45f5-9363-6f504638a689-kube-api-access-w7rsp\") pod \"placement-d5d6-account-create-sb6mm\" (UID: \"14c5884e-2bbd-45f5-9363-6f504638a689\") " pod="openstack/placement-d5d6-account-create-sb6mm"
Nov 23 06:57:57 crc kubenswrapper[4681]: W1123 06:57:57.216109 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34af6366_6fcc_451b_b6fc_72eb0af39eb1.slice/crio-bfd8488c2381ba9e45fb020225163aa0cb0c1effb1d32664fd5b62b069f3d2fa WatchSource:0}: Error finding container bfd8488c2381ba9e45fb020225163aa0cb0c1effb1d32664fd5b62b069f3d2fa: Status 404 returned error can't find the container with id bfd8488c2381ba9e45fb020225163aa0cb0c1effb1d32664fd5b62b069f3d2fa
Nov 23 06:57:57 crc kubenswrapper[4681]: I1123 06:57:57.264285 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71234289-c188-4210-959c-41708f14cc66" path="/var/lib/kubelet/pods/71234289-c188-4210-959c-41708f14cc66/volumes"
Nov 23 06:57:57 crc kubenswrapper[4681]: I1123 06:57:57.288779 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c109-account-create-wnfks"]
Nov 23 06:57:57 crc kubenswrapper[4681]: W1123 06:57:57.295369 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod763112b5_b200_4987_8b3a_e9b9fa181621.slice/crio-b1b5b05fd102b7dac164c44fcda2fa26ad070942cd20282a38a1b0f8eaeccf51 WatchSource:0}: Error finding container b1b5b05fd102b7dac164c44fcda2fa26ad070942cd20282a38a1b0f8eaeccf51: Status 404 returned error can't find the container with id b1b5b05fd102b7dac164c44fcda2fa26ad070942cd20282a38a1b0f8eaeccf51
Nov 23 06:57:57 crc kubenswrapper[4681]: I1123 06:57:57.315588 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d5d6-account-create-sb6mm"
Nov 23 06:57:57 crc kubenswrapper[4681]: I1123 06:57:57.668985 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-5c8nq"]
Nov 23 06:57:57 crc kubenswrapper[4681]: I1123 06:57:57.716399 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c109-account-create-wnfks" event={"ID":"763112b5-b200-4987-8b3a-e9b9fa181621","Type":"ContainerStarted","Data":"08b4e9d8a59d86e7879502f0579478e31e10fd63b9d8c7f0526c5c3feeeb58fc"}
Nov 23 06:57:57 crc kubenswrapper[4681]: I1123 06:57:57.716471 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c109-account-create-wnfks" event={"ID":"763112b5-b200-4987-8b3a-e9b9fa181621","Type":"ContainerStarted","Data":"b1b5b05fd102b7dac164c44fcda2fa26ad070942cd20282a38a1b0f8eaeccf51"}
Nov 23 06:57:57 crc kubenswrapper[4681]: I1123 06:57:57.718438 4681 generic.go:334] "Generic (PLEG): container finished" podID="34af6366-6fcc-451b-b6fc-72eb0af39eb1" containerID="72bb3843124270b5be3e8addc8c2749529cdc6f137a1a2c0fc6a40c23e8688e9" exitCode=0
Nov 23 06:57:57 crc kubenswrapper[4681]: I1123 06:57:57.718487 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-gbcw6" event={"ID":"34af6366-6fcc-451b-b6fc-72eb0af39eb1","Type":"ContainerDied","Data":"72bb3843124270b5be3e8addc8c2749529cdc6f137a1a2c0fc6a40c23e8688e9"}
Nov 23 06:57:57 crc kubenswrapper[4681]: I1123 06:57:57.718506 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-gbcw6" event={"ID":"34af6366-6fcc-451b-b6fc-72eb0af39eb1","Type":"ContainerStarted","Data":"bfd8488c2381ba9e45fb020225163aa0cb0c1effb1d32664fd5b62b069f3d2fa"}
Nov 23 06:57:57 crc kubenswrapper[4681]: I1123 06:57:57.736220 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-c109-account-create-wnfks" podStartSLOduration=1.7361878389999998 podStartE2EDuration="1.736187839s" podCreationTimestamp="2025-11-23 06:57:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:57:57.73157003 +0000 UTC m=+814.801079267" watchObservedRunningTime="2025-11-23 06:57:57.736187839 +0000 UTC m=+814.805697076"
Nov 23 06:57:57 crc kubenswrapper[4681]: I1123 06:57:57.810290 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3-etc-swift\") pod \"swift-storage-0\" (UID: \"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3\") " pod="openstack/swift-storage-0"
Nov 23 06:57:57 crc kubenswrapper[4681]: E1123 06:57:57.811357 4681 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 23 06:57:57 crc kubenswrapper[4681]: E1123 06:57:57.811387 4681 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 23 06:57:57 crc kubenswrapper[4681]: E1123 06:57:57.811445 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3-etc-swift podName:a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3 nodeName:}" failed.
No retries permitted until 2025-11-23 06:58:05.811426122 +0000 UTC m=+822.880935358 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3-etc-swift") pod "swift-storage-0" (UID: "a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3") : configmap "swift-ring-files" not found Nov 23 06:57:57 crc kubenswrapper[4681]: I1123 06:57:57.821112 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d5d6-account-create-sb6mm"] Nov 23 06:57:58 crc kubenswrapper[4681]: I1123 06:57:58.730093 4681 generic.go:334] "Generic (PLEG): container finished" podID="763112b5-b200-4987-8b3a-e9b9fa181621" containerID="08b4e9d8a59d86e7879502f0579478e31e10fd63b9d8c7f0526c5c3feeeb58fc" exitCode=0 Nov 23 06:57:58 crc kubenswrapper[4681]: I1123 06:57:58.730554 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c109-account-create-wnfks" event={"ID":"763112b5-b200-4987-8b3a-e9b9fa181621","Type":"ContainerDied","Data":"08b4e9d8a59d86e7879502f0579478e31e10fd63b9d8c7f0526c5c3feeeb58fc"} Nov 23 06:57:58 crc kubenswrapper[4681]: I1123 06:57:58.738770 4681 generic.go:334] "Generic (PLEG): container finished" podID="14c5884e-2bbd-45f5-9363-6f504638a689" containerID="07735bb8a267dd15321e42a2d39df29fb6e4ff1e884ea5f990453ba418c95609" exitCode=0 Nov 23 06:57:58 crc kubenswrapper[4681]: I1123 06:57:58.738872 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d5d6-account-create-sb6mm" event={"ID":"14c5884e-2bbd-45f5-9363-6f504638a689","Type":"ContainerDied","Data":"07735bb8a267dd15321e42a2d39df29fb6e4ff1e884ea5f990453ba418c95609"} Nov 23 06:57:58 crc kubenswrapper[4681]: I1123 06:57:58.738959 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d5d6-account-create-sb6mm" event={"ID":"14c5884e-2bbd-45f5-9363-6f504638a689","Type":"ContainerStarted","Data":"7886b2dcfcebe6fbec1a63e2f7f559506d10610fd4f271f6390ac791b4b11315"} Nov 23 06:57:58 crc kubenswrapper[4681]: I1123 06:57:58.740622 4681 generic.go:334] "Generic (PLEG): container finished" podID="0da07e75-f9da-4b3e-8941-3aac63809525" containerID="659514459c9af17082ca87133002fc6715c96fea9e7bcb8777dc16582edc712c" exitCode=0 Nov 23 06:57:58 crc kubenswrapper[4681]: I1123 06:57:58.740885 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5c8nq" event={"ID":"0da07e75-f9da-4b3e-8941-3aac63809525","Type":"ContainerDied","Data":"659514459c9af17082ca87133002fc6715c96fea9e7bcb8777dc16582edc712c"} Nov 23 06:57:58 crc kubenswrapper[4681]: I1123 06:57:58.740926 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5c8nq" event={"ID":"0da07e75-f9da-4b3e-8941-3aac63809525","Type":"ContainerStarted","Data":"9f4475b7f212434ea4422b3d9933ae976ecf4fa5c62b9ed086ab0b5d45e05e57"} Nov 23 06:57:59 crc kubenswrapper[4681]: I1123 06:57:59.115154 4681 util.go:48] "No ready sandbox for pod can be found. 
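The etc-swift failure above is retried with exponential backoff: this attempt is deferred 8s (durationBeforeRetry 8s), and when the configmap is still missing at 06:58:05 the deferral doubles to 16s. A minimal sketch of a doubling-with-cap delay of that shape; the 500ms base and 2-minute cap are illustrative assumptions, not the kubelet's actual constants:

```go
package main

import (
	"fmt"
	"time"
)

// backoff models the per-volume retry delay visible in the log:
// each failed MountVolume.SetUp doubles the wait, up to a cap.
type backoff struct {
	delay time.Duration // next deferral
	max   time.Duration // cap on the deferral
}

func (b *backoff) next() time.Duration {
	d := b.delay
	b.delay *= 2
	if b.delay > b.max {
		b.delay = b.max
	}
	return d
}

func main() {
	b := &backoff{delay: 500 * time.Millisecond, max: 2 * time.Minute}
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: no retries permitted for %v\n", attempt, b.next())
	}
	// attempt 5 prints 8s and attempt 6 prints 16s, the two deferrals
	// logged here for the missing "swift-ring-files" configmap.
}
```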
Need to start a new one" pod="openstack/keystone-db-create-gbcw6" Nov 23 06:57:59 crc kubenswrapper[4681]: I1123 06:57:59.136653 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7dfd8c6765-5kmzt" Nov 23 06:57:59 crc kubenswrapper[4681]: I1123 06:57:59.215430 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84d4c64565-zpxxw"] Nov 23 06:57:59 crc kubenswrapper[4681]: I1123 06:57:59.215716 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84d4c64565-zpxxw" podUID="33139b07-8e6f-45bd-b1d3-e1c16ac57d43" containerName="dnsmasq-dns" containerID="cri-o://16ce953120ecbca9ab6ed03451f6b3c4b1e53109aa857b1e353c252bb15d9d0f" gracePeriod=10 Nov 23 06:57:59 crc kubenswrapper[4681]: I1123 06:57:59.267025 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34af6366-6fcc-451b-b6fc-72eb0af39eb1-operator-scripts\") pod \"34af6366-6fcc-451b-b6fc-72eb0af39eb1\" (UID: \"34af6366-6fcc-451b-b6fc-72eb0af39eb1\") " Nov 23 06:57:59 crc kubenswrapper[4681]: I1123 06:57:59.267075 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc4pm\" (UniqueName: \"kubernetes.io/projected/34af6366-6fcc-451b-b6fc-72eb0af39eb1-kube-api-access-cc4pm\") pod \"34af6366-6fcc-451b-b6fc-72eb0af39eb1\" (UID: \"34af6366-6fcc-451b-b6fc-72eb0af39eb1\") " Nov 23 06:57:59 crc kubenswrapper[4681]: I1123 06:57:59.267999 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34af6366-6fcc-451b-b6fc-72eb0af39eb1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "34af6366-6fcc-451b-b6fc-72eb0af39eb1" (UID: "34af6366-6fcc-451b-b6fc-72eb0af39eb1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:57:59 crc kubenswrapper[4681]: I1123 06:57:59.269205 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34af6366-6fcc-451b-b6fc-72eb0af39eb1-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:59 crc kubenswrapper[4681]: I1123 06:57:59.297804 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34af6366-6fcc-451b-b6fc-72eb0af39eb1-kube-api-access-cc4pm" (OuterVolumeSpecName: "kube-api-access-cc4pm") pod "34af6366-6fcc-451b-b6fc-72eb0af39eb1" (UID: "34af6366-6fcc-451b-b6fc-72eb0af39eb1"). InnerVolumeSpecName "kube-api-access-cc4pm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:57:59 crc kubenswrapper[4681]: I1123 06:57:59.371072 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc4pm\" (UniqueName: \"kubernetes.io/projected/34af6366-6fcc-451b-b6fc-72eb0af39eb1-kube-api-access-cc4pm\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:59 crc kubenswrapper[4681]: I1123 06:57:59.728138 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84d4c64565-zpxxw" Nov 23 06:57:59 crc kubenswrapper[4681]: I1123 06:57:59.764837 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-gbcw6" event={"ID":"34af6366-6fcc-451b-b6fc-72eb0af39eb1","Type":"ContainerDied","Data":"bfd8488c2381ba9e45fb020225163aa0cb0c1effb1d32664fd5b62b069f3d2fa"} Nov 23 06:57:59 crc kubenswrapper[4681]: I1123 06:57:59.764897 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfd8488c2381ba9e45fb020225163aa0cb0c1effb1d32664fd5b62b069f3d2fa" Nov 23 06:57:59 crc kubenswrapper[4681]: I1123 06:57:59.764988 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-gbcw6" Nov 23 06:57:59 crc kubenswrapper[4681]: I1123 06:57:59.769425 4681 generic.go:334] "Generic (PLEG): container finished" podID="33139b07-8e6f-45bd-b1d3-e1c16ac57d43" containerID="16ce953120ecbca9ab6ed03451f6b3c4b1e53109aa857b1e353c252bb15d9d0f" exitCode=0 Nov 23 06:57:59 crc kubenswrapper[4681]: I1123 06:57:59.769740 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84d4c64565-zpxxw" Nov 23 06:57:59 crc kubenswrapper[4681]: I1123 06:57:59.769995 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84d4c64565-zpxxw" event={"ID":"33139b07-8e6f-45bd-b1d3-e1c16ac57d43","Type":"ContainerDied","Data":"16ce953120ecbca9ab6ed03451f6b3c4b1e53109aa857b1e353c252bb15d9d0f"} Nov 23 06:57:59 crc kubenswrapper[4681]: I1123 06:57:59.770047 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84d4c64565-zpxxw" event={"ID":"33139b07-8e6f-45bd-b1d3-e1c16ac57d43","Type":"ContainerDied","Data":"965b9997a893460afe45e851cd436c1b0a2a83b1d7f0b3bb5b4e8b1490891535"} Nov 23 06:57:59 crc kubenswrapper[4681]: I1123 06:57:59.770070 4681 scope.go:117] "RemoveContainer" containerID="16ce953120ecbca9ab6ed03451f6b3c4b1e53109aa857b1e353c252bb15d9d0f" Nov 23 06:57:59 crc kubenswrapper[4681]: I1123 06:57:59.854145 4681 scope.go:117] "RemoveContainer" containerID="b75ed116e1565dd4e5907b099348f417ae9db5ac6925464ad86aefda9f2a0df0" Nov 23 06:57:59 crc kubenswrapper[4681]: I1123 06:57:59.881434 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33139b07-8e6f-45bd-b1d3-e1c16ac57d43-config\") pod \"33139b07-8e6f-45bd-b1d3-e1c16ac57d43\" (UID: \"33139b07-8e6f-45bd-b1d3-e1c16ac57d43\") " Nov 23 06:57:59 crc kubenswrapper[4681]: I1123 06:57:59.881816 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33139b07-8e6f-45bd-b1d3-e1c16ac57d43-ovsdbserver-nb\") pod \"33139b07-8e6f-45bd-b1d3-e1c16ac57d43\" (UID: \"33139b07-8e6f-45bd-b1d3-e1c16ac57d43\") " Nov 23 06:57:59 crc kubenswrapper[4681]: I1123 06:57:59.881976 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kf7g\" (UniqueName: \"kubernetes.io/projected/33139b07-8e6f-45bd-b1d3-e1c16ac57d43-kube-api-access-7kf7g\") pod \"33139b07-8e6f-45bd-b1d3-e1c16ac57d43\" (UID: \"33139b07-8e6f-45bd-b1d3-e1c16ac57d43\") " Nov 23 06:57:59 crc kubenswrapper[4681]: I1123 06:57:59.882043 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33139b07-8e6f-45bd-b1d3-e1c16ac57d43-dns-svc\") pod 
\"33139b07-8e6f-45bd-b1d3-e1c16ac57d43\" (UID: \"33139b07-8e6f-45bd-b1d3-e1c16ac57d43\") " Nov 23 06:57:59 crc kubenswrapper[4681]: I1123 06:57:59.899165 4681 scope.go:117] "RemoveContainer" containerID="16ce953120ecbca9ab6ed03451f6b3c4b1e53109aa857b1e353c252bb15d9d0f" Nov 23 06:57:59 crc kubenswrapper[4681]: I1123 06:57:59.900281 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33139b07-8e6f-45bd-b1d3-e1c16ac57d43-kube-api-access-7kf7g" (OuterVolumeSpecName: "kube-api-access-7kf7g") pod "33139b07-8e6f-45bd-b1d3-e1c16ac57d43" (UID: "33139b07-8e6f-45bd-b1d3-e1c16ac57d43"). InnerVolumeSpecName "kube-api-access-7kf7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:57:59 crc kubenswrapper[4681]: E1123 06:57:59.900389 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16ce953120ecbca9ab6ed03451f6b3c4b1e53109aa857b1e353c252bb15d9d0f\": container with ID starting with 16ce953120ecbca9ab6ed03451f6b3c4b1e53109aa857b1e353c252bb15d9d0f not found: ID does not exist" containerID="16ce953120ecbca9ab6ed03451f6b3c4b1e53109aa857b1e353c252bb15d9d0f" Nov 23 06:57:59 crc kubenswrapper[4681]: I1123 06:57:59.900418 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16ce953120ecbca9ab6ed03451f6b3c4b1e53109aa857b1e353c252bb15d9d0f"} err="failed to get container status \"16ce953120ecbca9ab6ed03451f6b3c4b1e53109aa857b1e353c252bb15d9d0f\": rpc error: code = NotFound desc = could not find container \"16ce953120ecbca9ab6ed03451f6b3c4b1e53109aa857b1e353c252bb15d9d0f\": container with ID starting with 16ce953120ecbca9ab6ed03451f6b3c4b1e53109aa857b1e353c252bb15d9d0f not found: ID does not exist" Nov 23 06:57:59 crc kubenswrapper[4681]: I1123 06:57:59.900471 4681 scope.go:117] "RemoveContainer" containerID="b75ed116e1565dd4e5907b099348f417ae9db5ac6925464ad86aefda9f2a0df0" Nov 23 06:57:59 crc kubenswrapper[4681]: E1123 06:57:59.902232 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b75ed116e1565dd4e5907b099348f417ae9db5ac6925464ad86aefda9f2a0df0\": container with ID starting with b75ed116e1565dd4e5907b099348f417ae9db5ac6925464ad86aefda9f2a0df0 not found: ID does not exist" containerID="b75ed116e1565dd4e5907b099348f417ae9db5ac6925464ad86aefda9f2a0df0" Nov 23 06:57:59 crc kubenswrapper[4681]: I1123 06:57:59.902267 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b75ed116e1565dd4e5907b099348f417ae9db5ac6925464ad86aefda9f2a0df0"} err="failed to get container status \"b75ed116e1565dd4e5907b099348f417ae9db5ac6925464ad86aefda9f2a0df0\": rpc error: code = NotFound desc = could not find container \"b75ed116e1565dd4e5907b099348f417ae9db5ac6925464ad86aefda9f2a0df0\": container with ID starting with b75ed116e1565dd4e5907b099348f417ae9db5ac6925464ad86aefda9f2a0df0 not found: ID does not exist" Nov 23 06:57:59 crc kubenswrapper[4681]: I1123 06:57:59.949562 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33139b07-8e6f-45bd-b1d3-e1c16ac57d43-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "33139b07-8e6f-45bd-b1d3-e1c16ac57d43" (UID: "33139b07-8e6f-45bd-b1d3-e1c16ac57d43"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:57:59 crc kubenswrapper[4681]: I1123 06:57:59.949710 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33139b07-8e6f-45bd-b1d3-e1c16ac57d43-config" (OuterVolumeSpecName: "config") pod "33139b07-8e6f-45bd-b1d3-e1c16ac57d43" (UID: "33139b07-8e6f-45bd-b1d3-e1c16ac57d43"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:57:59 crc kubenswrapper[4681]: I1123 06:57:59.957401 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33139b07-8e6f-45bd-b1d3-e1c16ac57d43-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "33139b07-8e6f-45bd-b1d3-e1c16ac57d43" (UID: "33139b07-8e6f-45bd-b1d3-e1c16ac57d43"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:57:59 crc kubenswrapper[4681]: I1123 06:57:59.984419 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33139b07-8e6f-45bd-b1d3-e1c16ac57d43-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:59 crc kubenswrapper[4681]: I1123 06:57:59.984450 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33139b07-8e6f-45bd-b1d3-e1c16ac57d43-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:59 crc kubenswrapper[4681]: I1123 06:57:59.984480 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kf7g\" (UniqueName: \"kubernetes.io/projected/33139b07-8e6f-45bd-b1d3-e1c16ac57d43-kube-api-access-7kf7g\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:59 crc kubenswrapper[4681]: I1123 06:57:59.984491 4681 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33139b07-8e6f-45bd-b1d3-e1c16ac57d43-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:59 crc kubenswrapper[4681]: I1123 06:57:59.996334 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 23 06:58:00 crc kubenswrapper[4681]: I1123 06:58:00.143010 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84d4c64565-zpxxw"] Nov 23 06:58:00 crc kubenswrapper[4681]: I1123 06:58:00.166034 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-5c8nq" Nov 23 06:58:00 crc kubenswrapper[4681]: I1123 06:58:00.200587 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84d4c64565-zpxxw"] Nov 23 06:58:00 crc kubenswrapper[4681]: I1123 06:58:00.297365 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5q4gb\" (UniqueName: \"kubernetes.io/projected/0da07e75-f9da-4b3e-8941-3aac63809525-kube-api-access-5q4gb\") pod \"0da07e75-f9da-4b3e-8941-3aac63809525\" (UID: \"0da07e75-f9da-4b3e-8941-3aac63809525\") " Nov 23 06:58:00 crc kubenswrapper[4681]: I1123 06:58:00.297501 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0da07e75-f9da-4b3e-8941-3aac63809525-operator-scripts\") pod \"0da07e75-f9da-4b3e-8941-3aac63809525\" (UID: \"0da07e75-f9da-4b3e-8941-3aac63809525\") " Nov 23 06:58:00 crc kubenswrapper[4681]: I1123 06:58:00.298679 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0da07e75-f9da-4b3e-8941-3aac63809525-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0da07e75-f9da-4b3e-8941-3aac63809525" (UID: "0da07e75-f9da-4b3e-8941-3aac63809525"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:58:00 crc kubenswrapper[4681]: I1123 06:58:00.301810 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0da07e75-f9da-4b3e-8941-3aac63809525-kube-api-access-5q4gb" (OuterVolumeSpecName: "kube-api-access-5q4gb") pod "0da07e75-f9da-4b3e-8941-3aac63809525" (UID: "0da07e75-f9da-4b3e-8941-3aac63809525"). InnerVolumeSpecName "kube-api-access-5q4gb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:58:00 crc kubenswrapper[4681]: I1123 06:58:00.361375 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d5d6-account-create-sb6mm" Nov 23 06:58:00 crc kubenswrapper[4681]: I1123 06:58:00.373017 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-c109-account-create-wnfks" Nov 23 06:58:00 crc kubenswrapper[4681]: I1123 06:58:00.403030 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5q4gb\" (UniqueName: \"kubernetes.io/projected/0da07e75-f9da-4b3e-8941-3aac63809525-kube-api-access-5q4gb\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:00 crc kubenswrapper[4681]: I1123 06:58:00.403059 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0da07e75-f9da-4b3e-8941-3aac63809525-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:00 crc kubenswrapper[4681]: I1123 06:58:00.503895 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14c5884e-2bbd-45f5-9363-6f504638a689-operator-scripts\") pod \"14c5884e-2bbd-45f5-9363-6f504638a689\" (UID: \"14c5884e-2bbd-45f5-9363-6f504638a689\") " Nov 23 06:58:00 crc kubenswrapper[4681]: I1123 06:58:00.504036 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kl4t\" (UniqueName: \"kubernetes.io/projected/763112b5-b200-4987-8b3a-e9b9fa181621-kube-api-access-5kl4t\") pod \"763112b5-b200-4987-8b3a-e9b9fa181621\" (UID: \"763112b5-b200-4987-8b3a-e9b9fa181621\") " Nov 23 06:58:00 crc kubenswrapper[4681]: I1123 06:58:00.504153 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/763112b5-b200-4987-8b3a-e9b9fa181621-operator-scripts\") pod \"763112b5-b200-4987-8b3a-e9b9fa181621\" (UID: \"763112b5-b200-4987-8b3a-e9b9fa181621\") " Nov 23 06:58:00 crc kubenswrapper[4681]: I1123 06:58:00.504289 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7rsp\" (UniqueName: \"kubernetes.io/projected/14c5884e-2bbd-45f5-9363-6f504638a689-kube-api-access-w7rsp\") pod \"14c5884e-2bbd-45f5-9363-6f504638a689\" (UID: \"14c5884e-2bbd-45f5-9363-6f504638a689\") " Nov 23 06:58:00 crc kubenswrapper[4681]: I1123 06:58:00.504483 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14c5884e-2bbd-45f5-9363-6f504638a689-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "14c5884e-2bbd-45f5-9363-6f504638a689" (UID: "14c5884e-2bbd-45f5-9363-6f504638a689"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:58:00 crc kubenswrapper[4681]: I1123 06:58:00.504734 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/763112b5-b200-4987-8b3a-e9b9fa181621-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "763112b5-b200-4987-8b3a-e9b9fa181621" (UID: "763112b5-b200-4987-8b3a-e9b9fa181621"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:58:00 crc kubenswrapper[4681]: I1123 06:58:00.504912 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/763112b5-b200-4987-8b3a-e9b9fa181621-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:00 crc kubenswrapper[4681]: I1123 06:58:00.504928 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14c5884e-2bbd-45f5-9363-6f504638a689-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:00 crc kubenswrapper[4681]: I1123 06:58:00.507610 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14c5884e-2bbd-45f5-9363-6f504638a689-kube-api-access-w7rsp" (OuterVolumeSpecName: "kube-api-access-w7rsp") pod "14c5884e-2bbd-45f5-9363-6f504638a689" (UID: "14c5884e-2bbd-45f5-9363-6f504638a689"). InnerVolumeSpecName "kube-api-access-w7rsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:58:00 crc kubenswrapper[4681]: I1123 06:58:00.511907 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/763112b5-b200-4987-8b3a-e9b9fa181621-kube-api-access-5kl4t" (OuterVolumeSpecName: "kube-api-access-5kl4t") pod "763112b5-b200-4987-8b3a-e9b9fa181621" (UID: "763112b5-b200-4987-8b3a-e9b9fa181621"). InnerVolumeSpecName "kube-api-access-5kl4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:58:00 crc kubenswrapper[4681]: I1123 06:58:00.606992 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7rsp\" (UniqueName: \"kubernetes.io/projected/14c5884e-2bbd-45f5-9363-6f504638a689-kube-api-access-w7rsp\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:00 crc kubenswrapper[4681]: I1123 06:58:00.607021 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kl4t\" (UniqueName: \"kubernetes.io/projected/763112b5-b200-4987-8b3a-e9b9fa181621-kube-api-access-5kl4t\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:00 crc kubenswrapper[4681]: I1123 06:58:00.781623 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c109-account-create-wnfks" Nov 23 06:58:00 crc kubenswrapper[4681]: I1123 06:58:00.781597 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c109-account-create-wnfks" event={"ID":"763112b5-b200-4987-8b3a-e9b9fa181621","Type":"ContainerDied","Data":"b1b5b05fd102b7dac164c44fcda2fa26ad070942cd20282a38a1b0f8eaeccf51"} Nov 23 06:58:00 crc kubenswrapper[4681]: I1123 06:58:00.781757 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1b5b05fd102b7dac164c44fcda2fa26ad070942cd20282a38a1b0f8eaeccf51" Nov 23 06:58:00 crc kubenswrapper[4681]: I1123 06:58:00.786649 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d5d6-account-create-sb6mm" event={"ID":"14c5884e-2bbd-45f5-9363-6f504638a689","Type":"ContainerDied","Data":"7886b2dcfcebe6fbec1a63e2f7f559506d10610fd4f271f6390ac791b4b11315"} Nov 23 06:58:00 crc kubenswrapper[4681]: I1123 06:58:00.786712 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7886b2dcfcebe6fbec1a63e2f7f559506d10610fd4f271f6390ac791b4b11315" Nov 23 06:58:00 crc kubenswrapper[4681]: I1123 06:58:00.786680 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d5d6-account-create-sb6mm" Nov 23 06:58:00 crc kubenswrapper[4681]: I1123 06:58:00.789200 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5c8nq" event={"ID":"0da07e75-f9da-4b3e-8941-3aac63809525","Type":"ContainerDied","Data":"9f4475b7f212434ea4422b3d9933ae976ecf4fa5c62b9ed086ab0b5d45e05e57"} Nov 23 06:58:00 crc kubenswrapper[4681]: I1123 06:58:00.789252 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5c8nq" Nov 23 06:58:00 crc kubenswrapper[4681]: I1123 06:58:00.789272 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f4475b7f212434ea4422b3d9933ae976ecf4fa5c62b9ed086ab0b5d45e05e57" Nov 23 06:58:01 crc kubenswrapper[4681]: I1123 06:58:01.261727 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33139b07-8e6f-45bd-b1d3-e1c16ac57d43" path="/var/lib/kubelet/pods/33139b07-8e6f-45bd-b1d3-e1c16ac57d43/volumes" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.162510 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-6jtms"] Nov 23 06:58:02 crc kubenswrapper[4681]: E1123 06:58:02.163657 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33139b07-8e6f-45bd-b1d3-e1c16ac57d43" containerName="dnsmasq-dns" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.163675 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="33139b07-8e6f-45bd-b1d3-e1c16ac57d43" containerName="dnsmasq-dns" Nov 23 06:58:02 crc kubenswrapper[4681]: E1123 06:58:02.163691 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34af6366-6fcc-451b-b6fc-72eb0af39eb1" containerName="mariadb-database-create" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.163696 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="34af6366-6fcc-451b-b6fc-72eb0af39eb1" containerName="mariadb-database-create" Nov 23 06:58:02 crc kubenswrapper[4681]: E1123 06:58:02.163738 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33139b07-8e6f-45bd-b1d3-e1c16ac57d43" containerName="init" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.163745 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="33139b07-8e6f-45bd-b1d3-e1c16ac57d43" containerName="init" Nov 23 06:58:02 crc kubenswrapper[4681]: E1123 06:58:02.163760 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0da07e75-f9da-4b3e-8941-3aac63809525" containerName="mariadb-database-create" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.163766 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="0da07e75-f9da-4b3e-8941-3aac63809525" containerName="mariadb-database-create" Nov 23 06:58:02 crc kubenswrapper[4681]: E1123 06:58:02.163779 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c5884e-2bbd-45f5-9363-6f504638a689" containerName="mariadb-account-create" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.163806 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c5884e-2bbd-45f5-9363-6f504638a689" containerName="mariadb-account-create" Nov 23 06:58:02 crc kubenswrapper[4681]: E1123 06:58:02.163824 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="763112b5-b200-4987-8b3a-e9b9fa181621" containerName="mariadb-account-create" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.163828 4681 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="763112b5-b200-4987-8b3a-e9b9fa181621" containerName="mariadb-account-create" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.164062 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="33139b07-8e6f-45bd-b1d3-e1c16ac57d43" containerName="dnsmasq-dns" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.164252 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="763112b5-b200-4987-8b3a-e9b9fa181621" containerName="mariadb-account-create" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.164260 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="0da07e75-f9da-4b3e-8941-3aac63809525" containerName="mariadb-database-create" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.164277 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="34af6366-6fcc-451b-b6fc-72eb0af39eb1" containerName="mariadb-database-create" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.164284 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="14c5884e-2bbd-45f5-9363-6f504638a689" containerName="mariadb-account-create" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.164998 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-6jtms" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.174631 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-6jtms"] Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.252812 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fedf9828-c3be-4342-ab30-d743456c894c-operator-scripts\") pod \"glance-db-create-6jtms\" (UID: \"fedf9828-c3be-4342-ab30-d743456c894c\") " pod="openstack/glance-db-create-6jtms" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.252959 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfjhg\" (UniqueName: \"kubernetes.io/projected/fedf9828-c3be-4342-ab30-d743456c894c-kube-api-access-gfjhg\") pod \"glance-db-create-6jtms\" (UID: \"fedf9828-c3be-4342-ab30-d743456c894c\") " pod="openstack/glance-db-create-6jtms" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.264794 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-ed2a-account-create-pnwh7"] Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.266086 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-ed2a-account-create-pnwh7" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.268039 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.279687 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ed2a-account-create-pnwh7"] Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.355108 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fedf9828-c3be-4342-ab30-d743456c894c-operator-scripts\") pod \"glance-db-create-6jtms\" (UID: \"fedf9828-c3be-4342-ab30-d743456c894c\") " pod="openstack/glance-db-create-6jtms" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.355207 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0759e7b7-255b-4d6a-9822-52d3967752a4-operator-scripts\") pod \"glance-ed2a-account-create-pnwh7\" (UID: \"0759e7b7-255b-4d6a-9822-52d3967752a4\") " pod="openstack/glance-ed2a-account-create-pnwh7" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.355245 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfjhg\" (UniqueName: \"kubernetes.io/projected/fedf9828-c3be-4342-ab30-d743456c894c-kube-api-access-gfjhg\") pod \"glance-db-create-6jtms\" (UID: \"fedf9828-c3be-4342-ab30-d743456c894c\") " pod="openstack/glance-db-create-6jtms" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.355370 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9j2t\" (UniqueName: \"kubernetes.io/projected/0759e7b7-255b-4d6a-9822-52d3967752a4-kube-api-access-w9j2t\") pod \"glance-ed2a-account-create-pnwh7\" (UID: \"0759e7b7-255b-4d6a-9822-52d3967752a4\") " pod="openstack/glance-ed2a-account-create-pnwh7" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.356970 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fedf9828-c3be-4342-ab30-d743456c894c-operator-scripts\") pod \"glance-db-create-6jtms\" (UID: \"fedf9828-c3be-4342-ab30-d743456c894c\") " pod="openstack/glance-db-create-6jtms" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.386383 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfjhg\" (UniqueName: \"kubernetes.io/projected/fedf9828-c3be-4342-ab30-d743456c894c-kube-api-access-gfjhg\") pod \"glance-db-create-6jtms\" (UID: \"fedf9828-c3be-4342-ab30-d743456c894c\") " pod="openstack/glance-db-create-6jtms" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.456952 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0759e7b7-255b-4d6a-9822-52d3967752a4-operator-scripts\") pod \"glance-ed2a-account-create-pnwh7\" (UID: \"0759e7b7-255b-4d6a-9822-52d3967752a4\") " pod="openstack/glance-ed2a-account-create-pnwh7" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.457123 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9j2t\" (UniqueName: \"kubernetes.io/projected/0759e7b7-255b-4d6a-9822-52d3967752a4-kube-api-access-w9j2t\") pod \"glance-ed2a-account-create-pnwh7\" (UID: \"0759e7b7-255b-4d6a-9822-52d3967752a4\") " 
pod="openstack/glance-ed2a-account-create-pnwh7" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.457753 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0759e7b7-255b-4d6a-9822-52d3967752a4-operator-scripts\") pod \"glance-ed2a-account-create-pnwh7\" (UID: \"0759e7b7-255b-4d6a-9822-52d3967752a4\") " pod="openstack/glance-ed2a-account-create-pnwh7" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.480068 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9j2t\" (UniqueName: \"kubernetes.io/projected/0759e7b7-255b-4d6a-9822-52d3967752a4-kube-api-access-w9j2t\") pod \"glance-ed2a-account-create-pnwh7\" (UID: \"0759e7b7-255b-4d6a-9822-52d3967752a4\") " pod="openstack/glance-ed2a-account-create-pnwh7" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.490756 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-6jtms" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.551151 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pbdxh"] Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.556749 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pbdxh" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.588005 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ed2a-account-create-pnwh7" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.598774 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pbdxh"] Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.660918 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ac86f95-c79b-40bf-82be-a4e91bd44539-utilities\") pod \"redhat-operators-pbdxh\" (UID: \"5ac86f95-c79b-40bf-82be-a4e91bd44539\") " pod="openshift-marketplace/redhat-operators-pbdxh" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.661119 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ac86f95-c79b-40bf-82be-a4e91bd44539-catalog-content\") pod \"redhat-operators-pbdxh\" (UID: \"5ac86f95-c79b-40bf-82be-a4e91bd44539\") " pod="openshift-marketplace/redhat-operators-pbdxh" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.661150 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7897\" (UniqueName: \"kubernetes.io/projected/5ac86f95-c79b-40bf-82be-a4e91bd44539-kube-api-access-g7897\") pod \"redhat-operators-pbdxh\" (UID: \"5ac86f95-c79b-40bf-82be-a4e91bd44539\") " pod="openshift-marketplace/redhat-operators-pbdxh" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.762120 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ac86f95-c79b-40bf-82be-a4e91bd44539-catalog-content\") pod \"redhat-operators-pbdxh\" (UID: \"5ac86f95-c79b-40bf-82be-a4e91bd44539\") " pod="openshift-marketplace/redhat-operators-pbdxh" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.762161 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7897\" (UniqueName: 
\"kubernetes.io/projected/5ac86f95-c79b-40bf-82be-a4e91bd44539-kube-api-access-g7897\") pod \"redhat-operators-pbdxh\" (UID: \"5ac86f95-c79b-40bf-82be-a4e91bd44539\") " pod="openshift-marketplace/redhat-operators-pbdxh" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.762213 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ac86f95-c79b-40bf-82be-a4e91bd44539-utilities\") pod \"redhat-operators-pbdxh\" (UID: \"5ac86f95-c79b-40bf-82be-a4e91bd44539\") " pod="openshift-marketplace/redhat-operators-pbdxh" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.762636 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ac86f95-c79b-40bf-82be-a4e91bd44539-utilities\") pod \"redhat-operators-pbdxh\" (UID: \"5ac86f95-c79b-40bf-82be-a4e91bd44539\") " pod="openshift-marketplace/redhat-operators-pbdxh" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.762858 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ac86f95-c79b-40bf-82be-a4e91bd44539-catalog-content\") pod \"redhat-operators-pbdxh\" (UID: \"5ac86f95-c79b-40bf-82be-a4e91bd44539\") " pod="openshift-marketplace/redhat-operators-pbdxh" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.783691 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7897\" (UniqueName: \"kubernetes.io/projected/5ac86f95-c79b-40bf-82be-a4e91bd44539-kube-api-access-g7897\") pod \"redhat-operators-pbdxh\" (UID: \"5ac86f95-c79b-40bf-82be-a4e91bd44539\") " pod="openshift-marketplace/redhat-operators-pbdxh" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.814168 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-rmth5" event={"ID":"037378b8-4f2b-4513-b4b3-c7f97aae12a9","Type":"ContainerStarted","Data":"ff61b7415840d2d63128a8539723378ba2f05eb0607864ba5e4cc0248b0e8b86"} Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.831270 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-rmth5" podStartSLOduration=1.9102799 podStartE2EDuration="9.831242297s" podCreationTimestamp="2025-11-23 06:57:53 +0000 UTC" firstStartedPulling="2025-11-23 06:57:54.61696998 +0000 UTC m=+811.686479218" lastFinishedPulling="2025-11-23 06:58:02.537932378 +0000 UTC m=+819.607441615" observedRunningTime="2025-11-23 06:58:02.829152368 +0000 UTC m=+819.898661605" watchObservedRunningTime="2025-11-23 06:58:02.831242297 +0000 UTC m=+819.900751534" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.944566 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pbdxh" Nov 23 06:58:02 crc kubenswrapper[4681]: I1123 06:58:02.969202 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-6jtms"] Nov 23 06:58:02 crc kubenswrapper[4681]: W1123 06:58:02.989147 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfedf9828_c3be_4342_ab30_d743456c894c.slice/crio-2dde81ed9e7e0bc46a82026b8d64b986feae4be6b7c3aab540cf22277d966ff9 WatchSource:0}: Error finding container 2dde81ed9e7e0bc46a82026b8d64b986feae4be6b7c3aab540cf22277d966ff9: Status 404 returned error can't find the container with id 2dde81ed9e7e0bc46a82026b8d64b986feae4be6b7c3aab540cf22277d966ff9 Nov 23 06:58:03 crc kubenswrapper[4681]: I1123 06:58:03.053349 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ed2a-account-create-pnwh7"] Nov 23 06:58:03 crc kubenswrapper[4681]: I1123 06:58:03.447701 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pbdxh"] Nov 23 06:58:03 crc kubenswrapper[4681]: I1123 06:58:03.849389 4681 generic.go:334] "Generic (PLEG): container finished" podID="fedf9828-c3be-4342-ab30-d743456c894c" containerID="c62a5c5225b917d346e522ff70487041f2519ef01081b144ee1c48c7162a32e2" exitCode=0 Nov 23 06:58:03 crc kubenswrapper[4681]: I1123 06:58:03.850197 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-6jtms" event={"ID":"fedf9828-c3be-4342-ab30-d743456c894c","Type":"ContainerDied","Data":"c62a5c5225b917d346e522ff70487041f2519ef01081b144ee1c48c7162a32e2"} Nov 23 06:58:03 crc kubenswrapper[4681]: I1123 06:58:03.850322 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-6jtms" event={"ID":"fedf9828-c3be-4342-ab30-d743456c894c","Type":"ContainerStarted","Data":"2dde81ed9e7e0bc46a82026b8d64b986feae4be6b7c3aab540cf22277d966ff9"} Nov 23 06:58:03 crc kubenswrapper[4681]: I1123 06:58:03.856582 4681 generic.go:334] "Generic (PLEG): container finished" podID="5ac86f95-c79b-40bf-82be-a4e91bd44539" containerID="49b564b6bc98927767f15d273197f29da5e25c714f62256f12949c6452a53590" exitCode=0 Nov 23 06:58:03 crc kubenswrapper[4681]: I1123 06:58:03.857043 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pbdxh" event={"ID":"5ac86f95-c79b-40bf-82be-a4e91bd44539","Type":"ContainerDied","Data":"49b564b6bc98927767f15d273197f29da5e25c714f62256f12949c6452a53590"} Nov 23 06:58:03 crc kubenswrapper[4681]: I1123 06:58:03.857100 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pbdxh" event={"ID":"5ac86f95-c79b-40bf-82be-a4e91bd44539","Type":"ContainerStarted","Data":"38f4d1588b8d96fcc790380799bd9b9c885a88dcf35423d11ac2082f68d03096"} Nov 23 06:58:03 crc kubenswrapper[4681]: I1123 06:58:03.868583 4681 generic.go:334] "Generic (PLEG): container finished" podID="0759e7b7-255b-4d6a-9822-52d3967752a4" containerID="c61d31dd6eb69493d91d41d6f74ae19b9dbeed22f7778fba0bfaa161e7de26a9" exitCode=0 Nov 23 06:58:03 crc kubenswrapper[4681]: I1123 06:58:03.869021 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ed2a-account-create-pnwh7" event={"ID":"0759e7b7-255b-4d6a-9822-52d3967752a4","Type":"ContainerDied","Data":"c61d31dd6eb69493d91d41d6f74ae19b9dbeed22f7778fba0bfaa161e7de26a9"} Nov 23 06:58:03 crc kubenswrapper[4681]: I1123 06:58:03.869065 4681 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/glance-ed2a-account-create-pnwh7" event={"ID":"0759e7b7-255b-4d6a-9822-52d3967752a4","Type":"ContainerStarted","Data":"05f293a528fe9eee5830648a0c09903a7cd21f3add18adf108a464fdac4cb32e"} Nov 23 06:58:04 crc kubenswrapper[4681]: I1123 06:58:04.202100 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bcv8z" Nov 23 06:58:04 crc kubenswrapper[4681]: I1123 06:58:04.880276 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pbdxh" event={"ID":"5ac86f95-c79b-40bf-82be-a4e91bd44539","Type":"ContainerStarted","Data":"e9aecf4d0c22afa9c63c9273f57bada72e4b4b0f1da5bf40cc617f86a6f355ab"} Nov 23 06:58:05 crc kubenswrapper[4681]: I1123 06:58:05.248205 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ed2a-account-create-pnwh7" Nov 23 06:58:05 crc kubenswrapper[4681]: I1123 06:58:05.322279 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-6jtms" Nov 23 06:58:05 crc kubenswrapper[4681]: I1123 06:58:05.413115 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0759e7b7-255b-4d6a-9822-52d3967752a4-operator-scripts\") pod \"0759e7b7-255b-4d6a-9822-52d3967752a4\" (UID: \"0759e7b7-255b-4d6a-9822-52d3967752a4\") " Nov 23 06:58:05 crc kubenswrapper[4681]: I1123 06:58:05.413203 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9j2t\" (UniqueName: \"kubernetes.io/projected/0759e7b7-255b-4d6a-9822-52d3967752a4-kube-api-access-w9j2t\") pod \"0759e7b7-255b-4d6a-9822-52d3967752a4\" (UID: \"0759e7b7-255b-4d6a-9822-52d3967752a4\") " Nov 23 06:58:05 crc kubenswrapper[4681]: I1123 06:58:05.414256 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0759e7b7-255b-4d6a-9822-52d3967752a4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0759e7b7-255b-4d6a-9822-52d3967752a4" (UID: "0759e7b7-255b-4d6a-9822-52d3967752a4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:58:05 crc kubenswrapper[4681]: I1123 06:58:05.422063 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0759e7b7-255b-4d6a-9822-52d3967752a4-kube-api-access-w9j2t" (OuterVolumeSpecName: "kube-api-access-w9j2t") pod "0759e7b7-255b-4d6a-9822-52d3967752a4" (UID: "0759e7b7-255b-4d6a-9822-52d3967752a4"). InnerVolumeSpecName "kube-api-access-w9j2t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:58:05 crc kubenswrapper[4681]: I1123 06:58:05.515324 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfjhg\" (UniqueName: \"kubernetes.io/projected/fedf9828-c3be-4342-ab30-d743456c894c-kube-api-access-gfjhg\") pod \"fedf9828-c3be-4342-ab30-d743456c894c\" (UID: \"fedf9828-c3be-4342-ab30-d743456c894c\") " Nov 23 06:58:05 crc kubenswrapper[4681]: I1123 06:58:05.515501 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fedf9828-c3be-4342-ab30-d743456c894c-operator-scripts\") pod \"fedf9828-c3be-4342-ab30-d743456c894c\" (UID: \"fedf9828-c3be-4342-ab30-d743456c894c\") " Nov 23 06:58:05 crc kubenswrapper[4681]: I1123 06:58:05.515911 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fedf9828-c3be-4342-ab30-d743456c894c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fedf9828-c3be-4342-ab30-d743456c894c" (UID: "fedf9828-c3be-4342-ab30-d743456c894c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:58:05 crc kubenswrapper[4681]: I1123 06:58:05.516895 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fedf9828-c3be-4342-ab30-d743456c894c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:05 crc kubenswrapper[4681]: I1123 06:58:05.516987 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0759e7b7-255b-4d6a-9822-52d3967752a4-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:05 crc kubenswrapper[4681]: I1123 06:58:05.517006 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9j2t\" (UniqueName: \"kubernetes.io/projected/0759e7b7-255b-4d6a-9822-52d3967752a4-kube-api-access-w9j2t\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:05 crc kubenswrapper[4681]: I1123 06:58:05.518891 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fedf9828-c3be-4342-ab30-d743456c894c-kube-api-access-gfjhg" (OuterVolumeSpecName: "kube-api-access-gfjhg") pod "fedf9828-c3be-4342-ab30-d743456c894c" (UID: "fedf9828-c3be-4342-ab30-d743456c894c"). InnerVolumeSpecName "kube-api-access-gfjhg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:58:05 crc kubenswrapper[4681]: I1123 06:58:05.618370 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfjhg\" (UniqueName: \"kubernetes.io/projected/fedf9828-c3be-4342-ab30-d743456c894c-kube-api-access-gfjhg\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:05 crc kubenswrapper[4681]: I1123 06:58:05.821780 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3-etc-swift\") pod \"swift-storage-0\" (UID: \"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3\") " pod="openstack/swift-storage-0" Nov 23 06:58:05 crc kubenswrapper[4681]: E1123 06:58:05.821883 4681 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 23 06:58:05 crc kubenswrapper[4681]: E1123 06:58:05.821918 4681 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 23 06:58:05 crc kubenswrapper[4681]: E1123 06:58:05.822000 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3-etc-swift podName:a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3 nodeName:}" failed. No retries permitted until 2025-11-23 06:58:21.821976808 +0000 UTC m=+838.891486045 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3-etc-swift") pod "swift-storage-0" (UID: "a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3") : configmap "swift-ring-files" not found Nov 23 06:58:05 crc kubenswrapper[4681]: I1123 06:58:05.892318 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ed2a-account-create-pnwh7" Nov 23 06:58:05 crc kubenswrapper[4681]: I1123 06:58:05.894259 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ed2a-account-create-pnwh7" event={"ID":"0759e7b7-255b-4d6a-9822-52d3967752a4","Type":"ContainerDied","Data":"05f293a528fe9eee5830648a0c09903a7cd21f3add18adf108a464fdac4cb32e"} Nov 23 06:58:05 crc kubenswrapper[4681]: I1123 06:58:05.894831 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05f293a528fe9eee5830648a0c09903a7cd21f3add18adf108a464fdac4cb32e" Nov 23 06:58:05 crc kubenswrapper[4681]: I1123 06:58:05.896256 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-6jtms" Nov 23 06:58:05 crc kubenswrapper[4681]: I1123 06:58:05.901445 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-6jtms" event={"ID":"fedf9828-c3be-4342-ab30-d743456c894c","Type":"ContainerDied","Data":"2dde81ed9e7e0bc46a82026b8d64b986feae4be6b7c3aab540cf22277d966ff9"} Nov 23 06:58:05 crc kubenswrapper[4681]: I1123 06:58:05.901678 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dde81ed9e7e0bc46a82026b8d64b986feae4be6b7c3aab540cf22277d966ff9" Nov 23 06:58:06 crc kubenswrapper[4681]: I1123 06:58:06.529420 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bcv8z"] Nov 23 06:58:06 crc kubenswrapper[4681]: I1123 06:58:06.529983 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bcv8z" podUID="e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1" containerName="registry-server" containerID="cri-o://7a1172c89432ff2564b12deae6bd04561468bbbf8561d39451bfefc8840bb2fc" gracePeriod=2 Nov 23 06:58:06 crc kubenswrapper[4681]: I1123 06:58:06.906571 4681 generic.go:334] "Generic (PLEG): container finished" podID="5ac86f95-c79b-40bf-82be-a4e91bd44539" containerID="e9aecf4d0c22afa9c63c9273f57bada72e4b4b0f1da5bf40cc617f86a6f355ab" exitCode=0 Nov 23 06:58:06 crc kubenswrapper[4681]: I1123 06:58:06.906649 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pbdxh" event={"ID":"5ac86f95-c79b-40bf-82be-a4e91bd44539","Type":"ContainerDied","Data":"e9aecf4d0c22afa9c63c9273f57bada72e4b4b0f1da5bf40cc617f86a6f355ab"} Nov 23 06:58:06 crc kubenswrapper[4681]: I1123 06:58:06.916298 4681 generic.go:334] "Generic (PLEG): container finished" podID="e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1" containerID="7a1172c89432ff2564b12deae6bd04561468bbbf8561d39451bfefc8840bb2fc" exitCode=0 Nov 23 06:58:06 crc kubenswrapper[4681]: I1123 06:58:06.916347 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bcv8z" event={"ID":"e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1","Type":"ContainerDied","Data":"7a1172c89432ff2564b12deae6bd04561468bbbf8561d39451bfefc8840bb2fc"} Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.408636 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-glbxl"] Nov 23 06:58:07 crc kubenswrapper[4681]: E1123 06:58:07.409025 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0759e7b7-255b-4d6a-9822-52d3967752a4" containerName="mariadb-account-create" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.409045 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="0759e7b7-255b-4d6a-9822-52d3967752a4" containerName="mariadb-account-create" Nov 23 06:58:07 crc kubenswrapper[4681]: E1123 06:58:07.409074 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fedf9828-c3be-4342-ab30-d743456c894c" containerName="mariadb-database-create" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.409080 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="fedf9828-c3be-4342-ab30-d743456c894c" containerName="mariadb-database-create" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.409285 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="0759e7b7-255b-4d6a-9822-52d3967752a4" containerName="mariadb-account-create" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.409311 4681 
memory_manager.go:354] "RemoveStaleState removing state" podUID="fedf9828-c3be-4342-ab30-d743456c894c" containerName="mariadb-database-create" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.409965 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-glbxl" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.412355 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.412631 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-r52k4" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.436496 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-glbxl"] Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.452707 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60d0f758-c36c-459d-90ac-326fbf9faa1c-config-data\") pod \"glance-db-sync-glbxl\" (UID: \"60d0f758-c36c-459d-90ac-326fbf9faa1c\") " pod="openstack/glance-db-sync-glbxl" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.452739 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-745k2\" (UniqueName: \"kubernetes.io/projected/60d0f758-c36c-459d-90ac-326fbf9faa1c-kube-api-access-745k2\") pod \"glance-db-sync-glbxl\" (UID: \"60d0f758-c36c-459d-90ac-326fbf9faa1c\") " pod="openstack/glance-db-sync-glbxl" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.452761 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d0f758-c36c-459d-90ac-326fbf9faa1c-combined-ca-bundle\") pod \"glance-db-sync-glbxl\" (UID: \"60d0f758-c36c-459d-90ac-326fbf9faa1c\") " pod="openstack/glance-db-sync-glbxl" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.452780 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/60d0f758-c36c-459d-90ac-326fbf9faa1c-db-sync-config-data\") pod \"glance-db-sync-glbxl\" (UID: \"60d0f758-c36c-459d-90ac-326fbf9faa1c\") " pod="openstack/glance-db-sync-glbxl" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.461895 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-n28qz" podUID="cfabf028-28e8-48fa-9536-a0e02622dc92" containerName="ovn-controller" probeResult="failure" output=< Nov 23 06:58:07 crc kubenswrapper[4681]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 23 06:58:07 crc kubenswrapper[4681]: > Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.478053 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bcv8z" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.481766 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-xhmlv" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.511112 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-xhmlv" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.556153 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1-utilities\") pod \"e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1\" (UID: \"e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1\") " Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.556305 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1-catalog-content\") pod \"e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1\" (UID: \"e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1\") " Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.556493 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjkwl\" (UniqueName: \"kubernetes.io/projected/e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1-kube-api-access-zjkwl\") pod \"e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1\" (UID: \"e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1\") " Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.556802 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1-utilities" (OuterVolumeSpecName: "utilities") pod "e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1" (UID: "e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.557306 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60d0f758-c36c-459d-90ac-326fbf9faa1c-config-data\") pod \"glance-db-sync-glbxl\" (UID: \"60d0f758-c36c-459d-90ac-326fbf9faa1c\") " pod="openstack/glance-db-sync-glbxl" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.557335 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d0f758-c36c-459d-90ac-326fbf9faa1c-combined-ca-bundle\") pod \"glance-db-sync-glbxl\" (UID: \"60d0f758-c36c-459d-90ac-326fbf9faa1c\") " pod="openstack/glance-db-sync-glbxl" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.557353 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-745k2\" (UniqueName: \"kubernetes.io/projected/60d0f758-c36c-459d-90ac-326fbf9faa1c-kube-api-access-745k2\") pod \"glance-db-sync-glbxl\" (UID: \"60d0f758-c36c-459d-90ac-326fbf9faa1c\") " pod="openstack/glance-db-sync-glbxl" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.557382 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/60d0f758-c36c-459d-90ac-326fbf9faa1c-db-sync-config-data\") pod \"glance-db-sync-glbxl\" (UID: \"60d0f758-c36c-459d-90ac-326fbf9faa1c\") " pod="openstack/glance-db-sync-glbxl" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.557542 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.567885 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60d0f758-c36c-459d-90ac-326fbf9faa1c-config-data\") pod \"glance-db-sync-glbxl\" (UID: \"60d0f758-c36c-459d-90ac-326fbf9faa1c\") " pod="openstack/glance-db-sync-glbxl" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.569491 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1-kube-api-access-zjkwl" (OuterVolumeSpecName: "kube-api-access-zjkwl") pod "e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1" (UID: "e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1"). InnerVolumeSpecName "kube-api-access-zjkwl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.574535 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/60d0f758-c36c-459d-90ac-326fbf9faa1c-db-sync-config-data\") pod \"glance-db-sync-glbxl\" (UID: \"60d0f758-c36c-459d-90ac-326fbf9faa1c\") " pod="openstack/glance-db-sync-glbxl" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.577277 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d0f758-c36c-459d-90ac-326fbf9faa1c-combined-ca-bundle\") pod \"glance-db-sync-glbxl\" (UID: \"60d0f758-c36c-459d-90ac-326fbf9faa1c\") " pod="openstack/glance-db-sync-glbxl" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.578358 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-745k2\" (UniqueName: \"kubernetes.io/projected/60d0f758-c36c-459d-90ac-326fbf9faa1c-kube-api-access-745k2\") pod \"glance-db-sync-glbxl\" (UID: \"60d0f758-c36c-459d-90ac-326fbf9faa1c\") " pod="openstack/glance-db-sync-glbxl" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.619652 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1" (UID: "e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.659177 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.659256 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjkwl\" (UniqueName: \"kubernetes.io/projected/e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1-kube-api-access-zjkwl\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.714377 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-n28qz-config-ggdpg"] Nov 23 06:58:07 crc kubenswrapper[4681]: E1123 06:58:07.714758 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1" containerName="extract-content" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.714777 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1" containerName="extract-content" Nov 23 06:58:07 crc kubenswrapper[4681]: E1123 06:58:07.714795 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1" containerName="extract-utilities" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.714802 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1" containerName="extract-utilities" Nov 23 06:58:07 crc kubenswrapper[4681]: E1123 06:58:07.714810 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1" containerName="registry-server" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.714816 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1" containerName="registry-server" Nov 23 06:58:07 crc 
kubenswrapper[4681]: I1123 06:58:07.714957 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1" containerName="registry-server" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.715505 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n28qz-config-ggdpg" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.717834 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.728366 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n28qz-config-ggdpg"] Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.762955 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9b8c5d8f-cd8a-449e-9854-9d31058a3e98-var-run-ovn\") pod \"ovn-controller-n28qz-config-ggdpg\" (UID: \"9b8c5d8f-cd8a-449e-9854-9d31058a3e98\") " pod="openstack/ovn-controller-n28qz-config-ggdpg" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.762999 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9b8c5d8f-cd8a-449e-9854-9d31058a3e98-additional-scripts\") pod \"ovn-controller-n28qz-config-ggdpg\" (UID: \"9b8c5d8f-cd8a-449e-9854-9d31058a3e98\") " pod="openstack/ovn-controller-n28qz-config-ggdpg" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.763155 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qmkd\" (UniqueName: \"kubernetes.io/projected/9b8c5d8f-cd8a-449e-9854-9d31058a3e98-kube-api-access-6qmkd\") pod \"ovn-controller-n28qz-config-ggdpg\" (UID: \"9b8c5d8f-cd8a-449e-9854-9d31058a3e98\") " pod="openstack/ovn-controller-n28qz-config-ggdpg" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.763282 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b8c5d8f-cd8a-449e-9854-9d31058a3e98-scripts\") pod \"ovn-controller-n28qz-config-ggdpg\" (UID: \"9b8c5d8f-cd8a-449e-9854-9d31058a3e98\") " pod="openstack/ovn-controller-n28qz-config-ggdpg" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.763498 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9b8c5d8f-cd8a-449e-9854-9d31058a3e98-var-run\") pod \"ovn-controller-n28qz-config-ggdpg\" (UID: \"9b8c5d8f-cd8a-449e-9854-9d31058a3e98\") " pod="openstack/ovn-controller-n28qz-config-ggdpg" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.763521 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9b8c5d8f-cd8a-449e-9854-9d31058a3e98-var-log-ovn\") pod \"ovn-controller-n28qz-config-ggdpg\" (UID: \"9b8c5d8f-cd8a-449e-9854-9d31058a3e98\") " pod="openstack/ovn-controller-n28qz-config-ggdpg" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.785558 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-glbxl" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.864528 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9b8c5d8f-cd8a-449e-9854-9d31058a3e98-var-run-ovn\") pod \"ovn-controller-n28qz-config-ggdpg\" (UID: \"9b8c5d8f-cd8a-449e-9854-9d31058a3e98\") " pod="openstack/ovn-controller-n28qz-config-ggdpg" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.864975 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9b8c5d8f-cd8a-449e-9854-9d31058a3e98-additional-scripts\") pod \"ovn-controller-n28qz-config-ggdpg\" (UID: \"9b8c5d8f-cd8a-449e-9854-9d31058a3e98\") " pod="openstack/ovn-controller-n28qz-config-ggdpg" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.864759 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9b8c5d8f-cd8a-449e-9854-9d31058a3e98-var-run-ovn\") pod \"ovn-controller-n28qz-config-ggdpg\" (UID: \"9b8c5d8f-cd8a-449e-9854-9d31058a3e98\") " pod="openstack/ovn-controller-n28qz-config-ggdpg" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.865124 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qmkd\" (UniqueName: \"kubernetes.io/projected/9b8c5d8f-cd8a-449e-9854-9d31058a3e98-kube-api-access-6qmkd\") pod \"ovn-controller-n28qz-config-ggdpg\" (UID: \"9b8c5d8f-cd8a-449e-9854-9d31058a3e98\") " pod="openstack/ovn-controller-n28qz-config-ggdpg" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.865228 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b8c5d8f-cd8a-449e-9854-9d31058a3e98-scripts\") pod \"ovn-controller-n28qz-config-ggdpg\" (UID: \"9b8c5d8f-cd8a-449e-9854-9d31058a3e98\") " pod="openstack/ovn-controller-n28qz-config-ggdpg" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.865334 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9b8c5d8f-cd8a-449e-9854-9d31058a3e98-var-run\") pod \"ovn-controller-n28qz-config-ggdpg\" (UID: \"9b8c5d8f-cd8a-449e-9854-9d31058a3e98\") " pod="openstack/ovn-controller-n28qz-config-ggdpg" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.865400 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9b8c5d8f-cd8a-449e-9854-9d31058a3e98-var-log-ovn\") pod \"ovn-controller-n28qz-config-ggdpg\" (UID: \"9b8c5d8f-cd8a-449e-9854-9d31058a3e98\") " pod="openstack/ovn-controller-n28qz-config-ggdpg" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.865570 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9b8c5d8f-cd8a-449e-9854-9d31058a3e98-var-log-ovn\") pod \"ovn-controller-n28qz-config-ggdpg\" (UID: \"9b8c5d8f-cd8a-449e-9854-9d31058a3e98\") " pod="openstack/ovn-controller-n28qz-config-ggdpg" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.865756 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9b8c5d8f-cd8a-449e-9854-9d31058a3e98-additional-scripts\") pod \"ovn-controller-n28qz-config-ggdpg\" (UID: \"9b8c5d8f-cd8a-449e-9854-9d31058a3e98\") 
" pod="openstack/ovn-controller-n28qz-config-ggdpg" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.865992 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9b8c5d8f-cd8a-449e-9854-9d31058a3e98-var-run\") pod \"ovn-controller-n28qz-config-ggdpg\" (UID: \"9b8c5d8f-cd8a-449e-9854-9d31058a3e98\") " pod="openstack/ovn-controller-n28qz-config-ggdpg" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.867836 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b8c5d8f-cd8a-449e-9854-9d31058a3e98-scripts\") pod \"ovn-controller-n28qz-config-ggdpg\" (UID: \"9b8c5d8f-cd8a-449e-9854-9d31058a3e98\") " pod="openstack/ovn-controller-n28qz-config-ggdpg" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.882592 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qmkd\" (UniqueName: \"kubernetes.io/projected/9b8c5d8f-cd8a-449e-9854-9d31058a3e98-kube-api-access-6qmkd\") pod \"ovn-controller-n28qz-config-ggdpg\" (UID: \"9b8c5d8f-cd8a-449e-9854-9d31058a3e98\") " pod="openstack/ovn-controller-n28qz-config-ggdpg" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.927690 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pbdxh" event={"ID":"5ac86f95-c79b-40bf-82be-a4e91bd44539","Type":"ContainerStarted","Data":"a16fa0424bdcfa241cbd77d23d9fdd240d2dc522417905caa1f0c169d583c30c"} Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.935449 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bcv8z" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.936103 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bcv8z" event={"ID":"e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1","Type":"ContainerDied","Data":"1957a0b6cb2c0bfa21bfb90abffe1d48af646b92a6c24e89428c87451674c32b"} Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.936136 4681 scope.go:117] "RemoveContainer" containerID="7a1172c89432ff2564b12deae6bd04561468bbbf8561d39451bfefc8840bb2fc" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.959541 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pbdxh" podStartSLOduration=2.282670976 podStartE2EDuration="5.959530347s" podCreationTimestamp="2025-11-23 06:58:02 +0000 UTC" firstStartedPulling="2025-11-23 06:58:03.860101305 +0000 UTC m=+820.929610542" lastFinishedPulling="2025-11-23 06:58:07.536960676 +0000 UTC m=+824.606469913" observedRunningTime="2025-11-23 06:58:07.958360173 +0000 UTC m=+825.027869411" watchObservedRunningTime="2025-11-23 06:58:07.959530347 +0000 UTC m=+825.029039584" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.970406 4681 scope.go:117] "RemoveContainer" containerID="dde4d8ca8ce198d42da5cc68a21a1cdb06490d8d78b4c6dab266e1e18c291b79" Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.987525 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bcv8z"] Nov 23 06:58:07 crc kubenswrapper[4681]: I1123 06:58:07.992516 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bcv8z"] Nov 23 06:58:08 crc kubenswrapper[4681]: I1123 06:58:08.016208 4681 scope.go:117] "RemoveContainer" 
containerID="aa15dfbaf721e2428c997f1833761d7bb288aeba5587596e8f63cd65aa44fcdf" Nov 23 06:58:08 crc kubenswrapper[4681]: I1123 06:58:08.034945 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n28qz-config-ggdpg" Nov 23 06:58:08 crc kubenswrapper[4681]: I1123 06:58:08.322200 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-glbxl"] Nov 23 06:58:08 crc kubenswrapper[4681]: W1123 06:58:08.329694 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60d0f758_c36c_459d_90ac_326fbf9faa1c.slice/crio-eb8fde526801484a45923f12e09de005b90608ff7b585b316d034ce9b2bcfb91 WatchSource:0}: Error finding container eb8fde526801484a45923f12e09de005b90608ff7b585b316d034ce9b2bcfb91: Status 404 returned error can't find the container with id eb8fde526801484a45923f12e09de005b90608ff7b585b316d034ce9b2bcfb91 Nov 23 06:58:08 crc kubenswrapper[4681]: I1123 06:58:08.461323 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n28qz-config-ggdpg"] Nov 23 06:58:08 crc kubenswrapper[4681]: W1123 06:58:08.470174 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b8c5d8f_cd8a_449e_9854_9d31058a3e98.slice/crio-be88b128b6222170bd1818161719773be4282d1762bb6bba99319b3b878c4459 WatchSource:0}: Error finding container be88b128b6222170bd1818161719773be4282d1762bb6bba99319b3b878c4459: Status 404 returned error can't find the container with id be88b128b6222170bd1818161719773be4282d1762bb6bba99319b3b878c4459 Nov 23 06:58:08 crc kubenswrapper[4681]: I1123 06:58:08.944860 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n28qz-config-ggdpg" event={"ID":"9b8c5d8f-cd8a-449e-9854-9d31058a3e98","Type":"ContainerStarted","Data":"de548e50d76397ca3e8d6a0475aa03ce4c0604a2cfb9926b39faba6906bdeaa9"} Nov 23 06:58:08 crc kubenswrapper[4681]: I1123 06:58:08.945195 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n28qz-config-ggdpg" event={"ID":"9b8c5d8f-cd8a-449e-9854-9d31058a3e98","Type":"ContainerStarted","Data":"be88b128b6222170bd1818161719773be4282d1762bb6bba99319b3b878c4459"} Nov 23 06:58:08 crc kubenswrapper[4681]: I1123 06:58:08.947762 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-glbxl" event={"ID":"60d0f758-c36c-459d-90ac-326fbf9faa1c","Type":"ContainerStarted","Data":"eb8fde526801484a45923f12e09de005b90608ff7b585b316d034ce9b2bcfb91"} Nov 23 06:58:08 crc kubenswrapper[4681]: I1123 06:58:08.970825 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-n28qz-config-ggdpg" podStartSLOduration=1.9707998359999999 podStartE2EDuration="1.970799836s" podCreationTimestamp="2025-11-23 06:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:58:08.961237156 +0000 UTC m=+826.030746393" watchObservedRunningTime="2025-11-23 06:58:08.970799836 +0000 UTC m=+826.040309072" Nov 23 06:58:09 crc kubenswrapper[4681]: I1123 06:58:09.260711 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1" path="/var/lib/kubelet/pods/e6fd8c10-b9d5-4fad-b5da-7eaafaaabca1/volumes" Nov 23 06:58:09 crc kubenswrapper[4681]: I1123 06:58:09.964559 4681 generic.go:334] "Generic (PLEG): 
container finished" podID="037378b8-4f2b-4513-b4b3-c7f97aae12a9" containerID="ff61b7415840d2d63128a8539723378ba2f05eb0607864ba5e4cc0248b0e8b86" exitCode=0 Nov 23 06:58:09 crc kubenswrapper[4681]: I1123 06:58:09.964715 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-rmth5" event={"ID":"037378b8-4f2b-4513-b4b3-c7f97aae12a9","Type":"ContainerDied","Data":"ff61b7415840d2d63128a8539723378ba2f05eb0607864ba5e4cc0248b0e8b86"} Nov 23 06:58:09 crc kubenswrapper[4681]: I1123 06:58:09.974026 4681 generic.go:334] "Generic (PLEG): container finished" podID="9b8c5d8f-cd8a-449e-9854-9d31058a3e98" containerID="de548e50d76397ca3e8d6a0475aa03ce4c0604a2cfb9926b39faba6906bdeaa9" exitCode=0 Nov 23 06:58:09 crc kubenswrapper[4681]: I1123 06:58:09.974134 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n28qz-config-ggdpg" event={"ID":"9b8c5d8f-cd8a-449e-9854-9d31058a3e98","Type":"ContainerDied","Data":"de548e50d76397ca3e8d6a0475aa03ce4c0604a2cfb9926b39faba6906bdeaa9"} Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.419666 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n28qz-config-ggdpg" Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.424598 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-rmth5" Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.459070 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9b8c5d8f-cd8a-449e-9854-9d31058a3e98-additional-scripts\") pod \"9b8c5d8f-cd8a-449e-9854-9d31058a3e98\" (UID: \"9b8c5d8f-cd8a-449e-9854-9d31058a3e98\") " Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.459132 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/037378b8-4f2b-4513-b4b3-c7f97aae12a9-combined-ca-bundle\") pod \"037378b8-4f2b-4513-b4b3-c7f97aae12a9\" (UID: \"037378b8-4f2b-4513-b4b3-c7f97aae12a9\") " Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.459193 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/037378b8-4f2b-4513-b4b3-c7f97aae12a9-etc-swift\") pod \"037378b8-4f2b-4513-b4b3-c7f97aae12a9\" (UID: \"037378b8-4f2b-4513-b4b3-c7f97aae12a9\") " Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.459380 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9b8c5d8f-cd8a-449e-9854-9d31058a3e98-var-run\") pod \"9b8c5d8f-cd8a-449e-9854-9d31058a3e98\" (UID: \"9b8c5d8f-cd8a-449e-9854-9d31058a3e98\") " Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.459400 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/037378b8-4f2b-4513-b4b3-c7f97aae12a9-ring-data-devices\") pod \"037378b8-4f2b-4513-b4b3-c7f97aae12a9\" (UID: \"037378b8-4f2b-4513-b4b3-c7f97aae12a9\") " Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.459425 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qt9t\" (UniqueName: \"kubernetes.io/projected/037378b8-4f2b-4513-b4b3-c7f97aae12a9-kube-api-access-2qt9t\") pod \"037378b8-4f2b-4513-b4b3-c7f97aae12a9\" (UID: \"037378b8-4f2b-4513-b4b3-c7f97aae12a9\") " 
Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.459454 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b8c5d8f-cd8a-449e-9854-9d31058a3e98-scripts\") pod \"9b8c5d8f-cd8a-449e-9854-9d31058a3e98\" (UID: \"9b8c5d8f-cd8a-449e-9854-9d31058a3e98\") "
Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.459535 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/037378b8-4f2b-4513-b4b3-c7f97aae12a9-dispersionconf\") pod \"037378b8-4f2b-4513-b4b3-c7f97aae12a9\" (UID: \"037378b8-4f2b-4513-b4b3-c7f97aae12a9\") "
Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.459561 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9b8c5d8f-cd8a-449e-9854-9d31058a3e98-var-log-ovn\") pod \"9b8c5d8f-cd8a-449e-9854-9d31058a3e98\" (UID: \"9b8c5d8f-cd8a-449e-9854-9d31058a3e98\") "
Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.459594 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qmkd\" (UniqueName: \"kubernetes.io/projected/9b8c5d8f-cd8a-449e-9854-9d31058a3e98-kube-api-access-6qmkd\") pod \"9b8c5d8f-cd8a-449e-9854-9d31058a3e98\" (UID: \"9b8c5d8f-cd8a-449e-9854-9d31058a3e98\") "
Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.459670 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/037378b8-4f2b-4513-b4b3-c7f97aae12a9-swiftconf\") pod \"037378b8-4f2b-4513-b4b3-c7f97aae12a9\" (UID: \"037378b8-4f2b-4513-b4b3-c7f97aae12a9\") "
Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.459698 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/037378b8-4f2b-4513-b4b3-c7f97aae12a9-scripts\") pod \"037378b8-4f2b-4513-b4b3-c7f97aae12a9\" (UID: \"037378b8-4f2b-4513-b4b3-c7f97aae12a9\") "
Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.459723 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9b8c5d8f-cd8a-449e-9854-9d31058a3e98-var-run-ovn\") pod \"9b8c5d8f-cd8a-449e-9854-9d31058a3e98\" (UID: \"9b8c5d8f-cd8a-449e-9854-9d31058a3e98\") "
Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.460180 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/037378b8-4f2b-4513-b4b3-c7f97aae12a9-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "037378b8-4f2b-4513-b4b3-c7f97aae12a9" (UID: "037378b8-4f2b-4513-b4b3-c7f97aae12a9"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.460778 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/037378b8-4f2b-4513-b4b3-c7f97aae12a9-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "037378b8-4f2b-4513-b4b3-c7f97aae12a9" (UID: "037378b8-4f2b-4513-b4b3-c7f97aae12a9"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.461000 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b8c5d8f-cd8a-449e-9854-9d31058a3e98-var-run" (OuterVolumeSpecName: "var-run") pod "9b8c5d8f-cd8a-449e-9854-9d31058a3e98" (UID: "9b8c5d8f-cd8a-449e-9854-9d31058a3e98"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.461037 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b8c5d8f-cd8a-449e-9854-9d31058a3e98-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "9b8c5d8f-cd8a-449e-9854-9d31058a3e98" (UID: "9b8c5d8f-cd8a-449e-9854-9d31058a3e98"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.461077 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b8c5d8f-cd8a-449e-9854-9d31058a3e98-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "9b8c5d8f-cd8a-449e-9854-9d31058a3e98" (UID: "9b8c5d8f-cd8a-449e-9854-9d31058a3e98"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.462730 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b8c5d8f-cd8a-449e-9854-9d31058a3e98-scripts" (OuterVolumeSpecName: "scripts") pod "9b8c5d8f-cd8a-449e-9854-9d31058a3e98" (UID: "9b8c5d8f-cd8a-449e-9854-9d31058a3e98"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.463347 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b8c5d8f-cd8a-449e-9854-9d31058a3e98-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "9b8c5d8f-cd8a-449e-9854-9d31058a3e98" (UID: "9b8c5d8f-cd8a-449e-9854-9d31058a3e98"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.472363 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/037378b8-4f2b-4513-b4b3-c7f97aae12a9-kube-api-access-2qt9t" (OuterVolumeSpecName: "kube-api-access-2qt9t") pod "037378b8-4f2b-4513-b4b3-c7f97aae12a9" (UID: "037378b8-4f2b-4513-b4b3-c7f97aae12a9"). InnerVolumeSpecName "kube-api-access-2qt9t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.476181 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/037378b8-4f2b-4513-b4b3-c7f97aae12a9-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "037378b8-4f2b-4513-b4b3-c7f97aae12a9" (UID: "037378b8-4f2b-4513-b4b3-c7f97aae12a9"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.482791 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b8c5d8f-cd8a-449e-9854-9d31058a3e98-kube-api-access-6qmkd" (OuterVolumeSpecName: "kube-api-access-6qmkd") pod "9b8c5d8f-cd8a-449e-9854-9d31058a3e98" (UID: "9b8c5d8f-cd8a-449e-9854-9d31058a3e98"). InnerVolumeSpecName "kube-api-access-6qmkd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.487105 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/037378b8-4f2b-4513-b4b3-c7f97aae12a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "037378b8-4f2b-4513-b4b3-c7f97aae12a9" (UID: "037378b8-4f2b-4513-b4b3-c7f97aae12a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.493805 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/037378b8-4f2b-4513-b4b3-c7f97aae12a9-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "037378b8-4f2b-4513-b4b3-c7f97aae12a9" (UID: "037378b8-4f2b-4513-b4b3-c7f97aae12a9"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.498253 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/037378b8-4f2b-4513-b4b3-c7f97aae12a9-scripts" (OuterVolumeSpecName: "scripts") pod "037378b8-4f2b-4513-b4b3-c7f97aae12a9" (UID: "037378b8-4f2b-4513-b4b3-c7f97aae12a9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.563000 4681 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9b8c5d8f-cd8a-449e-9854-9d31058a3e98-var-run\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.563033 4681 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/037378b8-4f2b-4513-b4b3-c7f97aae12a9-ring-data-devices\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.563051 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qt9t\" (UniqueName: \"kubernetes.io/projected/037378b8-4f2b-4513-b4b3-c7f97aae12a9-kube-api-access-2qt9t\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.563064 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b8c5d8f-cd8a-449e-9854-9d31058a3e98-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.563073 4681 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/037378b8-4f2b-4513-b4b3-c7f97aae12a9-dispersionconf\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.563084 4681 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9b8c5d8f-cd8a-449e-9854-9d31058a3e98-var-log-ovn\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.563094 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qmkd\" (UniqueName: \"kubernetes.io/projected/9b8c5d8f-cd8a-449e-9854-9d31058a3e98-kube-api-access-6qmkd\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.563104 4681 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/037378b8-4f2b-4513-b4b3-c7f97aae12a9-swiftconf\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.563113 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/037378b8-4f2b-4513-b4b3-c7f97aae12a9-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.563156 4681 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9b8c5d8f-cd8a-449e-9854-9d31058a3e98-var-run-ovn\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.563180 4681 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9b8c5d8f-cd8a-449e-9854-9d31058a3e98-additional-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.563189 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/037378b8-4f2b-4513-b4b3-c7f97aae12a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:11 crc kubenswrapper[4681]: I1123 06:58:11.563197 4681 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/037378b8-4f2b-4513-b4b3-c7f97aae12a9-etc-swift\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.015731 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n28qz-config-ggdpg"
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.015750 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n28qz-config-ggdpg" event={"ID":"9b8c5d8f-cd8a-449e-9854-9d31058a3e98","Type":"ContainerDied","Data":"be88b128b6222170bd1818161719773be4282d1762bb6bba99319b3b878c4459"}
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.015792 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be88b128b6222170bd1818161719773be4282d1762bb6bba99319b3b878c4459"
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.019598 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-rmth5" event={"ID":"037378b8-4f2b-4513-b4b3-c7f97aae12a9","Type":"ContainerDied","Data":"8e843895be7bda8924acb39fde68d5b3b99e083da428814d6455eb5c22547828"}
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.019646 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e843895be7bda8924acb39fde68d5b3b99e083da428814d6455eb5c22547828"
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.019714 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-rmth5"
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.081556 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-n28qz-config-ggdpg"]
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.087822 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-n28qz-config-ggdpg"]
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.129396 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-n28qz-config-9xq7b"]
Nov 23 06:58:12 crc kubenswrapper[4681]: E1123 06:58:12.129798 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="037378b8-4f2b-4513-b4b3-c7f97aae12a9" containerName="swift-ring-rebalance"
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.129824 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="037378b8-4f2b-4513-b4b3-c7f97aae12a9" containerName="swift-ring-rebalance"
Nov 23 06:58:12 crc kubenswrapper[4681]: E1123 06:58:12.129873 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b8c5d8f-cd8a-449e-9854-9d31058a3e98" containerName="ovn-config"
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.129879 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b8c5d8f-cd8a-449e-9854-9d31058a3e98" containerName="ovn-config"
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.130094 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="037378b8-4f2b-4513-b4b3-c7f97aae12a9" containerName="swift-ring-rebalance"
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.130119 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b8c5d8f-cd8a-449e-9854-9d31058a3e98" containerName="ovn-config"
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.130727 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n28qz-config-9xq7b"
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.139333 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n28qz-config-9xq7b"]
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.147007 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.184411 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pkbp\" (UniqueName: \"kubernetes.io/projected/73a97872-229d-4e6f-9b8c-1885840e6ada-kube-api-access-6pkbp\") pod \"ovn-controller-n28qz-config-9xq7b\" (UID: \"73a97872-229d-4e6f-9b8c-1885840e6ada\") " pod="openstack/ovn-controller-n28qz-config-9xq7b"
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.184478 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/73a97872-229d-4e6f-9b8c-1885840e6ada-var-run\") pod \"ovn-controller-n28qz-config-9xq7b\" (UID: \"73a97872-229d-4e6f-9b8c-1885840e6ada\") " pod="openstack/ovn-controller-n28qz-config-9xq7b"
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.184511 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/73a97872-229d-4e6f-9b8c-1885840e6ada-additional-scripts\") pod \"ovn-controller-n28qz-config-9xq7b\" (UID: \"73a97872-229d-4e6f-9b8c-1885840e6ada\") " pod="openstack/ovn-controller-n28qz-config-9xq7b"
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.184580 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/73a97872-229d-4e6f-9b8c-1885840e6ada-var-run-ovn\") pod \"ovn-controller-n28qz-config-9xq7b\" (UID: \"73a97872-229d-4e6f-9b8c-1885840e6ada\") " pod="openstack/ovn-controller-n28qz-config-9xq7b"
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.184605 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/73a97872-229d-4e6f-9b8c-1885840e6ada-var-log-ovn\") pod \"ovn-controller-n28qz-config-9xq7b\" (UID: \"73a97872-229d-4e6f-9b8c-1885840e6ada\") " pod="openstack/ovn-controller-n28qz-config-9xq7b"
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.184632 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73a97872-229d-4e6f-9b8c-1885840e6ada-scripts\") pod \"ovn-controller-n28qz-config-9xq7b\" (UID: \"73a97872-229d-4e6f-9b8c-1885840e6ada\") " pod="openstack/ovn-controller-n28qz-config-9xq7b"
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.286490 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/73a97872-229d-4e6f-9b8c-1885840e6ada-additional-scripts\") pod \"ovn-controller-n28qz-config-9xq7b\" (UID: \"73a97872-229d-4e6f-9b8c-1885840e6ada\") " pod="openstack/ovn-controller-n28qz-config-9xq7b"
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.286594 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/73a97872-229d-4e6f-9b8c-1885840e6ada-var-run-ovn\") pod \"ovn-controller-n28qz-config-9xq7b\" (UID: \"73a97872-229d-4e6f-9b8c-1885840e6ada\") " pod="openstack/ovn-controller-n28qz-config-9xq7b"
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.286625 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/73a97872-229d-4e6f-9b8c-1885840e6ada-var-log-ovn\") pod \"ovn-controller-n28qz-config-9xq7b\" (UID: \"73a97872-229d-4e6f-9b8c-1885840e6ada\") " pod="openstack/ovn-controller-n28qz-config-9xq7b"
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.286649 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73a97872-229d-4e6f-9b8c-1885840e6ada-scripts\") pod \"ovn-controller-n28qz-config-9xq7b\" (UID: \"73a97872-229d-4e6f-9b8c-1885840e6ada\") " pod="openstack/ovn-controller-n28qz-config-9xq7b"
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.286729 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pkbp\" (UniqueName: \"kubernetes.io/projected/73a97872-229d-4e6f-9b8c-1885840e6ada-kube-api-access-6pkbp\") pod \"ovn-controller-n28qz-config-9xq7b\" (UID: \"73a97872-229d-4e6f-9b8c-1885840e6ada\") " pod="openstack/ovn-controller-n28qz-config-9xq7b"
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.286760 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/73a97872-229d-4e6f-9b8c-1885840e6ada-var-run\") pod \"ovn-controller-n28qz-config-9xq7b\" (UID: \"73a97872-229d-4e6f-9b8c-1885840e6ada\") " pod="openstack/ovn-controller-n28qz-config-9xq7b"
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.287075 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/73a97872-229d-4e6f-9b8c-1885840e6ada-var-run\") pod \"ovn-controller-n28qz-config-9xq7b\" (UID: \"73a97872-229d-4e6f-9b8c-1885840e6ada\") " pod="openstack/ovn-controller-n28qz-config-9xq7b"
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.287812 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/73a97872-229d-4e6f-9b8c-1885840e6ada-additional-scripts\") pod \"ovn-controller-n28qz-config-9xq7b\" (UID: \"73a97872-229d-4e6f-9b8c-1885840e6ada\") " pod="openstack/ovn-controller-n28qz-config-9xq7b"
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.287881 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/73a97872-229d-4e6f-9b8c-1885840e6ada-var-run-ovn\") pod \"ovn-controller-n28qz-config-9xq7b\" (UID: \"73a97872-229d-4e6f-9b8c-1885840e6ada\") " pod="openstack/ovn-controller-n28qz-config-9xq7b"
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.287929 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/73a97872-229d-4e6f-9b8c-1885840e6ada-var-log-ovn\") pod \"ovn-controller-n28qz-config-9xq7b\" (UID: \"73a97872-229d-4e6f-9b8c-1885840e6ada\") " pod="openstack/ovn-controller-n28qz-config-9xq7b"
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.289883 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73a97872-229d-4e6f-9b8c-1885840e6ada-scripts\") pod \"ovn-controller-n28qz-config-9xq7b\" (UID: \"73a97872-229d-4e6f-9b8c-1885840e6ada\") " pod="openstack/ovn-controller-n28qz-config-9xq7b"
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.327363 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pkbp\" (UniqueName: \"kubernetes.io/projected/73a97872-229d-4e6f-9b8c-1885840e6ada-kube-api-access-6pkbp\") pod \"ovn-controller-n28qz-config-9xq7b\" (UID: \"73a97872-229d-4e6f-9b8c-1885840e6ada\") " pod="openstack/ovn-controller-n28qz-config-9xq7b"
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.452099 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n28qz-config-9xq7b"
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.563635 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-n28qz"
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.945734 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pbdxh"
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.946131 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pbdxh"
Nov 23 06:58:12 crc kubenswrapper[4681]: I1123 06:58:12.994747 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n28qz-config-9xq7b"]
Nov 23 06:58:13 crc kubenswrapper[4681]: I1123 06:58:13.034189 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n28qz-config-9xq7b" event={"ID":"73a97872-229d-4e6f-9b8c-1885840e6ada","Type":"ContainerStarted","Data":"34be825530fe6a02fad0860e7c4e9baa4403097290bc0c7cb4f716b263bd7398"}
Nov 23 06:58:13 crc kubenswrapper[4681]: I1123 06:58:13.263571 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b8c5d8f-cd8a-449e-9854-9d31058a3e98" path="/var/lib/kubelet/pods/9b8c5d8f-cd8a-449e-9854-9d31058a3e98/volumes"
Nov 23 06:58:14 crc kubenswrapper[4681]: I1123 06:58:14.015689 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pbdxh" podUID="5ac86f95-c79b-40bf-82be-a4e91bd44539" containerName="registry-server" probeResult="failure" output=<
Nov 23 06:58:14 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s
Nov 23 06:58:14 crc kubenswrapper[4681]: >
Nov 23 06:58:14 crc kubenswrapper[4681]: I1123 06:58:14.050760 4681 generic.go:334] "Generic (PLEG): container finished" podID="73a97872-229d-4e6f-9b8c-1885840e6ada" containerID="61b3c1c0fa0f2c7f645dc11fdeef5e738073d33f57f29defa00517202c6e368b" exitCode=0
Nov 23 06:58:14 crc kubenswrapper[4681]: I1123 06:58:14.050843 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n28qz-config-9xq7b" event={"ID":"73a97872-229d-4e6f-9b8c-1885840e6ada","Type":"ContainerDied","Data":"61b3c1c0fa0f2c7f645dc11fdeef5e738073d33f57f29defa00517202c6e368b"}
Nov 23 06:58:15 crc kubenswrapper[4681]: I1123 06:58:15.372787 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n28qz-config-9xq7b"
Nov 23 06:58:15 crc kubenswrapper[4681]: I1123 06:58:15.557068 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/73a97872-229d-4e6f-9b8c-1885840e6ada-var-run-ovn\") pod \"73a97872-229d-4e6f-9b8c-1885840e6ada\" (UID: \"73a97872-229d-4e6f-9b8c-1885840e6ada\") "
Nov 23 06:58:15 crc kubenswrapper[4681]: I1123 06:58:15.557187 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/73a97872-229d-4e6f-9b8c-1885840e6ada-var-run\") pod \"73a97872-229d-4e6f-9b8c-1885840e6ada\" (UID: \"73a97872-229d-4e6f-9b8c-1885840e6ada\") "
Nov 23 06:58:15 crc kubenswrapper[4681]: I1123 06:58:15.557331 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pkbp\" (UniqueName: \"kubernetes.io/projected/73a97872-229d-4e6f-9b8c-1885840e6ada-kube-api-access-6pkbp\") pod \"73a97872-229d-4e6f-9b8c-1885840e6ada\" (UID: \"73a97872-229d-4e6f-9b8c-1885840e6ada\") "
Nov 23 06:58:15 crc kubenswrapper[4681]: I1123 06:58:15.557363 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73a97872-229d-4e6f-9b8c-1885840e6ada-scripts\") pod \"73a97872-229d-4e6f-9b8c-1885840e6ada\" (UID: \"73a97872-229d-4e6f-9b8c-1885840e6ada\") "
Nov 23 06:58:15 crc kubenswrapper[4681]: I1123 06:58:15.557556 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/73a97872-229d-4e6f-9b8c-1885840e6ada-additional-scripts\") pod \"73a97872-229d-4e6f-9b8c-1885840e6ada\" (UID: \"73a97872-229d-4e6f-9b8c-1885840e6ada\") "
Nov 23 06:58:15 crc kubenswrapper[4681]: I1123 06:58:15.557609 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73a97872-229d-4e6f-9b8c-1885840e6ada-var-run" (OuterVolumeSpecName: "var-run") pod "73a97872-229d-4e6f-9b8c-1885840e6ada" (UID: "73a97872-229d-4e6f-9b8c-1885840e6ada"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 23 06:58:15 crc kubenswrapper[4681]: I1123 06:58:15.558323 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/73a97872-229d-4e6f-9b8c-1885840e6ada-var-log-ovn\") pod \"73a97872-229d-4e6f-9b8c-1885840e6ada\" (UID: \"73a97872-229d-4e6f-9b8c-1885840e6ada\") "
Nov 23 06:58:15 crc kubenswrapper[4681]: I1123 06:58:15.558319 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73a97872-229d-4e6f-9b8c-1885840e6ada-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "73a97872-229d-4e6f-9b8c-1885840e6ada" (UID: "73a97872-229d-4e6f-9b8c-1885840e6ada"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 23 06:58:15 crc kubenswrapper[4681]: I1123 06:58:15.558381 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73a97872-229d-4e6f-9b8c-1885840e6ada-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "73a97872-229d-4e6f-9b8c-1885840e6ada" (UID: "73a97872-229d-4e6f-9b8c-1885840e6ada"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:58:15 crc kubenswrapper[4681]: I1123 06:58:15.558394 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73a97872-229d-4e6f-9b8c-1885840e6ada-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "73a97872-229d-4e6f-9b8c-1885840e6ada" (UID: "73a97872-229d-4e6f-9b8c-1885840e6ada"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 23 06:58:15 crc kubenswrapper[4681]: I1123 06:58:15.558761 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73a97872-229d-4e6f-9b8c-1885840e6ada-scripts" (OuterVolumeSpecName: "scripts") pod "73a97872-229d-4e6f-9b8c-1885840e6ada" (UID: "73a97872-229d-4e6f-9b8c-1885840e6ada"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:58:15 crc kubenswrapper[4681]: I1123 06:58:15.559483 4681 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/73a97872-229d-4e6f-9b8c-1885840e6ada-var-run-ovn\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:15 crc kubenswrapper[4681]: I1123 06:58:15.559507 4681 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/73a97872-229d-4e6f-9b8c-1885840e6ada-var-run\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:15 crc kubenswrapper[4681]: I1123 06:58:15.559517 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73a97872-229d-4e6f-9b8c-1885840e6ada-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:15 crc kubenswrapper[4681]: I1123 06:58:15.559526 4681 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/73a97872-229d-4e6f-9b8c-1885840e6ada-additional-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:15 crc kubenswrapper[4681]: I1123 06:58:15.559537 4681 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/73a97872-229d-4e6f-9b8c-1885840e6ada-var-log-ovn\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:15 crc kubenswrapper[4681]: I1123 06:58:15.565094 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73a97872-229d-4e6f-9b8c-1885840e6ada-kube-api-access-6pkbp" (OuterVolumeSpecName: "kube-api-access-6pkbp") pod "73a97872-229d-4e6f-9b8c-1885840e6ada" (UID: "73a97872-229d-4e6f-9b8c-1885840e6ada"). InnerVolumeSpecName "kube-api-access-6pkbp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:58:15 crc kubenswrapper[4681]: I1123 06:58:15.660362 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pkbp\" (UniqueName: \"kubernetes.io/projected/73a97872-229d-4e6f-9b8c-1885840e6ada-kube-api-access-6pkbp\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:16 crc kubenswrapper[4681]: I1123 06:58:16.077420 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n28qz-config-9xq7b" event={"ID":"73a97872-229d-4e6f-9b8c-1885840e6ada","Type":"ContainerDied","Data":"34be825530fe6a02fad0860e7c4e9baa4403097290bc0c7cb4f716b263bd7398"}
Nov 23 06:58:16 crc kubenswrapper[4681]: I1123 06:58:16.077482 4681 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/ovn-controller-n28qz-config-9xq7b" Nov 23 06:58:16 crc kubenswrapper[4681]: I1123 06:58:16.077509 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34be825530fe6a02fad0860e7c4e9baa4403097290bc0c7cb4f716b263bd7398" Nov 23 06:58:16 crc kubenswrapper[4681]: I1123 06:58:16.452329 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-n28qz-config-9xq7b"] Nov 23 06:58:16 crc kubenswrapper[4681]: I1123 06:58:16.463436 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-n28qz-config-9xq7b"] Nov 23 06:58:16 crc kubenswrapper[4681]: I1123 06:58:16.582993 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-n28qz-config-lkps7"] Nov 23 06:58:16 crc kubenswrapper[4681]: E1123 06:58:16.583690 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73a97872-229d-4e6f-9b8c-1885840e6ada" containerName="ovn-config" Nov 23 06:58:16 crc kubenswrapper[4681]: I1123 06:58:16.583715 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="73a97872-229d-4e6f-9b8c-1885840e6ada" containerName="ovn-config" Nov 23 06:58:16 crc kubenswrapper[4681]: I1123 06:58:16.584051 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="73a97872-229d-4e6f-9b8c-1885840e6ada" containerName="ovn-config" Nov 23 06:58:16 crc kubenswrapper[4681]: I1123 06:58:16.585452 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n28qz-config-lkps7" Nov 23 06:58:16 crc kubenswrapper[4681]: I1123 06:58:16.588051 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 23 06:58:16 crc kubenswrapper[4681]: I1123 06:58:16.604567 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n28qz-config-lkps7"] Nov 23 06:58:16 crc kubenswrapper[4681]: I1123 06:58:16.787512 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ef3fa36-62d5-4906-9fb7-ed05b2b31640-scripts\") pod \"ovn-controller-n28qz-config-lkps7\" (UID: \"4ef3fa36-62d5-4906-9fb7-ed05b2b31640\") " pod="openstack/ovn-controller-n28qz-config-lkps7" Nov 23 06:58:16 crc kubenswrapper[4681]: I1123 06:58:16.787707 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ef3fa36-62d5-4906-9fb7-ed05b2b31640-var-run-ovn\") pod \"ovn-controller-n28qz-config-lkps7\" (UID: \"4ef3fa36-62d5-4906-9fb7-ed05b2b31640\") " pod="openstack/ovn-controller-n28qz-config-lkps7" Nov 23 06:58:16 crc kubenswrapper[4681]: I1123 06:58:16.787739 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjzkz\" (UniqueName: \"kubernetes.io/projected/4ef3fa36-62d5-4906-9fb7-ed05b2b31640-kube-api-access-pjzkz\") pod \"ovn-controller-n28qz-config-lkps7\" (UID: \"4ef3fa36-62d5-4906-9fb7-ed05b2b31640\") " pod="openstack/ovn-controller-n28qz-config-lkps7" Nov 23 06:58:16 crc kubenswrapper[4681]: I1123 06:58:16.787813 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4ef3fa36-62d5-4906-9fb7-ed05b2b31640-additional-scripts\") pod \"ovn-controller-n28qz-config-lkps7\" (UID: \"4ef3fa36-62d5-4906-9fb7-ed05b2b31640\") " 
pod="openstack/ovn-controller-n28qz-config-lkps7" Nov 23 06:58:16 crc kubenswrapper[4681]: I1123 06:58:16.788312 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4ef3fa36-62d5-4906-9fb7-ed05b2b31640-var-log-ovn\") pod \"ovn-controller-n28qz-config-lkps7\" (UID: \"4ef3fa36-62d5-4906-9fb7-ed05b2b31640\") " pod="openstack/ovn-controller-n28qz-config-lkps7" Nov 23 06:58:16 crc kubenswrapper[4681]: I1123 06:58:16.788560 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4ef3fa36-62d5-4906-9fb7-ed05b2b31640-var-run\") pod \"ovn-controller-n28qz-config-lkps7\" (UID: \"4ef3fa36-62d5-4906-9fb7-ed05b2b31640\") " pod="openstack/ovn-controller-n28qz-config-lkps7" Nov 23 06:58:16 crc kubenswrapper[4681]: I1123 06:58:16.889914 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ef3fa36-62d5-4906-9fb7-ed05b2b31640-scripts\") pod \"ovn-controller-n28qz-config-lkps7\" (UID: \"4ef3fa36-62d5-4906-9fb7-ed05b2b31640\") " pod="openstack/ovn-controller-n28qz-config-lkps7" Nov 23 06:58:16 crc kubenswrapper[4681]: I1123 06:58:16.890021 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ef3fa36-62d5-4906-9fb7-ed05b2b31640-var-run-ovn\") pod \"ovn-controller-n28qz-config-lkps7\" (UID: \"4ef3fa36-62d5-4906-9fb7-ed05b2b31640\") " pod="openstack/ovn-controller-n28qz-config-lkps7" Nov 23 06:58:16 crc kubenswrapper[4681]: I1123 06:58:16.890048 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjzkz\" (UniqueName: \"kubernetes.io/projected/4ef3fa36-62d5-4906-9fb7-ed05b2b31640-kube-api-access-pjzkz\") pod \"ovn-controller-n28qz-config-lkps7\" (UID: \"4ef3fa36-62d5-4906-9fb7-ed05b2b31640\") " pod="openstack/ovn-controller-n28qz-config-lkps7" Nov 23 06:58:16 crc kubenswrapper[4681]: I1123 06:58:16.890115 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4ef3fa36-62d5-4906-9fb7-ed05b2b31640-additional-scripts\") pod \"ovn-controller-n28qz-config-lkps7\" (UID: \"4ef3fa36-62d5-4906-9fb7-ed05b2b31640\") " pod="openstack/ovn-controller-n28qz-config-lkps7" Nov 23 06:58:16 crc kubenswrapper[4681]: I1123 06:58:16.890229 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4ef3fa36-62d5-4906-9fb7-ed05b2b31640-var-log-ovn\") pod \"ovn-controller-n28qz-config-lkps7\" (UID: \"4ef3fa36-62d5-4906-9fb7-ed05b2b31640\") " pod="openstack/ovn-controller-n28qz-config-lkps7" Nov 23 06:58:16 crc kubenswrapper[4681]: I1123 06:58:16.890253 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4ef3fa36-62d5-4906-9fb7-ed05b2b31640-var-run\") pod \"ovn-controller-n28qz-config-lkps7\" (UID: \"4ef3fa36-62d5-4906-9fb7-ed05b2b31640\") " pod="openstack/ovn-controller-n28qz-config-lkps7" Nov 23 06:58:16 crc kubenswrapper[4681]: I1123 06:58:16.890539 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4ef3fa36-62d5-4906-9fb7-ed05b2b31640-var-run\") pod \"ovn-controller-n28qz-config-lkps7\" (UID: 
\"4ef3fa36-62d5-4906-9fb7-ed05b2b31640\") " pod="openstack/ovn-controller-n28qz-config-lkps7" Nov 23 06:58:16 crc kubenswrapper[4681]: I1123 06:58:16.891868 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ef3fa36-62d5-4906-9fb7-ed05b2b31640-var-run-ovn\") pod \"ovn-controller-n28qz-config-lkps7\" (UID: \"4ef3fa36-62d5-4906-9fb7-ed05b2b31640\") " pod="openstack/ovn-controller-n28qz-config-lkps7" Nov 23 06:58:16 crc kubenswrapper[4681]: I1123 06:58:16.891868 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4ef3fa36-62d5-4906-9fb7-ed05b2b31640-var-log-ovn\") pod \"ovn-controller-n28qz-config-lkps7\" (UID: \"4ef3fa36-62d5-4906-9fb7-ed05b2b31640\") " pod="openstack/ovn-controller-n28qz-config-lkps7" Nov 23 06:58:16 crc kubenswrapper[4681]: I1123 06:58:16.892269 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ef3fa36-62d5-4906-9fb7-ed05b2b31640-scripts\") pod \"ovn-controller-n28qz-config-lkps7\" (UID: \"4ef3fa36-62d5-4906-9fb7-ed05b2b31640\") " pod="openstack/ovn-controller-n28qz-config-lkps7" Nov 23 06:58:16 crc kubenswrapper[4681]: I1123 06:58:16.892527 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4ef3fa36-62d5-4906-9fb7-ed05b2b31640-additional-scripts\") pod \"ovn-controller-n28qz-config-lkps7\" (UID: \"4ef3fa36-62d5-4906-9fb7-ed05b2b31640\") " pod="openstack/ovn-controller-n28qz-config-lkps7" Nov 23 06:58:16 crc kubenswrapper[4681]: I1123 06:58:16.909552 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjzkz\" (UniqueName: \"kubernetes.io/projected/4ef3fa36-62d5-4906-9fb7-ed05b2b31640-kube-api-access-pjzkz\") pod \"ovn-controller-n28qz-config-lkps7\" (UID: \"4ef3fa36-62d5-4906-9fb7-ed05b2b31640\") " pod="openstack/ovn-controller-n28qz-config-lkps7" Nov 23 06:58:17 crc kubenswrapper[4681]: I1123 06:58:17.203959 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-n28qz-config-lkps7" Nov 23 06:58:17 crc kubenswrapper[4681]: I1123 06:58:17.265736 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73a97872-229d-4e6f-9b8c-1885840e6ada" path="/var/lib/kubelet/pods/73a97872-229d-4e6f-9b8c-1885840e6ada/volumes" Nov 23 06:58:17 crc kubenswrapper[4681]: I1123 06:58:17.632835 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n28qz-config-lkps7"] Nov 23 06:58:18 crc kubenswrapper[4681]: I1123 06:58:18.105556 4681 generic.go:334] "Generic (PLEG): container finished" podID="4ef3fa36-62d5-4906-9fb7-ed05b2b31640" containerID="412f5c1af48abab5ffe56a1d510d44d16887e4c0ee830d06c120bf8518a1e5b3" exitCode=0 Nov 23 06:58:18 crc kubenswrapper[4681]: I1123 06:58:18.105619 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n28qz-config-lkps7" event={"ID":"4ef3fa36-62d5-4906-9fb7-ed05b2b31640","Type":"ContainerDied","Data":"412f5c1af48abab5ffe56a1d510d44d16887e4c0ee830d06c120bf8518a1e5b3"} Nov 23 06:58:18 crc kubenswrapper[4681]: I1123 06:58:18.106036 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n28qz-config-lkps7" event={"ID":"4ef3fa36-62d5-4906-9fb7-ed05b2b31640","Type":"ContainerStarted","Data":"89e2deec4b43111dbee170d10ece5534704e23239ca6613db59589ba9dc721ad"} Nov 23 06:58:19 crc kubenswrapper[4681]: I1123 06:58:19.439164 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n28qz-config-lkps7" Nov 23 06:58:19 crc kubenswrapper[4681]: I1123 06:58:19.552190 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ef3fa36-62d5-4906-9fb7-ed05b2b31640-scripts\") pod \"4ef3fa36-62d5-4906-9fb7-ed05b2b31640\" (UID: \"4ef3fa36-62d5-4906-9fb7-ed05b2b31640\") " Nov 23 06:58:19 crc kubenswrapper[4681]: I1123 06:58:19.552425 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4ef3fa36-62d5-4906-9fb7-ed05b2b31640-additional-scripts\") pod \"4ef3fa36-62d5-4906-9fb7-ed05b2b31640\" (UID: \"4ef3fa36-62d5-4906-9fb7-ed05b2b31640\") " Nov 23 06:58:19 crc kubenswrapper[4681]: I1123 06:58:19.552606 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ef3fa36-62d5-4906-9fb7-ed05b2b31640-var-run-ovn\") pod \"4ef3fa36-62d5-4906-9fb7-ed05b2b31640\" (UID: \"4ef3fa36-62d5-4906-9fb7-ed05b2b31640\") " Nov 23 06:58:19 crc kubenswrapper[4681]: I1123 06:58:19.553046 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4ef3fa36-62d5-4906-9fb7-ed05b2b31640-var-run\") pod \"4ef3fa36-62d5-4906-9fb7-ed05b2b31640\" (UID: \"4ef3fa36-62d5-4906-9fb7-ed05b2b31640\") " Nov 23 06:58:19 crc kubenswrapper[4681]: I1123 06:58:19.553097 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4ef3fa36-62d5-4906-9fb7-ed05b2b31640-var-log-ovn\") pod \"4ef3fa36-62d5-4906-9fb7-ed05b2b31640\" (UID: \"4ef3fa36-62d5-4906-9fb7-ed05b2b31640\") " Nov 23 06:58:19 crc kubenswrapper[4681]: I1123 06:58:19.553104 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ef3fa36-62d5-4906-9fb7-ed05b2b31640-var-run-ovn" 
(OuterVolumeSpecName: "var-run-ovn") pod "4ef3fa36-62d5-4906-9fb7-ed05b2b31640" (UID: "4ef3fa36-62d5-4906-9fb7-ed05b2b31640"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:58:19 crc kubenswrapper[4681]: I1123 06:58:19.553297 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ef3fa36-62d5-4906-9fb7-ed05b2b31640-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "4ef3fa36-62d5-4906-9fb7-ed05b2b31640" (UID: "4ef3fa36-62d5-4906-9fb7-ed05b2b31640"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:58:19 crc kubenswrapper[4681]: I1123 06:58:19.553281 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ef3fa36-62d5-4906-9fb7-ed05b2b31640-var-run" (OuterVolumeSpecName: "var-run") pod "4ef3fa36-62d5-4906-9fb7-ed05b2b31640" (UID: "4ef3fa36-62d5-4906-9fb7-ed05b2b31640"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:58:19 crc kubenswrapper[4681]: I1123 06:58:19.553545 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjzkz\" (UniqueName: \"kubernetes.io/projected/4ef3fa36-62d5-4906-9fb7-ed05b2b31640-kube-api-access-pjzkz\") pod \"4ef3fa36-62d5-4906-9fb7-ed05b2b31640\" (UID: \"4ef3fa36-62d5-4906-9fb7-ed05b2b31640\") " Nov 23 06:58:19 crc kubenswrapper[4681]: I1123 06:58:19.553816 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ef3fa36-62d5-4906-9fb7-ed05b2b31640-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "4ef3fa36-62d5-4906-9fb7-ed05b2b31640" (UID: "4ef3fa36-62d5-4906-9fb7-ed05b2b31640"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:58:19 crc kubenswrapper[4681]: I1123 06:58:19.554340 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ef3fa36-62d5-4906-9fb7-ed05b2b31640-scripts" (OuterVolumeSpecName: "scripts") pod "4ef3fa36-62d5-4906-9fb7-ed05b2b31640" (UID: "4ef3fa36-62d5-4906-9fb7-ed05b2b31640"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:58:19 crc kubenswrapper[4681]: I1123 06:58:19.554820 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ef3fa36-62d5-4906-9fb7-ed05b2b31640-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:19 crc kubenswrapper[4681]: I1123 06:58:19.554841 4681 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4ef3fa36-62d5-4906-9fb7-ed05b2b31640-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:19 crc kubenswrapper[4681]: I1123 06:58:19.554855 4681 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ef3fa36-62d5-4906-9fb7-ed05b2b31640-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:19 crc kubenswrapper[4681]: I1123 06:58:19.554866 4681 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4ef3fa36-62d5-4906-9fb7-ed05b2b31640-var-run\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:19 crc kubenswrapper[4681]: I1123 06:58:19.554875 4681 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4ef3fa36-62d5-4906-9fb7-ed05b2b31640-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:19 crc kubenswrapper[4681]: I1123 06:58:19.561571 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ef3fa36-62d5-4906-9fb7-ed05b2b31640-kube-api-access-pjzkz" (OuterVolumeSpecName: "kube-api-access-pjzkz") pod "4ef3fa36-62d5-4906-9fb7-ed05b2b31640" (UID: "4ef3fa36-62d5-4906-9fb7-ed05b2b31640"). InnerVolumeSpecName "kube-api-access-pjzkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:58:19 crc kubenswrapper[4681]: I1123 06:58:19.656701 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjzkz\" (UniqueName: \"kubernetes.io/projected/4ef3fa36-62d5-4906-9fb7-ed05b2b31640-kube-api-access-pjzkz\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:20 crc kubenswrapper[4681]: I1123 06:58:20.125625 4681 generic.go:334] "Generic (PLEG): container finished" podID="7e93be3c-dcb6-4105-868c-645d5c8c7bd0" containerID="26d05d10cbbc451df6804f6cc6bf5b505854f245655b61d41a993b45c5b09f20" exitCode=0 Nov 23 06:58:20 crc kubenswrapper[4681]: I1123 06:58:20.125723 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7e93be3c-dcb6-4105-868c-645d5c8c7bd0","Type":"ContainerDied","Data":"26d05d10cbbc451df6804f6cc6bf5b505854f245655b61d41a993b45c5b09f20"} Nov 23 06:58:20 crc kubenswrapper[4681]: I1123 06:58:20.130129 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n28qz-config-lkps7" event={"ID":"4ef3fa36-62d5-4906-9fb7-ed05b2b31640","Type":"ContainerDied","Data":"89e2deec4b43111dbee170d10ece5534704e23239ca6613db59589ba9dc721ad"} Nov 23 06:58:20 crc kubenswrapper[4681]: I1123 06:58:20.130226 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89e2deec4b43111dbee170d10ece5534704e23239ca6613db59589ba9dc721ad" Nov 23 06:58:20 crc kubenswrapper[4681]: I1123 06:58:20.130363 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-n28qz-config-lkps7" Nov 23 06:58:20 crc kubenswrapper[4681]: I1123 06:58:20.133894 4681 generic.go:334] "Generic (PLEG): container finished" podID="6e2ff794-284c-406f-a815-9efec112c044" containerID="64896ff51779c881bb9362fcb20885bfa0830579b3c1525ff8fb8d8cb254da13" exitCode=0 Nov 23 06:58:20 crc kubenswrapper[4681]: I1123 06:58:20.133972 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6e2ff794-284c-406f-a815-9efec112c044","Type":"ContainerDied","Data":"64896ff51779c881bb9362fcb20885bfa0830579b3c1525ff8fb8d8cb254da13"} Nov 23 06:58:20 crc kubenswrapper[4681]: I1123 06:58:20.538747 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-n28qz-config-lkps7"] Nov 23 06:58:20 crc kubenswrapper[4681]: I1123 06:58:20.549243 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-n28qz-config-lkps7"] Nov 23 06:58:21 crc kubenswrapper[4681]: I1123 06:58:21.158410 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7e93be3c-dcb6-4105-868c-645d5c8c7bd0","Type":"ContainerStarted","Data":"81de7e7395ab8b3c753cb319772266a2f7aa9cd6d297a5e0aecfe387311d1ce2"} Nov 23 06:58:21 crc kubenswrapper[4681]: I1123 06:58:21.160014 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 23 06:58:21 crc kubenswrapper[4681]: I1123 06:58:21.164747 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6e2ff794-284c-406f-a815-9efec112c044","Type":"ContainerStarted","Data":"f971ee3492a2f5eef0fd4413c18e2866d5d8f50d0960e4292e7667e8ea5ec95e"} Nov 23 06:58:21 crc kubenswrapper[4681]: I1123 06:58:21.164975 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:58:21 crc kubenswrapper[4681]: I1123 06:58:21.199637 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371957.655157 podStartE2EDuration="1m19.199618045s" podCreationTimestamp="2025-11-23 06:57:02 +0000 UTC" firstStartedPulling="2025-11-23 06:57:04.564595318 +0000 UTC m=+761.634104556" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:58:21.182640793 +0000 UTC m=+838.252150021" watchObservedRunningTime="2025-11-23 06:58:21.199618045 +0000 UTC m=+838.269127272" Nov 23 06:58:21 crc kubenswrapper[4681]: I1123 06:58:21.203257 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.335859187 podStartE2EDuration="1m19.203248286s" podCreationTimestamp="2025-11-23 06:57:02 +0000 UTC" firstStartedPulling="2025-11-23 06:57:04.723314678 +0000 UTC m=+761.792823915" lastFinishedPulling="2025-11-23 06:57:46.590703778 +0000 UTC m=+803.660213014" observedRunningTime="2025-11-23 06:58:21.20125484 +0000 UTC m=+838.270764076" watchObservedRunningTime="2025-11-23 06:58:21.203248286 +0000 UTC m=+838.272757513" Nov 23 06:58:21 crc kubenswrapper[4681]: I1123 06:58:21.266490 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ef3fa36-62d5-4906-9fb7-ed05b2b31640" path="/var/lib/kubelet/pods/4ef3fa36-62d5-4906-9fb7-ed05b2b31640/volumes" Nov 23 06:58:21 crc kubenswrapper[4681]: I1123 06:58:21.905795 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" 
(UniqueName: \"kubernetes.io/projected/a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3-etc-swift\") pod \"swift-storage-0\" (UID: \"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3\") " pod="openstack/swift-storage-0" Nov 23 06:58:21 crc kubenswrapper[4681]: I1123 06:58:21.927438 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3-etc-swift\") pod \"swift-storage-0\" (UID: \"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3\") " pod="openstack/swift-storage-0" Nov 23 06:58:22 crc kubenswrapper[4681]: I1123 06:58:22.096698 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 23 06:58:22 crc kubenswrapper[4681]: I1123 06:58:22.996801 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pbdxh" Nov 23 06:58:23 crc kubenswrapper[4681]: I1123 06:58:23.049441 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pbdxh" Nov 23 06:58:23 crc kubenswrapper[4681]: I1123 06:58:23.236824 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pbdxh"] Nov 23 06:58:24 crc kubenswrapper[4681]: I1123 06:58:24.195281 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pbdxh" podUID="5ac86f95-c79b-40bf-82be-a4e91bd44539" containerName="registry-server" containerID="cri-o://a16fa0424bdcfa241cbd77d23d9fdd240d2dc522417905caa1f0c169d583c30c" gracePeriod=2 Nov 23 06:58:25 crc kubenswrapper[4681]: I1123 06:58:25.214815 4681 generic.go:334] "Generic (PLEG): container finished" podID="5ac86f95-c79b-40bf-82be-a4e91bd44539" containerID="a16fa0424bdcfa241cbd77d23d9fdd240d2dc522417905caa1f0c169d583c30c" exitCode=0 Nov 23 06:58:25 crc kubenswrapper[4681]: I1123 06:58:25.214902 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pbdxh" event={"ID":"5ac86f95-c79b-40bf-82be-a4e91bd44539","Type":"ContainerDied","Data":"a16fa0424bdcfa241cbd77d23d9fdd240d2dc522417905caa1f0c169d583c30c"} Nov 23 06:58:29 crc kubenswrapper[4681]: I1123 06:58:29.171172 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pbdxh" Nov 23 06:58:29 crc kubenswrapper[4681]: I1123 06:58:29.252637 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pbdxh" Nov 23 06:58:29 crc kubenswrapper[4681]: I1123 06:58:29.261572 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pbdxh" event={"ID":"5ac86f95-c79b-40bf-82be-a4e91bd44539","Type":"ContainerDied","Data":"38f4d1588b8d96fcc790380799bd9b9c885a88dcf35423d11ac2082f68d03096"} Nov 23 06:58:29 crc kubenswrapper[4681]: I1123 06:58:29.261621 4681 scope.go:117] "RemoveContainer" containerID="a16fa0424bdcfa241cbd77d23d9fdd240d2dc522417905caa1f0c169d583c30c" Nov 23 06:58:29 crc kubenswrapper[4681]: I1123 06:58:29.283043 4681 scope.go:117] "RemoveContainer" containerID="e9aecf4d0c22afa9c63c9273f57bada72e4b4b0f1da5bf40cc617f86a6f355ab" Nov 23 06:58:29 crc kubenswrapper[4681]: I1123 06:58:29.309361 4681 scope.go:117] "RemoveContainer" containerID="49b564b6bc98927767f15d273197f29da5e25c714f62256f12949c6452a53590" Nov 23 06:58:29 crc kubenswrapper[4681]: I1123 06:58:29.334057 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ac86f95-c79b-40bf-82be-a4e91bd44539-utilities\") pod \"5ac86f95-c79b-40bf-82be-a4e91bd44539\" (UID: \"5ac86f95-c79b-40bf-82be-a4e91bd44539\") " Nov 23 06:58:29 crc kubenswrapper[4681]: I1123 06:58:29.334214 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7897\" (UniqueName: \"kubernetes.io/projected/5ac86f95-c79b-40bf-82be-a4e91bd44539-kube-api-access-g7897\") pod \"5ac86f95-c79b-40bf-82be-a4e91bd44539\" (UID: \"5ac86f95-c79b-40bf-82be-a4e91bd44539\") " Nov 23 06:58:29 crc kubenswrapper[4681]: I1123 06:58:29.334298 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ac86f95-c79b-40bf-82be-a4e91bd44539-catalog-content\") pod \"5ac86f95-c79b-40bf-82be-a4e91bd44539\" (UID: \"5ac86f95-c79b-40bf-82be-a4e91bd44539\") " Nov 23 06:58:29 crc kubenswrapper[4681]: I1123 06:58:29.335162 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ac86f95-c79b-40bf-82be-a4e91bd44539-utilities" (OuterVolumeSpecName: "utilities") pod "5ac86f95-c79b-40bf-82be-a4e91bd44539" (UID: "5ac86f95-c79b-40bf-82be-a4e91bd44539"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:58:29 crc kubenswrapper[4681]: I1123 06:58:29.338935 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ac86f95-c79b-40bf-82be-a4e91bd44539-kube-api-access-g7897" (OuterVolumeSpecName: "kube-api-access-g7897") pod "5ac86f95-c79b-40bf-82be-a4e91bd44539" (UID: "5ac86f95-c79b-40bf-82be-a4e91bd44539"). InnerVolumeSpecName "kube-api-access-g7897". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:58:29 crc kubenswrapper[4681]: I1123 06:58:29.411292 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ac86f95-c79b-40bf-82be-a4e91bd44539-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ac86f95-c79b-40bf-82be-a4e91bd44539" (UID: "5ac86f95-c79b-40bf-82be-a4e91bd44539"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:58:29 crc kubenswrapper[4681]: I1123 06:58:29.436752 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7897\" (UniqueName: \"kubernetes.io/projected/5ac86f95-c79b-40bf-82be-a4e91bd44539-kube-api-access-g7897\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:29 crc kubenswrapper[4681]: I1123 06:58:29.436785 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ac86f95-c79b-40bf-82be-a4e91bd44539-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:29 crc kubenswrapper[4681]: I1123 06:58:29.436795 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ac86f95-c79b-40bf-82be-a4e91bd44539-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:29 crc kubenswrapper[4681]: I1123 06:58:29.458905 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 23 06:58:29 crc kubenswrapper[4681]: W1123 06:58:29.459158 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6ee6071_8297_4e7e_9c1c_c16b9c7b2ec3.slice/crio-20c933803b0aa007cf7b9c6bf0393c3fe1197af255309cd5ca1bb6aa1a75ef40 WatchSource:0}: Error finding container 20c933803b0aa007cf7b9c6bf0393c3fe1197af255309cd5ca1bb6aa1a75ef40: Status 404 returned error can't find the container with id 20c933803b0aa007cf7b9c6bf0393c3fe1197af255309cd5ca1bb6aa1a75ef40 Nov 23 06:58:29 crc kubenswrapper[4681]: I1123 06:58:29.587599 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pbdxh"] Nov 23 06:58:29 crc kubenswrapper[4681]: I1123 06:58:29.591753 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pbdxh"] Nov 23 06:58:30 crc kubenswrapper[4681]: I1123 06:58:30.262557 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3","Type":"ContainerStarted","Data":"20c933803b0aa007cf7b9c6bf0393c3fe1197af255309cd5ca1bb6aa1a75ef40"} Nov 23 06:58:30 crc kubenswrapper[4681]: I1123 06:58:30.264991 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-glbxl" event={"ID":"60d0f758-c36c-459d-90ac-326fbf9faa1c","Type":"ContainerStarted","Data":"81ae5e7ab8b2ac99496733254b981c9cddfe97b64c568072e24b2715fe5f4753"} Nov 23 06:58:31 crc kubenswrapper[4681]: I1123 06:58:31.261357 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ac86f95-c79b-40bf-82be-a4e91bd44539" path="/var/lib/kubelet/pods/5ac86f95-c79b-40bf-82be-a4e91bd44539/volumes" Nov 23 06:58:31 crc kubenswrapper[4681]: I1123 06:58:31.277587 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3","Type":"ContainerStarted","Data":"69cf8eb1c0f5fbb9454c9b124c4d293f8f31803b2df3beb7e2e58ac579c2c309"} Nov 23 06:58:31 crc kubenswrapper[4681]: I1123 06:58:31.277657 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3","Type":"ContainerStarted","Data":"f0e8f014fc1659959cb20d211192c506d22b2d27165b451d6bb2cb11359e45e1"} Nov 23 06:58:32 crc kubenswrapper[4681]: I1123 06:58:32.293160 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3","Type":"ContainerStarted","Data":"ba31c724a71efab58049ea9ea400cf4bb9452917a0d3613903e85723d97a3e15"} Nov 23 06:58:32 crc kubenswrapper[4681]: I1123 06:58:32.293565 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3","Type":"ContainerStarted","Data":"370a62574cd4a7170a9621a31e29fef92c661dfbe873b64c5bf50a021385e94c"} Nov 23 06:58:33 crc kubenswrapper[4681]: I1123 06:58:33.288377 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-glbxl" podStartSLOduration=5.6851932309999995 podStartE2EDuration="26.288352175s" podCreationTimestamp="2025-11-23 06:58:07 +0000 UTC" firstStartedPulling="2025-11-23 06:58:08.332089588 +0000 UTC m=+825.401598826" lastFinishedPulling="2025-11-23 06:58:28.935248533 +0000 UTC m=+846.004757770" observedRunningTime="2025-11-23 06:58:30.282672884 +0000 UTC m=+847.352182121" watchObservedRunningTime="2025-11-23 06:58:33.288352175 +0000 UTC m=+850.357861412" Nov 23 06:58:33 crc kubenswrapper[4681]: I1123 06:58:33.314971 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3","Type":"ContainerStarted","Data":"9b7d8e3aa5833f660b3259d8a266180b4e0963ba808c76a487294ff10ac4deb1"} Nov 23 06:58:33 crc kubenswrapper[4681]: I1123 06:58:33.315033 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3","Type":"ContainerStarted","Data":"34d67020aa9cbb0f48c7bcfdc08c0da765cd7398fa0f0588377f856e3eb59426"} Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.015655 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.092079 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.404936 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3","Type":"ContainerStarted","Data":"89b3aea54b25e77795df6199ca655c137c18e2bbf2889b9e49b0ab3abb886769"} Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.405280 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3","Type":"ContainerStarted","Data":"67e5ba1849a63b033044710ec4e115c2d273d7acced73b37130bc719abf56886"} Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.419562 4681 generic.go:334] "Generic (PLEG): container finished" podID="60d0f758-c36c-459d-90ac-326fbf9faa1c" containerID="81ae5e7ab8b2ac99496733254b981c9cddfe97b64c568072e24b2715fe5f4753" exitCode=0 Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.419596 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-glbxl" event={"ID":"60d0f758-c36c-459d-90ac-326fbf9faa1c","Type":"ContainerDied","Data":"81ae5e7ab8b2ac99496733254b981c9cddfe97b64c568072e24b2715fe5f4753"} Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.444063 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-4wvpc"] Nov 23 06:58:34 crc kubenswrapper[4681]: E1123 06:58:34.444514 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ef3fa36-62d5-4906-9fb7-ed05b2b31640" containerName="ovn-config" Nov 23 
06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.444528 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef3fa36-62d5-4906-9fb7-ed05b2b31640" containerName="ovn-config" Nov 23 06:58:34 crc kubenswrapper[4681]: E1123 06:58:34.444555 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ac86f95-c79b-40bf-82be-a4e91bd44539" containerName="registry-server" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.444561 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ac86f95-c79b-40bf-82be-a4e91bd44539" containerName="registry-server" Nov 23 06:58:34 crc kubenswrapper[4681]: E1123 06:58:34.444574 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ac86f95-c79b-40bf-82be-a4e91bd44539" containerName="extract-utilities" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.444580 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ac86f95-c79b-40bf-82be-a4e91bd44539" containerName="extract-utilities" Nov 23 06:58:34 crc kubenswrapper[4681]: E1123 06:58:34.444611 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ac86f95-c79b-40bf-82be-a4e91bd44539" containerName="extract-content" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.444617 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ac86f95-c79b-40bf-82be-a4e91bd44539" containerName="extract-content" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.444818 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ac86f95-c79b-40bf-82be-a4e91bd44539" containerName="registry-server" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.444861 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ef3fa36-62d5-4906-9fb7-ed05b2b31640" containerName="ovn-config" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.445495 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4wvpc" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.460030 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-afdb-account-create-p4xln"] Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.460971 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-afdb-account-create-p4xln" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.462552 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.471910 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4wvpc"] Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.478167 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-afdb-account-create-p4xln"] Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.530947 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgqth\" (UniqueName: \"kubernetes.io/projected/d039d81e-cd53-46e4-af64-12e2662c78ba-kube-api-access-fgqth\") pod \"cinder-db-create-4wvpc\" (UID: \"d039d81e-cd53-46e4-af64-12e2662c78ba\") " pod="openstack/cinder-db-create-4wvpc" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.531009 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6301e8d9-766f-447e-a721-6fd63dabc5e2-operator-scripts\") pod \"cinder-afdb-account-create-p4xln\" (UID: \"6301e8d9-766f-447e-a721-6fd63dabc5e2\") " pod="openstack/cinder-afdb-account-create-p4xln" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.531080 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d039d81e-cd53-46e4-af64-12e2662c78ba-operator-scripts\") pod \"cinder-db-create-4wvpc\" (UID: \"d039d81e-cd53-46e4-af64-12e2662c78ba\") " pod="openstack/cinder-db-create-4wvpc" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.531098 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp697\" (UniqueName: \"kubernetes.io/projected/6301e8d9-766f-447e-a721-6fd63dabc5e2-kube-api-access-fp697\") pod \"cinder-afdb-account-create-p4xln\" (UID: \"6301e8d9-766f-447e-a721-6fd63dabc5e2\") " pod="openstack/cinder-afdb-account-create-p4xln" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.568741 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-q2zbz"] Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.570239 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-q2zbz" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.592366 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-q2zbz"] Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.634703 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d039d81e-cd53-46e4-af64-12e2662c78ba-operator-scripts\") pod \"cinder-db-create-4wvpc\" (UID: \"d039d81e-cd53-46e4-af64-12e2662c78ba\") " pod="openstack/cinder-db-create-4wvpc" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.634761 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp697\" (UniqueName: \"kubernetes.io/projected/6301e8d9-766f-447e-a721-6fd63dabc5e2-kube-api-access-fp697\") pod \"cinder-afdb-account-create-p4xln\" (UID: \"6301e8d9-766f-447e-a721-6fd63dabc5e2\") " pod="openstack/cinder-afdb-account-create-p4xln" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.634911 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgqth\" (UniqueName: \"kubernetes.io/projected/d039d81e-cd53-46e4-af64-12e2662c78ba-kube-api-access-fgqth\") pod \"cinder-db-create-4wvpc\" (UID: \"d039d81e-cd53-46e4-af64-12e2662c78ba\") " pod="openstack/cinder-db-create-4wvpc" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.634943 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5ml9\" (UniqueName: \"kubernetes.io/projected/b6c8c95d-15d6-4b0c-bed1-b49e147f5af9-kube-api-access-q5ml9\") pod \"barbican-db-create-q2zbz\" (UID: \"b6c8c95d-15d6-4b0c-bed1-b49e147f5af9\") " pod="openstack/barbican-db-create-q2zbz" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.635007 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6301e8d9-766f-447e-a721-6fd63dabc5e2-operator-scripts\") pod \"cinder-afdb-account-create-p4xln\" (UID: \"6301e8d9-766f-447e-a721-6fd63dabc5e2\") " pod="openstack/cinder-afdb-account-create-p4xln" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.635029 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6c8c95d-15d6-4b0c-bed1-b49e147f5af9-operator-scripts\") pod \"barbican-db-create-q2zbz\" (UID: \"b6c8c95d-15d6-4b0c-bed1-b49e147f5af9\") " pod="openstack/barbican-db-create-q2zbz" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.635871 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d039d81e-cd53-46e4-af64-12e2662c78ba-operator-scripts\") pod \"cinder-db-create-4wvpc\" (UID: \"d039d81e-cd53-46e4-af64-12e2662c78ba\") " pod="openstack/cinder-db-create-4wvpc" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.636800 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6301e8d9-766f-447e-a721-6fd63dabc5e2-operator-scripts\") pod \"cinder-afdb-account-create-p4xln\" (UID: \"6301e8d9-766f-447e-a721-6fd63dabc5e2\") " pod="openstack/cinder-afdb-account-create-p4xln" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.643651 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-4xttv"] Nov 23 
06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.644939 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-4xttv" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.655892 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-773a-account-create-4dks9"] Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.657073 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-773a-account-create-4dks9" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.660773 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.674283 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-773a-account-create-4dks9"] Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.692903 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgqth\" (UniqueName: \"kubernetes.io/projected/d039d81e-cd53-46e4-af64-12e2662c78ba-kube-api-access-fgqth\") pod \"cinder-db-create-4wvpc\" (UID: \"d039d81e-cd53-46e4-af64-12e2662c78ba\") " pod="openstack/cinder-db-create-4wvpc" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.693941 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp697\" (UniqueName: \"kubernetes.io/projected/6301e8d9-766f-447e-a721-6fd63dabc5e2-kube-api-access-fp697\") pod \"cinder-afdb-account-create-p4xln\" (UID: \"6301e8d9-766f-447e-a721-6fd63dabc5e2\") " pod="openstack/cinder-afdb-account-create-p4xln" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.736539 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5ml9\" (UniqueName: \"kubernetes.io/projected/b6c8c95d-15d6-4b0c-bed1-b49e147f5af9-kube-api-access-q5ml9\") pod \"barbican-db-create-q2zbz\" (UID: \"b6c8c95d-15d6-4b0c-bed1-b49e147f5af9\") " pod="openstack/barbican-db-create-q2zbz" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.736589 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1d9ea06-43fe-41a8-b588-178c01182a70-operator-scripts\") pod \"heat-773a-account-create-4dks9\" (UID: \"d1d9ea06-43fe-41a8-b588-178c01182a70\") " pod="openstack/heat-773a-account-create-4dks9" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.736617 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qp47\" (UniqueName: \"kubernetes.io/projected/d1d9ea06-43fe-41a8-b588-178c01182a70-kube-api-access-5qp47\") pod \"heat-773a-account-create-4dks9\" (UID: \"d1d9ea06-43fe-41a8-b588-178c01182a70\") " pod="openstack/heat-773a-account-create-4dks9" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.736654 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6c8c95d-15d6-4b0c-bed1-b49e147f5af9-operator-scripts\") pod \"barbican-db-create-q2zbz\" (UID: \"b6c8c95d-15d6-4b0c-bed1-b49e147f5af9\") " pod="openstack/barbican-db-create-q2zbz" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.736683 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjxsg\" (UniqueName: \"kubernetes.io/projected/8ffee548-f423-43f1-955e-4017e65eb1b4-kube-api-access-cjxsg\") pod 
\"heat-db-create-4xttv\" (UID: \"8ffee548-f423-43f1-955e-4017e65eb1b4\") " pod="openstack/heat-db-create-4xttv" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.736779 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ffee548-f423-43f1-955e-4017e65eb1b4-operator-scripts\") pod \"heat-db-create-4xttv\" (UID: \"8ffee548-f423-43f1-955e-4017e65eb1b4\") " pod="openstack/heat-db-create-4xttv" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.742356 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6c8c95d-15d6-4b0c-bed1-b49e147f5af9-operator-scripts\") pod \"barbican-db-create-q2zbz\" (UID: \"b6c8c95d-15d6-4b0c-bed1-b49e147f5af9\") " pod="openstack/barbican-db-create-q2zbz" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.762633 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4wvpc" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.775724 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-afdb-account-create-p4xln" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.798542 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-4xttv"] Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.818125 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5ml9\" (UniqueName: \"kubernetes.io/projected/b6c8c95d-15d6-4b0c-bed1-b49e147f5af9-kube-api-access-q5ml9\") pod \"barbican-db-create-q2zbz\" (UID: \"b6c8c95d-15d6-4b0c-bed1-b49e147f5af9\") " pod="openstack/barbican-db-create-q2zbz" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.839116 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ffee548-f423-43f1-955e-4017e65eb1b4-operator-scripts\") pod \"heat-db-create-4xttv\" (UID: \"8ffee548-f423-43f1-955e-4017e65eb1b4\") " pod="openstack/heat-db-create-4xttv" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.839402 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1d9ea06-43fe-41a8-b588-178c01182a70-operator-scripts\") pod \"heat-773a-account-create-4dks9\" (UID: \"d1d9ea06-43fe-41a8-b588-178c01182a70\") " pod="openstack/heat-773a-account-create-4dks9" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.840851 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qp47\" (UniqueName: \"kubernetes.io/projected/d1d9ea06-43fe-41a8-b588-178c01182a70-kube-api-access-5qp47\") pod \"heat-773a-account-create-4dks9\" (UID: \"d1d9ea06-43fe-41a8-b588-178c01182a70\") " pod="openstack/heat-773a-account-create-4dks9" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.840994 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjxsg\" (UniqueName: \"kubernetes.io/projected/8ffee548-f423-43f1-955e-4017e65eb1b4-kube-api-access-cjxsg\") pod \"heat-db-create-4xttv\" (UID: \"8ffee548-f423-43f1-955e-4017e65eb1b4\") " pod="openstack/heat-db-create-4xttv" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.840680 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d1d9ea06-43fe-41a8-b588-178c01182a70-operator-scripts\") pod \"heat-773a-account-create-4dks9\" (UID: \"d1d9ea06-43fe-41a8-b588-178c01182a70\") " pod="openstack/heat-773a-account-create-4dks9" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.840144 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ffee548-f423-43f1-955e-4017e65eb1b4-operator-scripts\") pod \"heat-db-create-4xttv\" (UID: \"8ffee548-f423-43f1-955e-4017e65eb1b4\") " pod="openstack/heat-db-create-4xttv" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.840621 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-a3aa-account-create-bwsrj"] Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.843254 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a3aa-account-create-bwsrj" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.852898 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.865981 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qp47\" (UniqueName: \"kubernetes.io/projected/d1d9ea06-43fe-41a8-b588-178c01182a70-kube-api-access-5qp47\") pod \"heat-773a-account-create-4dks9\" (UID: \"d1d9ea06-43fe-41a8-b588-178c01182a70\") " pod="openstack/heat-773a-account-create-4dks9" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.894716 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-a3aa-account-create-bwsrj"] Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.895174 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-q2zbz" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.896759 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjxsg\" (UniqueName: \"kubernetes.io/projected/8ffee548-f423-43f1-955e-4017e65eb1b4-kube-api-access-cjxsg\") pod \"heat-db-create-4xttv\" (UID: \"8ffee548-f423-43f1-955e-4017e65eb1b4\") " pod="openstack/heat-db-create-4xttv" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.943001 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6234ce9d-9669-4dbf-957d-7bfd7158639b-operator-scripts\") pod \"barbican-a3aa-account-create-bwsrj\" (UID: \"6234ce9d-9669-4dbf-957d-7bfd7158639b\") " pod="openstack/barbican-a3aa-account-create-bwsrj" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.943107 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmxp7\" (UniqueName: \"kubernetes.io/projected/6234ce9d-9669-4dbf-957d-7bfd7158639b-kube-api-access-vmxp7\") pod \"barbican-a3aa-account-create-bwsrj\" (UID: \"6234ce9d-9669-4dbf-957d-7bfd7158639b\") " pod="openstack/barbican-a3aa-account-create-bwsrj" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.978859 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-4xttv" Nov 23 06:58:34 crc kubenswrapper[4681]: I1123 06:58:34.982744 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-773a-account-create-4dks9" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.045286 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6234ce9d-9669-4dbf-957d-7bfd7158639b-operator-scripts\") pod \"barbican-a3aa-account-create-bwsrj\" (UID: \"6234ce9d-9669-4dbf-957d-7bfd7158639b\") " pod="openstack/barbican-a3aa-account-create-bwsrj" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.046441 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmxp7\" (UniqueName: \"kubernetes.io/projected/6234ce9d-9669-4dbf-957d-7bfd7158639b-kube-api-access-vmxp7\") pod \"barbican-a3aa-account-create-bwsrj\" (UID: \"6234ce9d-9669-4dbf-957d-7bfd7158639b\") " pod="openstack/barbican-a3aa-account-create-bwsrj" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.046257 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6234ce9d-9669-4dbf-957d-7bfd7158639b-operator-scripts\") pod \"barbican-a3aa-account-create-bwsrj\" (UID: \"6234ce9d-9669-4dbf-957d-7bfd7158639b\") " pod="openstack/barbican-a3aa-account-create-bwsrj" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.048539 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-j55vj"] Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.049879 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-j55vj" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.055564 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-4hbr9"] Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.064719 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-j55vj"] Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.064827 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-4hbr9" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.066568 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.070800 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.071256 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-k72qg" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.075108 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.088055 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmxp7\" (UniqueName: \"kubernetes.io/projected/6234ce9d-9669-4dbf-957d-7bfd7158639b-kube-api-access-vmxp7\") pod \"barbican-a3aa-account-create-bwsrj\" (UID: \"6234ce9d-9669-4dbf-957d-7bfd7158639b\") " pod="openstack/barbican-a3aa-account-create-bwsrj" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.114582 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4hbr9"] Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.150689 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cdb5691-8434-40bd-9103-1ebee6a25d76-combined-ca-bundle\") pod \"keystone-db-sync-4hbr9\" (UID: \"7cdb5691-8434-40bd-9103-1ebee6a25d76\") " pod="openstack/keystone-db-sync-4hbr9" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.150870 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjjnb\" (UniqueName: \"kubernetes.io/projected/7cdb5691-8434-40bd-9103-1ebee6a25d76-kube-api-access-mjjnb\") pod \"keystone-db-sync-4hbr9\" (UID: \"7cdb5691-8434-40bd-9103-1ebee6a25d76\") " pod="openstack/keystone-db-sync-4hbr9" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.150899 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da18abb7-6233-4690-acad-41137f3ba686-operator-scripts\") pod \"neutron-db-create-j55vj\" (UID: \"da18abb7-6233-4690-acad-41137f3ba686\") " pod="openstack/neutron-db-create-j55vj" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.150922 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cdb5691-8434-40bd-9103-1ebee6a25d76-config-data\") pod \"keystone-db-sync-4hbr9\" (UID: \"7cdb5691-8434-40bd-9103-1ebee6a25d76\") " pod="openstack/keystone-db-sync-4hbr9" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.150943 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptrt6\" (UniqueName: \"kubernetes.io/projected/da18abb7-6233-4690-acad-41137f3ba686-kube-api-access-ptrt6\") pod \"neutron-db-create-j55vj\" (UID: \"da18abb7-6233-4690-acad-41137f3ba686\") " pod="openstack/neutron-db-create-j55vj" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.208164 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-a3aa-account-create-bwsrj" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.252306 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cdb5691-8434-40bd-9103-1ebee6a25d76-combined-ca-bundle\") pod \"keystone-db-sync-4hbr9\" (UID: \"7cdb5691-8434-40bd-9103-1ebee6a25d76\") " pod="openstack/keystone-db-sync-4hbr9" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.252381 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjjnb\" (UniqueName: \"kubernetes.io/projected/7cdb5691-8434-40bd-9103-1ebee6a25d76-kube-api-access-mjjnb\") pod \"keystone-db-sync-4hbr9\" (UID: \"7cdb5691-8434-40bd-9103-1ebee6a25d76\") " pod="openstack/keystone-db-sync-4hbr9" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.252410 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da18abb7-6233-4690-acad-41137f3ba686-operator-scripts\") pod \"neutron-db-create-j55vj\" (UID: \"da18abb7-6233-4690-acad-41137f3ba686\") " pod="openstack/neutron-db-create-j55vj" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.252439 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cdb5691-8434-40bd-9103-1ebee6a25d76-config-data\") pod \"keystone-db-sync-4hbr9\" (UID: \"7cdb5691-8434-40bd-9103-1ebee6a25d76\") " pod="openstack/keystone-db-sync-4hbr9" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.252472 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptrt6\" (UniqueName: \"kubernetes.io/projected/da18abb7-6233-4690-acad-41137f3ba686-kube-api-access-ptrt6\") pod \"neutron-db-create-j55vj\" (UID: \"da18abb7-6233-4690-acad-41137f3ba686\") " pod="openstack/neutron-db-create-j55vj" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.253677 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da18abb7-6233-4690-acad-41137f3ba686-operator-scripts\") pod \"neutron-db-create-j55vj\" (UID: \"da18abb7-6233-4690-acad-41137f3ba686\") " pod="openstack/neutron-db-create-j55vj" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.270859 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cdb5691-8434-40bd-9103-1ebee6a25d76-config-data\") pod \"keystone-db-sync-4hbr9\" (UID: \"7cdb5691-8434-40bd-9103-1ebee6a25d76\") " pod="openstack/keystone-db-sync-4hbr9" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.271933 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cdb5691-8434-40bd-9103-1ebee6a25d76-combined-ca-bundle\") pod \"keystone-db-sync-4hbr9\" (UID: \"7cdb5691-8434-40bd-9103-1ebee6a25d76\") " pod="openstack/keystone-db-sync-4hbr9" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.276102 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptrt6\" (UniqueName: \"kubernetes.io/projected/da18abb7-6233-4690-acad-41137f3ba686-kube-api-access-ptrt6\") pod \"neutron-db-create-j55vj\" (UID: \"da18abb7-6233-4690-acad-41137f3ba686\") " pod="openstack/neutron-db-create-j55vj" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.277856 4681 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjjnb\" (UniqueName: \"kubernetes.io/projected/7cdb5691-8434-40bd-9103-1ebee6a25d76-kube-api-access-mjjnb\") pod \"keystone-db-sync-4hbr9\" (UID: \"7cdb5691-8434-40bd-9103-1ebee6a25d76\") " pod="openstack/keystone-db-sync-4hbr9" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.335525 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-340e-account-create-5vbwn"] Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.339443 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-340e-account-create-5vbwn" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.341743 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.376581 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-340e-account-create-5vbwn"] Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.384102 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-j55vj" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.406272 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4hbr9" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.457619 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2cbf1de-321e-4495-b9ab-b2e4c9758321-operator-scripts\") pod \"neutron-340e-account-create-5vbwn\" (UID: \"a2cbf1de-321e-4495-b9ab-b2e4c9758321\") " pod="openstack/neutron-340e-account-create-5vbwn" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.457670 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtrbs\" (UniqueName: \"kubernetes.io/projected/a2cbf1de-321e-4495-b9ab-b2e4c9758321-kube-api-access-jtrbs\") pod \"neutron-340e-account-create-5vbwn\" (UID: \"a2cbf1de-321e-4495-b9ab-b2e4c9758321\") " pod="openstack/neutron-340e-account-create-5vbwn" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.559908 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2cbf1de-321e-4495-b9ab-b2e4c9758321-operator-scripts\") pod \"neutron-340e-account-create-5vbwn\" (UID: \"a2cbf1de-321e-4495-b9ab-b2e4c9758321\") " pod="openstack/neutron-340e-account-create-5vbwn" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.559951 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtrbs\" (UniqueName: \"kubernetes.io/projected/a2cbf1de-321e-4495-b9ab-b2e4c9758321-kube-api-access-jtrbs\") pod \"neutron-340e-account-create-5vbwn\" (UID: \"a2cbf1de-321e-4495-b9ab-b2e4c9758321\") " pod="openstack/neutron-340e-account-create-5vbwn" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.561535 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2cbf1de-321e-4495-b9ab-b2e4c9758321-operator-scripts\") pod \"neutron-340e-account-create-5vbwn\" (UID: \"a2cbf1de-321e-4495-b9ab-b2e4c9758321\") " pod="openstack/neutron-340e-account-create-5vbwn" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.576033 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-jtrbs\" (UniqueName: \"kubernetes.io/projected/a2cbf1de-321e-4495-b9ab-b2e4c9758321-kube-api-access-jtrbs\") pod \"neutron-340e-account-create-5vbwn\" (UID: \"a2cbf1de-321e-4495-b9ab-b2e4c9758321\") " pod="openstack/neutron-340e-account-create-5vbwn" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.672348 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-340e-account-create-5vbwn" Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.747543 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-q2zbz"] Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.772630 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4wvpc"] Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.790142 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-afdb-account-create-p4xln"] Nov 23 06:58:35 crc kubenswrapper[4681]: I1123 06:58:35.899331 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-4xttv"] Nov 23 06:58:36 crc kubenswrapper[4681]: I1123 06:58:36.075376 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-773a-account-create-4dks9"] Nov 23 06:58:36 crc kubenswrapper[4681]: I1123 06:58:36.119500 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-j55vj"] Nov 23 06:58:36 crc kubenswrapper[4681]: I1123 06:58:36.131770 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-a3aa-account-create-bwsrj"] Nov 23 06:58:36 crc kubenswrapper[4681]: I1123 06:58:36.333433 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4hbr9"] Nov 23 06:58:36 crc kubenswrapper[4681]: I1123 06:58:36.423575 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-340e-account-create-5vbwn"] Nov 23 06:58:36 crc kubenswrapper[4681]: I1123 06:58:36.469929 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-glbxl" event={"ID":"60d0f758-c36c-459d-90ac-326fbf9faa1c","Type":"ContainerDied","Data":"eb8fde526801484a45923f12e09de005b90608ff7b585b316d034ce9b2bcfb91"} Nov 23 06:58:36 crc kubenswrapper[4681]: I1123 06:58:36.469974 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb8fde526801484a45923f12e09de005b90608ff7b585b316d034ce9b2bcfb91" Nov 23 06:58:36 crc kubenswrapper[4681]: I1123 06:58:36.471107 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-4xttv" event={"ID":"8ffee548-f423-43f1-955e-4017e65eb1b4","Type":"ContainerStarted","Data":"08ec8c32b1c80eef440c5c0178856eeb3c321ecc6a57a0314148d0accaee99af"} Nov 23 06:58:36 crc kubenswrapper[4681]: I1123 06:58:36.472293 4681 generic.go:334] "Generic (PLEG): container finished" podID="b6c8c95d-15d6-4b0c-bed1-b49e147f5af9" containerID="b3f3890034db190f4eaf8b2995c00966dc2ee9134c97a8a679fd748324fa7b28" exitCode=0 Nov 23 06:58:36 crc kubenswrapper[4681]: I1123 06:58:36.472341 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-q2zbz" event={"ID":"b6c8c95d-15d6-4b0c-bed1-b49e147f5af9","Type":"ContainerDied","Data":"b3f3890034db190f4eaf8b2995c00966dc2ee9134c97a8a679fd748324fa7b28"} Nov 23 06:58:36 crc kubenswrapper[4681]: I1123 06:58:36.472357 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-q2zbz" 
event={"ID":"b6c8c95d-15d6-4b0c-bed1-b49e147f5af9","Type":"ContainerStarted","Data":"cf8a1ad23afdca2b770bfb70bb8f320660a362a2a41616317903f45e02af51c3"} Nov 23 06:58:36 crc kubenswrapper[4681]: I1123 06:58:36.475236 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-afdb-account-create-p4xln" event={"ID":"6301e8d9-766f-447e-a721-6fd63dabc5e2","Type":"ContainerStarted","Data":"3533c253c9fb422c916f155873a2cb78d220a6f9dc2713c45c6c67d32bde19f0"} Nov 23 06:58:36 crc kubenswrapper[4681]: I1123 06:58:36.475431 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-afdb-account-create-p4xln" event={"ID":"6301e8d9-766f-447e-a721-6fd63dabc5e2","Type":"ContainerStarted","Data":"06db2686fccd700d7fc11087bacd03e3782559cba9699c8ea3840c8d87d84472"} Nov 23 06:58:36 crc kubenswrapper[4681]: I1123 06:58:36.481377 4681 generic.go:334] "Generic (PLEG): container finished" podID="d039d81e-cd53-46e4-af64-12e2662c78ba" containerID="5d11b103752e85d88efb4caba8ad8e32a71b412b2aff6ce6ea8e9d9bab58a550" exitCode=0 Nov 23 06:58:36 crc kubenswrapper[4681]: I1123 06:58:36.481411 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4wvpc" event={"ID":"d039d81e-cd53-46e4-af64-12e2662c78ba","Type":"ContainerDied","Data":"5d11b103752e85d88efb4caba8ad8e32a71b412b2aff6ce6ea8e9d9bab58a550"} Nov 23 06:58:36 crc kubenswrapper[4681]: I1123 06:58:36.481430 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4wvpc" event={"ID":"d039d81e-cd53-46e4-af64-12e2662c78ba","Type":"ContainerStarted","Data":"7f73803135f7452a402df70f947b68b74a5a36bb8435bcc226152a1273b489d6"} Nov 23 06:58:36 crc kubenswrapper[4681]: W1123 06:58:36.483021 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2cbf1de_321e_4495_b9ab_b2e4c9758321.slice/crio-92e7c4c27cba440b6106d35ab0fd75980b51e22231f54556d06780bfe5d0fce2 WatchSource:0}: Error finding container 92e7c4c27cba440b6106d35ab0fd75980b51e22231f54556d06780bfe5d0fce2: Status 404 returned error can't find the container with id 92e7c4c27cba440b6106d35ab0fd75980b51e22231f54556d06780bfe5d0fce2 Nov 23 06:58:36 crc kubenswrapper[4681]: W1123 06:58:36.483377 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cdb5691_8434_40bd_9103_1ebee6a25d76.slice/crio-0082545b578e996bd33fbe2177849140cc34e802006b08cebef573c913b55267 WatchSource:0}: Error finding container 0082545b578e996bd33fbe2177849140cc34e802006b08cebef573c913b55267: Status 404 returned error can't find the container with id 0082545b578e996bd33fbe2177849140cc34e802006b08cebef573c913b55267 Nov 23 06:58:36 crc kubenswrapper[4681]: W1123 06:58:36.491986 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1d9ea06_43fe_41a8_b588_178c01182a70.slice/crio-a3db5eac393be328f74130c99fe75f760a2ba783e769e2bf079a84dda3da3ab2 WatchSource:0}: Error finding container a3db5eac393be328f74130c99fe75f760a2ba783e769e2bf079a84dda3da3ab2: Status 404 returned error can't find the container with id a3db5eac393be328f74130c99fe75f760a2ba783e769e2bf079a84dda3da3ab2 Nov 23 06:58:36 crc kubenswrapper[4681]: I1123 06:58:36.506131 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-afdb-account-create-p4xln" podStartSLOduration=2.506118697 podStartE2EDuration="2.506118697s" 
podCreationTimestamp="2025-11-23 06:58:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:58:36.498177995 +0000 UTC m=+853.567687231" watchObservedRunningTime="2025-11-23 06:58:36.506118697 +0000 UTC m=+853.575627934" Nov 23 06:58:36 crc kubenswrapper[4681]: W1123 06:58:36.528566 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6234ce9d_9669_4dbf_957d_7bfd7158639b.slice/crio-20ef50971b97693102f03345f4745c45d461784d1e25789d7e69b4b2e513e039 WatchSource:0}: Error finding container 20ef50971b97693102f03345f4745c45d461784d1e25789d7e69b4b2e513e039: Status 404 returned error can't find the container with id 20ef50971b97693102f03345f4745c45d461784d1e25789d7e69b4b2e513e039 Nov 23 06:58:36 crc kubenswrapper[4681]: I1123 06:58:36.532982 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-glbxl" Nov 23 06:58:36 crc kubenswrapper[4681]: I1123 06:58:36.588000 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-745k2\" (UniqueName: \"kubernetes.io/projected/60d0f758-c36c-459d-90ac-326fbf9faa1c-kube-api-access-745k2\") pod \"60d0f758-c36c-459d-90ac-326fbf9faa1c\" (UID: \"60d0f758-c36c-459d-90ac-326fbf9faa1c\") " Nov 23 06:58:36 crc kubenswrapper[4681]: I1123 06:58:36.588308 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d0f758-c36c-459d-90ac-326fbf9faa1c-combined-ca-bundle\") pod \"60d0f758-c36c-459d-90ac-326fbf9faa1c\" (UID: \"60d0f758-c36c-459d-90ac-326fbf9faa1c\") " Nov 23 06:58:36 crc kubenswrapper[4681]: I1123 06:58:36.588392 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/60d0f758-c36c-459d-90ac-326fbf9faa1c-db-sync-config-data\") pod \"60d0f758-c36c-459d-90ac-326fbf9faa1c\" (UID: \"60d0f758-c36c-459d-90ac-326fbf9faa1c\") " Nov 23 06:58:36 crc kubenswrapper[4681]: I1123 06:58:36.588565 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60d0f758-c36c-459d-90ac-326fbf9faa1c-config-data\") pod \"60d0f758-c36c-459d-90ac-326fbf9faa1c\" (UID: \"60d0f758-c36c-459d-90ac-326fbf9faa1c\") " Nov 23 06:58:36 crc kubenswrapper[4681]: I1123 06:58:36.596878 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60d0f758-c36c-459d-90ac-326fbf9faa1c-kube-api-access-745k2" (OuterVolumeSpecName: "kube-api-access-745k2") pod "60d0f758-c36c-459d-90ac-326fbf9faa1c" (UID: "60d0f758-c36c-459d-90ac-326fbf9faa1c"). InnerVolumeSpecName "kube-api-access-745k2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:58:36 crc kubenswrapper[4681]: I1123 06:58:36.606699 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60d0f758-c36c-459d-90ac-326fbf9faa1c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "60d0f758-c36c-459d-90ac-326fbf9faa1c" (UID: "60d0f758-c36c-459d-90ac-326fbf9faa1c"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:58:36 crc kubenswrapper[4681]: I1123 06:58:36.611712 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60d0f758-c36c-459d-90ac-326fbf9faa1c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60d0f758-c36c-459d-90ac-326fbf9faa1c" (UID: "60d0f758-c36c-459d-90ac-326fbf9faa1c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:58:36 crc kubenswrapper[4681]: I1123 06:58:36.695356 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-745k2\" (UniqueName: \"kubernetes.io/projected/60d0f758-c36c-459d-90ac-326fbf9faa1c-kube-api-access-745k2\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:36 crc kubenswrapper[4681]: I1123 06:58:36.695380 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d0f758-c36c-459d-90ac-326fbf9faa1c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:36 crc kubenswrapper[4681]: I1123 06:58:36.695390 4681 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/60d0f758-c36c-459d-90ac-326fbf9faa1c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:36 crc kubenswrapper[4681]: I1123 06:58:36.797955 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60d0f758-c36c-459d-90ac-326fbf9faa1c-config-data" (OuterVolumeSpecName: "config-data") pod "60d0f758-c36c-459d-90ac-326fbf9faa1c" (UID: "60d0f758-c36c-459d-90ac-326fbf9faa1c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:58:36 crc kubenswrapper[4681]: I1123 06:58:36.898669 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60d0f758-c36c-459d-90ac-326fbf9faa1c-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:37 crc kubenswrapper[4681]: I1123 06:58:37.499378 4681 generic.go:334] "Generic (PLEG): container finished" podID="8ffee548-f423-43f1-955e-4017e65eb1b4" containerID="4c03e16e1b1a18b641ddf0a6727c0267f55102912d0b40ae79845c6ae410eda0" exitCode=0 Nov 23 06:58:37 crc kubenswrapper[4681]: I1123 06:58:37.499631 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-4xttv" event={"ID":"8ffee548-f423-43f1-955e-4017e65eb1b4","Type":"ContainerDied","Data":"4c03e16e1b1a18b641ddf0a6727c0267f55102912d0b40ae79845c6ae410eda0"} Nov 23 06:58:37 crc kubenswrapper[4681]: I1123 06:58:37.501684 4681 generic.go:334] "Generic (PLEG): container finished" podID="a2cbf1de-321e-4495-b9ab-b2e4c9758321" containerID="87639a698335f091f4d7951e41ac561500d62b8e18ce8bea7045caefbbbfa662" exitCode=0 Nov 23 06:58:37 crc kubenswrapper[4681]: I1123 06:58:37.501838 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-340e-account-create-5vbwn" event={"ID":"a2cbf1de-321e-4495-b9ab-b2e4c9758321","Type":"ContainerDied","Data":"87639a698335f091f4d7951e41ac561500d62b8e18ce8bea7045caefbbbfa662"} Nov 23 06:58:37 crc kubenswrapper[4681]: I1123 06:58:37.501903 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-340e-account-create-5vbwn" event={"ID":"a2cbf1de-321e-4495-b9ab-b2e4c9758321","Type":"ContainerStarted","Data":"92e7c4c27cba440b6106d35ab0fd75980b51e22231f54556d06780bfe5d0fce2"} Nov 23 06:58:37 crc kubenswrapper[4681]: I1123 06:58:37.505800 4681 generic.go:334] 
"Generic (PLEG): container finished" podID="da18abb7-6233-4690-acad-41137f3ba686" containerID="c3ed2852f4355225316daa379e72c1fff794894985c89ce543fd5d11f7fec8a4" exitCode=0 Nov 23 06:58:37 crc kubenswrapper[4681]: I1123 06:58:37.505867 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-j55vj" event={"ID":"da18abb7-6233-4690-acad-41137f3ba686","Type":"ContainerDied","Data":"c3ed2852f4355225316daa379e72c1fff794894985c89ce543fd5d11f7fec8a4"} Nov 23 06:58:37 crc kubenswrapper[4681]: I1123 06:58:37.505892 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-j55vj" event={"ID":"da18abb7-6233-4690-acad-41137f3ba686","Type":"ContainerStarted","Data":"8ccc87541d3c15a2b9bcd8b36f9843d84e488bfc33e174909cdd801ea0332f46"} Nov 23 06:58:37 crc kubenswrapper[4681]: I1123 06:58:37.515642 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3","Type":"ContainerStarted","Data":"c37a44d81d08fd549d54379723b3aa8133099769842f577e4bfa27ac37f6f95d"} Nov 23 06:58:37 crc kubenswrapper[4681]: I1123 06:58:37.515754 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3","Type":"ContainerStarted","Data":"d47c22ca3764e6707d07ad3737248b326f5dff529e656bf3bc08e9dbcbd715ef"} Nov 23 06:58:37 crc kubenswrapper[4681]: I1123 06:58:37.515821 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3","Type":"ContainerStarted","Data":"ced795d8244fc016a35e89c642678573dc4457756eafbb791085e17ca0afa343"} Nov 23 06:58:37 crc kubenswrapper[4681]: I1123 06:58:37.517271 4681 generic.go:334] "Generic (PLEG): container finished" podID="6301e8d9-766f-447e-a721-6fd63dabc5e2" containerID="3533c253c9fb422c916f155873a2cb78d220a6f9dc2713c45c6c67d32bde19f0" exitCode=0 Nov 23 06:58:37 crc kubenswrapper[4681]: I1123 06:58:37.517363 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-afdb-account-create-p4xln" event={"ID":"6301e8d9-766f-447e-a721-6fd63dabc5e2","Type":"ContainerDied","Data":"3533c253c9fb422c916f155873a2cb78d220a6f9dc2713c45c6c67d32bde19f0"} Nov 23 06:58:37 crc kubenswrapper[4681]: I1123 06:58:37.518604 4681 generic.go:334] "Generic (PLEG): container finished" podID="d1d9ea06-43fe-41a8-b588-178c01182a70" containerID="5e6cc19701d2deefc40d5f97142a965eb33c5fbedd505213e104c8964b30492a" exitCode=0 Nov 23 06:58:37 crc kubenswrapper[4681]: I1123 06:58:37.518718 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-773a-account-create-4dks9" event={"ID":"d1d9ea06-43fe-41a8-b588-178c01182a70","Type":"ContainerDied","Data":"5e6cc19701d2deefc40d5f97142a965eb33c5fbedd505213e104c8964b30492a"} Nov 23 06:58:37 crc kubenswrapper[4681]: I1123 06:58:37.518794 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-773a-account-create-4dks9" event={"ID":"d1d9ea06-43fe-41a8-b588-178c01182a70","Type":"ContainerStarted","Data":"a3db5eac393be328f74130c99fe75f760a2ba783e769e2bf079a84dda3da3ab2"} Nov 23 06:58:37 crc kubenswrapper[4681]: I1123 06:58:37.520864 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4hbr9" event={"ID":"7cdb5691-8434-40bd-9103-1ebee6a25d76","Type":"ContainerStarted","Data":"0082545b578e996bd33fbe2177849140cc34e802006b08cebef573c913b55267"} Nov 23 06:58:37 crc kubenswrapper[4681]: I1123 06:58:37.523589 4681 generic.go:334] 
"Generic (PLEG): container finished" podID="6234ce9d-9669-4dbf-957d-7bfd7158639b" containerID="0a7c66b2e81c222f4494a82122fc66d89d15d0355e29d4a7200380d7fb3dee10" exitCode=0 Nov 23 06:58:37 crc kubenswrapper[4681]: I1123 06:58:37.523612 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a3aa-account-create-bwsrj" event={"ID":"6234ce9d-9669-4dbf-957d-7bfd7158639b","Type":"ContainerDied","Data":"0a7c66b2e81c222f4494a82122fc66d89d15d0355e29d4a7200380d7fb3dee10"} Nov 23 06:58:37 crc kubenswrapper[4681]: I1123 06:58:37.523749 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a3aa-account-create-bwsrj" event={"ID":"6234ce9d-9669-4dbf-957d-7bfd7158639b","Type":"ContainerStarted","Data":"20ef50971b97693102f03345f4745c45d461784d1e25789d7e69b4b2e513e039"} Nov 23 06:58:37 crc kubenswrapper[4681]: I1123 06:58:37.523799 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-glbxl" Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.027761 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89d747df-57hzx"] Nov 23 06:58:38 crc kubenswrapper[4681]: E1123 06:58:38.028347 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60d0f758-c36c-459d-90ac-326fbf9faa1c" containerName="glance-db-sync" Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.028362 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="60d0f758-c36c-459d-90ac-326fbf9faa1c" containerName="glance-db-sync" Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.028592 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="60d0f758-c36c-459d-90ac-326fbf9faa1c" containerName="glance-db-sync" Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.029409 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89d747df-57hzx" Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.070583 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89d747df-57hzx"] Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.107308 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-4wvpc" Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.158783 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9518d64-11d8-4322-a18a-06ba9c7c2824-config\") pod \"dnsmasq-dns-89d747df-57hzx\" (UID: \"f9518d64-11d8-4322-a18a-06ba9c7c2824\") " pod="openstack/dnsmasq-dns-89d747df-57hzx" Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.158898 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9518d64-11d8-4322-a18a-06ba9c7c2824-ovsdbserver-nb\") pod \"dnsmasq-dns-89d747df-57hzx\" (UID: \"f9518d64-11d8-4322-a18a-06ba9c7c2824\") " pod="openstack/dnsmasq-dns-89d747df-57hzx" Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.158948 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khxkh\" (UniqueName: \"kubernetes.io/projected/f9518d64-11d8-4322-a18a-06ba9c7c2824-kube-api-access-khxkh\") pod \"dnsmasq-dns-89d747df-57hzx\" (UID: \"f9518d64-11d8-4322-a18a-06ba9c7c2824\") " pod="openstack/dnsmasq-dns-89d747df-57hzx" Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.159015 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9518d64-11d8-4322-a18a-06ba9c7c2824-ovsdbserver-sb\") pod \"dnsmasq-dns-89d747df-57hzx\" (UID: \"f9518d64-11d8-4322-a18a-06ba9c7c2824\") " pod="openstack/dnsmasq-dns-89d747df-57hzx" Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.159112 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9518d64-11d8-4322-a18a-06ba9c7c2824-dns-svc\") pod \"dnsmasq-dns-89d747df-57hzx\" (UID: \"f9518d64-11d8-4322-a18a-06ba9c7c2824\") " pod="openstack/dnsmasq-dns-89d747df-57hzx" Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.260707 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d039d81e-cd53-46e4-af64-12e2662c78ba-operator-scripts\") pod \"d039d81e-cd53-46e4-af64-12e2662c78ba\" (UID: \"d039d81e-cd53-46e4-af64-12e2662c78ba\") " Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.260841 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgqth\" (UniqueName: \"kubernetes.io/projected/d039d81e-cd53-46e4-af64-12e2662c78ba-kube-api-access-fgqth\") pod \"d039d81e-cd53-46e4-af64-12e2662c78ba\" (UID: \"d039d81e-cd53-46e4-af64-12e2662c78ba\") " Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.261280 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9518d64-11d8-4322-a18a-06ba9c7c2824-dns-svc\") pod \"dnsmasq-dns-89d747df-57hzx\" (UID: \"f9518d64-11d8-4322-a18a-06ba9c7c2824\") " pod="openstack/dnsmasq-dns-89d747df-57hzx" Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.261516 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9518d64-11d8-4322-a18a-06ba9c7c2824-config\") pod \"dnsmasq-dns-89d747df-57hzx\" (UID: \"f9518d64-11d8-4322-a18a-06ba9c7c2824\") " pod="openstack/dnsmasq-dns-89d747df-57hzx" Nov 23 06:58:38 crc 
kubenswrapper[4681]: I1123 06:58:38.261609 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9518d64-11d8-4322-a18a-06ba9c7c2824-ovsdbserver-nb\") pod \"dnsmasq-dns-89d747df-57hzx\" (UID: \"f9518d64-11d8-4322-a18a-06ba9c7c2824\") " pod="openstack/dnsmasq-dns-89d747df-57hzx" Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.261650 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khxkh\" (UniqueName: \"kubernetes.io/projected/f9518d64-11d8-4322-a18a-06ba9c7c2824-kube-api-access-khxkh\") pod \"dnsmasq-dns-89d747df-57hzx\" (UID: \"f9518d64-11d8-4322-a18a-06ba9c7c2824\") " pod="openstack/dnsmasq-dns-89d747df-57hzx" Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.261747 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9518d64-11d8-4322-a18a-06ba9c7c2824-ovsdbserver-sb\") pod \"dnsmasq-dns-89d747df-57hzx\" (UID: \"f9518d64-11d8-4322-a18a-06ba9c7c2824\") " pod="openstack/dnsmasq-dns-89d747df-57hzx" Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.263145 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9518d64-11d8-4322-a18a-06ba9c7c2824-dns-svc\") pod \"dnsmasq-dns-89d747df-57hzx\" (UID: \"f9518d64-11d8-4322-a18a-06ba9c7c2824\") " pod="openstack/dnsmasq-dns-89d747df-57hzx" Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.263821 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d039d81e-cd53-46e4-af64-12e2662c78ba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d039d81e-cd53-46e4-af64-12e2662c78ba" (UID: "d039d81e-cd53-46e4-af64-12e2662c78ba"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.264844 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9518d64-11d8-4322-a18a-06ba9c7c2824-config\") pod \"dnsmasq-dns-89d747df-57hzx\" (UID: \"f9518d64-11d8-4322-a18a-06ba9c7c2824\") " pod="openstack/dnsmasq-dns-89d747df-57hzx" Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.267420 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9518d64-11d8-4322-a18a-06ba9c7c2824-ovsdbserver-nb\") pod \"dnsmasq-dns-89d747df-57hzx\" (UID: \"f9518d64-11d8-4322-a18a-06ba9c7c2824\") " pod="openstack/dnsmasq-dns-89d747df-57hzx" Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.268812 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9518d64-11d8-4322-a18a-06ba9c7c2824-ovsdbserver-sb\") pod \"dnsmasq-dns-89d747df-57hzx\" (UID: \"f9518d64-11d8-4322-a18a-06ba9c7c2824\") " pod="openstack/dnsmasq-dns-89d747df-57hzx" Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.270312 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d039d81e-cd53-46e4-af64-12e2662c78ba-kube-api-access-fgqth" (OuterVolumeSpecName: "kube-api-access-fgqth") pod "d039d81e-cd53-46e4-af64-12e2662c78ba" (UID: "d039d81e-cd53-46e4-af64-12e2662c78ba"). InnerVolumeSpecName "kube-api-access-fgqth". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.281225 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khxkh\" (UniqueName: \"kubernetes.io/projected/f9518d64-11d8-4322-a18a-06ba9c7c2824-kube-api-access-khxkh\") pod \"dnsmasq-dns-89d747df-57hzx\" (UID: \"f9518d64-11d8-4322-a18a-06ba9c7c2824\") " pod="openstack/dnsmasq-dns-89d747df-57hzx" Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.373455 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d039d81e-cd53-46e4-af64-12e2662c78ba-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.373767 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgqth\" (UniqueName: \"kubernetes.io/projected/d039d81e-cd53-46e4-af64-12e2662c78ba-kube-api-access-fgqth\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.415169 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89d747df-57hzx" Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.578974 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3","Type":"ContainerStarted","Data":"a505af6479aa21d34e35ca5a2ec1df35ac1a8820949989193378b43be8f9b9e0"} Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.579038 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3","Type":"ContainerStarted","Data":"68f9a7c57a57bc626df73a55b3b132d23406fdde9fb59edfddf2916ed8267ce1"} Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.579053 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3","Type":"ContainerStarted","Data":"6b304190987fd78638aba9bd50068450f67f55bb60404c4f84b71298ed4ba159"} Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.579062 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6ee6071-8297-4e7e-9c1c-c16b9c7b2ec3","Type":"ContainerStarted","Data":"2411d59afecde4afd12f925631b5e0232bf0fe3afe547e2f272740a2a86cf377"} Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.591276 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4wvpc" Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.591694 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4wvpc" event={"ID":"d039d81e-cd53-46e4-af64-12e2662c78ba","Type":"ContainerDied","Data":"7f73803135f7452a402df70f947b68b74a5a36bb8435bcc226152a1273b489d6"} Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.591744 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f73803135f7452a402df70f947b68b74a5a36bb8435bcc226152a1273b489d6" Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.617383 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-q2zbz" Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.691346 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=43.671544843 podStartE2EDuration="50.691317737s" podCreationTimestamp="2025-11-23 06:57:48 +0000 UTC" firstStartedPulling="2025-11-23 06:58:29.462553592 +0000 UTC m=+846.532062829" lastFinishedPulling="2025-11-23 06:58:36.482326487 +0000 UTC m=+853.551835723" observedRunningTime="2025-11-23 06:58:38.654808253 +0000 UTC m=+855.724317490" watchObservedRunningTime="2025-11-23 06:58:38.691317737 +0000 UTC m=+855.760826974" Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.697758 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5ml9\" (UniqueName: \"kubernetes.io/projected/b6c8c95d-15d6-4b0c-bed1-b49e147f5af9-kube-api-access-q5ml9\") pod \"b6c8c95d-15d6-4b0c-bed1-b49e147f5af9\" (UID: \"b6c8c95d-15d6-4b0c-bed1-b49e147f5af9\") " Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.697960 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6c8c95d-15d6-4b0c-bed1-b49e147f5af9-operator-scripts\") pod \"b6c8c95d-15d6-4b0c-bed1-b49e147f5af9\" (UID: \"b6c8c95d-15d6-4b0c-bed1-b49e147f5af9\") " Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.702175 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c8c95d-15d6-4b0c-bed1-b49e147f5af9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b6c8c95d-15d6-4b0c-bed1-b49e147f5af9" (UID: "b6c8c95d-15d6-4b0c-bed1-b49e147f5af9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.710620 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6c8c95d-15d6-4b0c-bed1-b49e147f5af9-kube-api-access-q5ml9" (OuterVolumeSpecName: "kube-api-access-q5ml9") pod "b6c8c95d-15d6-4b0c-bed1-b49e147f5af9" (UID: "b6c8c95d-15d6-4b0c-bed1-b49e147f5af9"). InnerVolumeSpecName "kube-api-access-q5ml9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.801070 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5ml9\" (UniqueName: \"kubernetes.io/projected/b6c8c95d-15d6-4b0c-bed1-b49e147f5af9-kube-api-access-q5ml9\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:38 crc kubenswrapper[4681]: I1123 06:58:38.801108 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6c8c95d-15d6-4b0c-bed1-b49e147f5af9-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.001233 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89d747df-57hzx"] Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.037284 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-648ff47655-tp296"] Nov 23 06:58:39 crc kubenswrapper[4681]: E1123 06:58:39.037802 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d039d81e-cd53-46e4-af64-12e2662c78ba" containerName="mariadb-database-create" Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.037820 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d039d81e-cd53-46e4-af64-12e2662c78ba" containerName="mariadb-database-create" Nov 23 06:58:39 crc kubenswrapper[4681]: E1123 06:58:39.037852 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6c8c95d-15d6-4b0c-bed1-b49e147f5af9" containerName="mariadb-database-create" Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.037859 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6c8c95d-15d6-4b0c-bed1-b49e147f5af9" containerName="mariadb-database-create" Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.038012 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d039d81e-cd53-46e4-af64-12e2662c78ba" containerName="mariadb-database-create" Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.038024 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6c8c95d-15d6-4b0c-bed1-b49e147f5af9" containerName="mariadb-database-create" Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.038954 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-648ff47655-tp296"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.041186 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.069896 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-648ff47655-tp296"]
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.123636 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b277k\" (UniqueName: \"kubernetes.io/projected/e22560a2-a6bf-4b36-ad91-e076ad9d5af1-kube-api-access-b277k\") pod \"dnsmasq-dns-648ff47655-tp296\" (UID: \"e22560a2-a6bf-4b36-ad91-e076ad9d5af1\") " pod="openstack/dnsmasq-dns-648ff47655-tp296"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.123686 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e22560a2-a6bf-4b36-ad91-e076ad9d5af1-ovsdbserver-nb\") pod \"dnsmasq-dns-648ff47655-tp296\" (UID: \"e22560a2-a6bf-4b36-ad91-e076ad9d5af1\") " pod="openstack/dnsmasq-dns-648ff47655-tp296"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.123744 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e22560a2-a6bf-4b36-ad91-e076ad9d5af1-dns-svc\") pod \"dnsmasq-dns-648ff47655-tp296\" (UID: \"e22560a2-a6bf-4b36-ad91-e076ad9d5af1\") " pod="openstack/dnsmasq-dns-648ff47655-tp296"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.123770 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e22560a2-a6bf-4b36-ad91-e076ad9d5af1-config\") pod \"dnsmasq-dns-648ff47655-tp296\" (UID: \"e22560a2-a6bf-4b36-ad91-e076ad9d5af1\") " pod="openstack/dnsmasq-dns-648ff47655-tp296"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.123853 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e22560a2-a6bf-4b36-ad91-e076ad9d5af1-dns-swift-storage-0\") pod \"dnsmasq-dns-648ff47655-tp296\" (UID: \"e22560a2-a6bf-4b36-ad91-e076ad9d5af1\") " pod="openstack/dnsmasq-dns-648ff47655-tp296"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.123911 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e22560a2-a6bf-4b36-ad91-e076ad9d5af1-ovsdbserver-sb\") pod \"dnsmasq-dns-648ff47655-tp296\" (UID: \"e22560a2-a6bf-4b36-ad91-e076ad9d5af1\") " pod="openstack/dnsmasq-dns-648ff47655-tp296"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.215585 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-340e-account-create-5vbwn"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.225648 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e22560a2-a6bf-4b36-ad91-e076ad9d5af1-dns-svc\") pod \"dnsmasq-dns-648ff47655-tp296\" (UID: \"e22560a2-a6bf-4b36-ad91-e076ad9d5af1\") " pod="openstack/dnsmasq-dns-648ff47655-tp296"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.225854 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e22560a2-a6bf-4b36-ad91-e076ad9d5af1-config\") pod \"dnsmasq-dns-648ff47655-tp296\" (UID: \"e22560a2-a6bf-4b36-ad91-e076ad9d5af1\") " pod="openstack/dnsmasq-dns-648ff47655-tp296"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.225974 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e22560a2-a6bf-4b36-ad91-e076ad9d5af1-dns-swift-storage-0\") pod \"dnsmasq-dns-648ff47655-tp296\" (UID: \"e22560a2-a6bf-4b36-ad91-e076ad9d5af1\") " pod="openstack/dnsmasq-dns-648ff47655-tp296"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.226056 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e22560a2-a6bf-4b36-ad91-e076ad9d5af1-ovsdbserver-sb\") pod \"dnsmasq-dns-648ff47655-tp296\" (UID: \"e22560a2-a6bf-4b36-ad91-e076ad9d5af1\") " pod="openstack/dnsmasq-dns-648ff47655-tp296"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.226326 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b277k\" (UniqueName: \"kubernetes.io/projected/e22560a2-a6bf-4b36-ad91-e076ad9d5af1-kube-api-access-b277k\") pod \"dnsmasq-dns-648ff47655-tp296\" (UID: \"e22560a2-a6bf-4b36-ad91-e076ad9d5af1\") " pod="openstack/dnsmasq-dns-648ff47655-tp296"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.226409 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e22560a2-a6bf-4b36-ad91-e076ad9d5af1-ovsdbserver-nb\") pod \"dnsmasq-dns-648ff47655-tp296\" (UID: \"e22560a2-a6bf-4b36-ad91-e076ad9d5af1\") " pod="openstack/dnsmasq-dns-648ff47655-tp296"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.226660 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e22560a2-a6bf-4b36-ad91-e076ad9d5af1-dns-svc\") pod \"dnsmasq-dns-648ff47655-tp296\" (UID: \"e22560a2-a6bf-4b36-ad91-e076ad9d5af1\") " pod="openstack/dnsmasq-dns-648ff47655-tp296"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.227813 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e22560a2-a6bf-4b36-ad91-e076ad9d5af1-ovsdbserver-nb\") pod \"dnsmasq-dns-648ff47655-tp296\" (UID: \"e22560a2-a6bf-4b36-ad91-e076ad9d5af1\") " pod="openstack/dnsmasq-dns-648ff47655-tp296"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.228522 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e22560a2-a6bf-4b36-ad91-e076ad9d5af1-dns-swift-storage-0\") pod \"dnsmasq-dns-648ff47655-tp296\" (UID: \"e22560a2-a6bf-4b36-ad91-e076ad9d5af1\") " pod="openstack/dnsmasq-dns-648ff47655-tp296"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.229340 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e22560a2-a6bf-4b36-ad91-e076ad9d5af1-config\") pod \"dnsmasq-dns-648ff47655-tp296\" (UID: \"e22560a2-a6bf-4b36-ad91-e076ad9d5af1\") " pod="openstack/dnsmasq-dns-648ff47655-tp296"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.229950 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e22560a2-a6bf-4b36-ad91-e076ad9d5af1-ovsdbserver-sb\") pod \"dnsmasq-dns-648ff47655-tp296\" (UID: \"e22560a2-a6bf-4b36-ad91-e076ad9d5af1\") " pod="openstack/dnsmasq-dns-648ff47655-tp296"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.249388 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b277k\" (UniqueName: \"kubernetes.io/projected/e22560a2-a6bf-4b36-ad91-e076ad9d5af1-kube-api-access-b277k\") pod \"dnsmasq-dns-648ff47655-tp296\" (UID: \"e22560a2-a6bf-4b36-ad91-e076ad9d5af1\") " pod="openstack/dnsmasq-dns-648ff47655-tp296"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.330130 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtrbs\" (UniqueName: \"kubernetes.io/projected/a2cbf1de-321e-4495-b9ab-b2e4c9758321-kube-api-access-jtrbs\") pod \"a2cbf1de-321e-4495-b9ab-b2e4c9758321\" (UID: \"a2cbf1de-321e-4495-b9ab-b2e4c9758321\") "
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.330756 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2cbf1de-321e-4495-b9ab-b2e4c9758321-operator-scripts\") pod \"a2cbf1de-321e-4495-b9ab-b2e4c9758321\" (UID: \"a2cbf1de-321e-4495-b9ab-b2e4c9758321\") "
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.331169 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2cbf1de-321e-4495-b9ab-b2e4c9758321-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a2cbf1de-321e-4495-b9ab-b2e4c9758321" (UID: "a2cbf1de-321e-4495-b9ab-b2e4c9758321"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.332282 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2cbf1de-321e-4495-b9ab-b2e4c9758321-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.334007 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2cbf1de-321e-4495-b9ab-b2e4c9758321-kube-api-access-jtrbs" (OuterVolumeSpecName: "kube-api-access-jtrbs") pod "a2cbf1de-321e-4495-b9ab-b2e4c9758321" (UID: "a2cbf1de-321e-4495-b9ab-b2e4c9758321"). InnerVolumeSpecName "kube-api-access-jtrbs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.409876 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-648ff47655-tp296"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.438873 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtrbs\" (UniqueName: \"kubernetes.io/projected/a2cbf1de-321e-4495-b9ab-b2e4c9758321-kube-api-access-jtrbs\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.456362 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-afdb-account-create-p4xln"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.468251 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-773a-account-create-4dks9"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.472960 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-j55vj"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.479290 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-4xttv"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.497252 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a3aa-account-create-bwsrj"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.540179 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptrt6\" (UniqueName: \"kubernetes.io/projected/da18abb7-6233-4690-acad-41137f3ba686-kube-api-access-ptrt6\") pod \"da18abb7-6233-4690-acad-41137f3ba686\" (UID: \"da18abb7-6233-4690-acad-41137f3ba686\") "
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.540282 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjxsg\" (UniqueName: \"kubernetes.io/projected/8ffee548-f423-43f1-955e-4017e65eb1b4-kube-api-access-cjxsg\") pod \"8ffee548-f423-43f1-955e-4017e65eb1b4\" (UID: \"8ffee548-f423-43f1-955e-4017e65eb1b4\") "
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.540303 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qp47\" (UniqueName: \"kubernetes.io/projected/d1d9ea06-43fe-41a8-b588-178c01182a70-kube-api-access-5qp47\") pod \"d1d9ea06-43fe-41a8-b588-178c01182a70\" (UID: \"d1d9ea06-43fe-41a8-b588-178c01182a70\") "
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.540393 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1d9ea06-43fe-41a8-b588-178c01182a70-operator-scripts\") pod \"d1d9ea06-43fe-41a8-b588-178c01182a70\" (UID: \"d1d9ea06-43fe-41a8-b588-178c01182a70\") "
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.540521 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6301e8d9-766f-447e-a721-6fd63dabc5e2-operator-scripts\") pod \"6301e8d9-766f-447e-a721-6fd63dabc5e2\" (UID: \"6301e8d9-766f-447e-a721-6fd63dabc5e2\") "
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.540570 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ffee548-f423-43f1-955e-4017e65eb1b4-operator-scripts\") pod \"8ffee548-f423-43f1-955e-4017e65eb1b4\" (UID: \"8ffee548-f423-43f1-955e-4017e65eb1b4\") "
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.540600 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fp697\" (UniqueName: \"kubernetes.io/projected/6301e8d9-766f-447e-a721-6fd63dabc5e2-kube-api-access-fp697\") pod \"6301e8d9-766f-447e-a721-6fd63dabc5e2\" (UID: \"6301e8d9-766f-447e-a721-6fd63dabc5e2\") "
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.540632 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da18abb7-6233-4690-acad-41137f3ba686-operator-scripts\") pod \"da18abb7-6233-4690-acad-41137f3ba686\" (UID: \"da18abb7-6233-4690-acad-41137f3ba686\") "
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.541494 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1d9ea06-43fe-41a8-b588-178c01182a70-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d1d9ea06-43fe-41a8-b588-178c01182a70" (UID: "d1d9ea06-43fe-41a8-b588-178c01182a70"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.541621 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6301e8d9-766f-447e-a721-6fd63dabc5e2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6301e8d9-766f-447e-a721-6fd63dabc5e2" (UID: "6301e8d9-766f-447e-a721-6fd63dabc5e2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.541630 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da18abb7-6233-4690-acad-41137f3ba686-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "da18abb7-6233-4690-acad-41137f3ba686" (UID: "da18abb7-6233-4690-acad-41137f3ba686"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.541740 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ffee548-f423-43f1-955e-4017e65eb1b4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8ffee548-f423-43f1-955e-4017e65eb1b4" (UID: "8ffee548-f423-43f1-955e-4017e65eb1b4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.545632 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ffee548-f423-43f1-955e-4017e65eb1b4-kube-api-access-cjxsg" (OuterVolumeSpecName: "kube-api-access-cjxsg") pod "8ffee548-f423-43f1-955e-4017e65eb1b4" (UID: "8ffee548-f423-43f1-955e-4017e65eb1b4"). InnerVolumeSpecName "kube-api-access-cjxsg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.545809 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da18abb7-6233-4690-acad-41137f3ba686-kube-api-access-ptrt6" (OuterVolumeSpecName: "kube-api-access-ptrt6") pod "da18abb7-6233-4690-acad-41137f3ba686" (UID: "da18abb7-6233-4690-acad-41137f3ba686"). InnerVolumeSpecName "kube-api-access-ptrt6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.546477 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6301e8d9-766f-447e-a721-6fd63dabc5e2-kube-api-access-fp697" (OuterVolumeSpecName: "kube-api-access-fp697") pod "6301e8d9-766f-447e-a721-6fd63dabc5e2" (UID: "6301e8d9-766f-447e-a721-6fd63dabc5e2"). InnerVolumeSpecName "kube-api-access-fp697". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.549894 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1d9ea06-43fe-41a8-b588-178c01182a70-kube-api-access-5qp47" (OuterVolumeSpecName: "kube-api-access-5qp47") pod "d1d9ea06-43fe-41a8-b588-178c01182a70" (UID: "d1d9ea06-43fe-41a8-b588-178c01182a70"). InnerVolumeSpecName "kube-api-access-5qp47". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.615725 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-773a-account-create-4dks9" event={"ID":"d1d9ea06-43fe-41a8-b588-178c01182a70","Type":"ContainerDied","Data":"a3db5eac393be328f74130c99fe75f760a2ba783e769e2bf079a84dda3da3ab2"}
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.615785 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3db5eac393be328f74130c99fe75f760a2ba783e769e2bf079a84dda3da3ab2"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.615862 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-773a-account-create-4dks9"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.622767 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89d747df-57hzx"]
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.625589 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a3aa-account-create-bwsrj" event={"ID":"6234ce9d-9669-4dbf-957d-7bfd7158639b","Type":"ContainerDied","Data":"20ef50971b97693102f03345f4745c45d461784d1e25789d7e69b4b2e513e039"}
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.625630 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20ef50971b97693102f03345f4745c45d461784d1e25789d7e69b4b2e513e039"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.625688 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a3aa-account-create-bwsrj"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.628024 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-340e-account-create-5vbwn"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.628108 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-340e-account-create-5vbwn" event={"ID":"a2cbf1de-321e-4495-b9ab-b2e4c9758321","Type":"ContainerDied","Data":"92e7c4c27cba440b6106d35ab0fd75980b51e22231f54556d06780bfe5d0fce2"}
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.628132 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92e7c4c27cba440b6106d35ab0fd75980b51e22231f54556d06780bfe5d0fce2"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.629845 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-4xttv" event={"ID":"8ffee548-f423-43f1-955e-4017e65eb1b4","Type":"ContainerDied","Data":"08ec8c32b1c80eef440c5c0178856eeb3c321ecc6a57a0314148d0accaee99af"}
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.629874 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08ec8c32b1c80eef440c5c0178856eeb3c321ecc6a57a0314148d0accaee99af"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.629853 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-4xttv"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.631306 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-j55vj" event={"ID":"da18abb7-6233-4690-acad-41137f3ba686","Type":"ContainerDied","Data":"8ccc87541d3c15a2b9bcd8b36f9843d84e488bfc33e174909cdd801ea0332f46"}
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.631327 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ccc87541d3c15a2b9bcd8b36f9843d84e488bfc33e174909cdd801ea0332f46"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.631371 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-j55vj"
Nov 23 06:58:39 crc kubenswrapper[4681]: W1123 06:58:39.642742 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9518d64_11d8_4322_a18a_06ba9c7c2824.slice/crio-4eb508fb4a315445ce5387acb70e7104e990c023a0203fd979349e3453ff0f2a WatchSource:0}: Error finding container 4eb508fb4a315445ce5387acb70e7104e990c023a0203fd979349e3453ff0f2a: Status 404 returned error can't find the container with id 4eb508fb4a315445ce5387acb70e7104e990c023a0203fd979349e3453ff0f2a
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.646769 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmxp7\" (UniqueName: \"kubernetes.io/projected/6234ce9d-9669-4dbf-957d-7bfd7158639b-kube-api-access-vmxp7\") pod \"6234ce9d-9669-4dbf-957d-7bfd7158639b\" (UID: \"6234ce9d-9669-4dbf-957d-7bfd7158639b\") "
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.646841 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6234ce9d-9669-4dbf-957d-7bfd7158639b-operator-scripts\") pod \"6234ce9d-9669-4dbf-957d-7bfd7158639b\" (UID: \"6234ce9d-9669-4dbf-957d-7bfd7158639b\") "
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.647292 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6301e8d9-766f-447e-a721-6fd63dabc5e2-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.647309 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ffee548-f423-43f1-955e-4017e65eb1b4-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.647317 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fp697\" (UniqueName: \"kubernetes.io/projected/6301e8d9-766f-447e-a721-6fd63dabc5e2-kube-api-access-fp697\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.647326 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da18abb7-6233-4690-acad-41137f3ba686-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.647334 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptrt6\" (UniqueName: \"kubernetes.io/projected/da18abb7-6233-4690-acad-41137f3ba686-kube-api-access-ptrt6\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.647343 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjxsg\" (UniqueName: \"kubernetes.io/projected/8ffee548-f423-43f1-955e-4017e65eb1b4-kube-api-access-cjxsg\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.647353 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qp47\" (UniqueName: \"kubernetes.io/projected/d1d9ea06-43fe-41a8-b588-178c01182a70-kube-api-access-5qp47\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.647361 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1d9ea06-43fe-41a8-b588-178c01182a70-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.649761 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-q2zbz"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.649829 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-q2zbz" event={"ID":"b6c8c95d-15d6-4b0c-bed1-b49e147f5af9","Type":"ContainerDied","Data":"cf8a1ad23afdca2b770bfb70bb8f320660a362a2a41616317903f45e02af51c3"}
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.649876 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf8a1ad23afdca2b770bfb70bb8f320660a362a2a41616317903f45e02af51c3"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.650500 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6234ce9d-9669-4dbf-957d-7bfd7158639b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6234ce9d-9669-4dbf-957d-7bfd7158639b" (UID: "6234ce9d-9669-4dbf-957d-7bfd7158639b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.654189 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-afdb-account-create-p4xln"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.655407 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6234ce9d-9669-4dbf-957d-7bfd7158639b-kube-api-access-vmxp7" (OuterVolumeSpecName: "kube-api-access-vmxp7") pod "6234ce9d-9669-4dbf-957d-7bfd7158639b" (UID: "6234ce9d-9669-4dbf-957d-7bfd7158639b"). InnerVolumeSpecName "kube-api-access-vmxp7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.656067 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-afdb-account-create-p4xln" event={"ID":"6301e8d9-766f-447e-a721-6fd63dabc5e2","Type":"ContainerDied","Data":"06db2686fccd700d7fc11087bacd03e3782559cba9699c8ea3840c8d87d84472"}
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.656109 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06db2686fccd700d7fc11087bacd03e3782559cba9699c8ea3840c8d87d84472"
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.750130 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmxp7\" (UniqueName: \"kubernetes.io/projected/6234ce9d-9669-4dbf-957d-7bfd7158639b-kube-api-access-vmxp7\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.750426 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6234ce9d-9669-4dbf-957d-7bfd7158639b-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:39 crc kubenswrapper[4681]: I1123 06:58:39.951583 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-648ff47655-tp296"]
Nov 23 06:58:39 crc kubenswrapper[4681]: W1123 06:58:39.965625 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode22560a2_a6bf_4b36_ad91_e076ad9d5af1.slice/crio-067e53d3901c6f6122d0dc027b2941605cd9c0afd6a91d2503d0162f257766ef WatchSource:0}: Error finding container 067e53d3901c6f6122d0dc027b2941605cd9c0afd6a91d2503d0162f257766ef: Status 404 returned error can't find the container with id 067e53d3901c6f6122d0dc027b2941605cd9c0afd6a91d2503d0162f257766ef
Nov 23 06:58:40 crc kubenswrapper[4681]: I1123 06:58:40.665442 4681 generic.go:334] "Generic (PLEG): container finished" podID="e22560a2-a6bf-4b36-ad91-e076ad9d5af1" containerID="23165cff6a447f50443741e56c43ab4dedf68912a1d4e2bec0e2d3b0c2510dd7" exitCode=0
Nov 23 06:58:40 crc kubenswrapper[4681]: I1123 06:58:40.665572 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-648ff47655-tp296" event={"ID":"e22560a2-a6bf-4b36-ad91-e076ad9d5af1","Type":"ContainerDied","Data":"23165cff6a447f50443741e56c43ab4dedf68912a1d4e2bec0e2d3b0c2510dd7"}
Nov 23 06:58:40 crc kubenswrapper[4681]: I1123 06:58:40.665613 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-648ff47655-tp296" event={"ID":"e22560a2-a6bf-4b36-ad91-e076ad9d5af1","Type":"ContainerStarted","Data":"067e53d3901c6f6122d0dc027b2941605cd9c0afd6a91d2503d0162f257766ef"}
Nov 23 06:58:40 crc kubenswrapper[4681]: I1123 06:58:40.668769 4681 generic.go:334] "Generic (PLEG): container finished" podID="f9518d64-11d8-4322-a18a-06ba9c7c2824" containerID="08a35ee79b05213f1416b333497c5a1ef5bfe516965b49957a48844fbac90304" exitCode=0
Nov 23 06:58:40 crc kubenswrapper[4681]: I1123 06:58:40.668835 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89d747df-57hzx" event={"ID":"f9518d64-11d8-4322-a18a-06ba9c7c2824","Type":"ContainerDied","Data":"08a35ee79b05213f1416b333497c5a1ef5bfe516965b49957a48844fbac90304"}
Nov 23 06:58:40 crc kubenswrapper[4681]: I1123 06:58:40.668877 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89d747df-57hzx" event={"ID":"f9518d64-11d8-4322-a18a-06ba9c7c2824","Type":"ContainerStarted","Data":"4eb508fb4a315445ce5387acb70e7104e990c023a0203fd979349e3453ff0f2a"}
Nov 23 06:58:43 crc kubenswrapper[4681]: I1123 06:58:43.718307 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89d747df-57hzx" event={"ID":"f9518d64-11d8-4322-a18a-06ba9c7c2824","Type":"ContainerDied","Data":"4eb508fb4a315445ce5387acb70e7104e990c023a0203fd979349e3453ff0f2a"}
Nov 23 06:58:43 crc kubenswrapper[4681]: I1123 06:58:43.719483 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4eb508fb4a315445ce5387acb70e7104e990c023a0203fd979349e3453ff0f2a"
Nov 23 06:58:43 crc kubenswrapper[4681]: I1123 06:58:43.750590 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89d747df-57hzx"
Nov 23 06:58:43 crc kubenswrapper[4681]: I1123 06:58:43.844680 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9518d64-11d8-4322-a18a-06ba9c7c2824-dns-svc\") pod \"f9518d64-11d8-4322-a18a-06ba9c7c2824\" (UID: \"f9518d64-11d8-4322-a18a-06ba9c7c2824\") "
Nov 23 06:58:43 crc kubenswrapper[4681]: I1123 06:58:43.844751 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9518d64-11d8-4322-a18a-06ba9c7c2824-config\") pod \"f9518d64-11d8-4322-a18a-06ba9c7c2824\" (UID: \"f9518d64-11d8-4322-a18a-06ba9c7c2824\") "
Nov 23 06:58:43 crc kubenswrapper[4681]: I1123 06:58:43.844791 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9518d64-11d8-4322-a18a-06ba9c7c2824-ovsdbserver-sb\") pod \"f9518d64-11d8-4322-a18a-06ba9c7c2824\" (UID: \"f9518d64-11d8-4322-a18a-06ba9c7c2824\") "
Nov 23 06:58:43 crc kubenswrapper[4681]: I1123 06:58:43.844814 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9518d64-11d8-4322-a18a-06ba9c7c2824-ovsdbserver-nb\") pod \"f9518d64-11d8-4322-a18a-06ba9c7c2824\" (UID: \"f9518d64-11d8-4322-a18a-06ba9c7c2824\") "
Nov 23 06:58:43 crc kubenswrapper[4681]: I1123 06:58:43.844839 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khxkh\" (UniqueName: \"kubernetes.io/projected/f9518d64-11d8-4322-a18a-06ba9c7c2824-kube-api-access-khxkh\") pod \"f9518d64-11d8-4322-a18a-06ba9c7c2824\" (UID: \"f9518d64-11d8-4322-a18a-06ba9c7c2824\") "
Nov 23 06:58:43 crc kubenswrapper[4681]: I1123 06:58:43.854767 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9518d64-11d8-4322-a18a-06ba9c7c2824-kube-api-access-khxkh" (OuterVolumeSpecName: "kube-api-access-khxkh") pod "f9518d64-11d8-4322-a18a-06ba9c7c2824" (UID: "f9518d64-11d8-4322-a18a-06ba9c7c2824"). InnerVolumeSpecName "kube-api-access-khxkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:58:43 crc kubenswrapper[4681]: I1123 06:58:43.865404 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9518d64-11d8-4322-a18a-06ba9c7c2824-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f9518d64-11d8-4322-a18a-06ba9c7c2824" (UID: "f9518d64-11d8-4322-a18a-06ba9c7c2824"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:58:43 crc kubenswrapper[4681]: I1123 06:58:43.865725 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9518d64-11d8-4322-a18a-06ba9c7c2824-config" (OuterVolumeSpecName: "config") pod "f9518d64-11d8-4322-a18a-06ba9c7c2824" (UID: "f9518d64-11d8-4322-a18a-06ba9c7c2824"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:58:43 crc kubenswrapper[4681]: I1123 06:58:43.865751 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9518d64-11d8-4322-a18a-06ba9c7c2824-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f9518d64-11d8-4322-a18a-06ba9c7c2824" (UID: "f9518d64-11d8-4322-a18a-06ba9c7c2824"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:58:43 crc kubenswrapper[4681]: I1123 06:58:43.867203 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9518d64-11d8-4322-a18a-06ba9c7c2824-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f9518d64-11d8-4322-a18a-06ba9c7c2824" (UID: "f9518d64-11d8-4322-a18a-06ba9c7c2824"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:58:43 crc kubenswrapper[4681]: I1123 06:58:43.948019 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9518d64-11d8-4322-a18a-06ba9c7c2824-config\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:43 crc kubenswrapper[4681]: I1123 06:58:43.948053 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9518d64-11d8-4322-a18a-06ba9c7c2824-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:43 crc kubenswrapper[4681]: I1123 06:58:43.948067 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9518d64-11d8-4322-a18a-06ba9c7c2824-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:43 crc kubenswrapper[4681]: I1123 06:58:43.948078 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khxkh\" (UniqueName: \"kubernetes.io/projected/f9518d64-11d8-4322-a18a-06ba9c7c2824-kube-api-access-khxkh\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:43 crc kubenswrapper[4681]: I1123 06:58:43.948089 4681 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9518d64-11d8-4322-a18a-06ba9c7c2824-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:44 crc kubenswrapper[4681]: I1123 06:58:44.731088 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-648ff47655-tp296" event={"ID":"e22560a2-a6bf-4b36-ad91-e076ad9d5af1","Type":"ContainerStarted","Data":"93782f4645aac2cf7b816ac19900d25440eccc9fa393558a84475dac62878b9c"}
Nov 23 06:58:44 crc kubenswrapper[4681]: I1123 06:58:44.731243 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-648ff47655-tp296"
Nov 23 06:58:44 crc kubenswrapper[4681]: I1123 06:58:44.733686 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4hbr9" event={"ID":"7cdb5691-8434-40bd-9103-1ebee6a25d76","Type":"ContainerStarted","Data":"700143e51614cc36002f36973047a5a76880f5c51daf61a8817aff73f0aaa8b6"}
Nov 23 06:58:44 crc kubenswrapper[4681]: I1123 06:58:44.733732 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89d747df-57hzx"
Nov 23 06:58:44 crc kubenswrapper[4681]: I1123 06:58:44.757536 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-648ff47655-tp296" podStartSLOduration=5.757516484 podStartE2EDuration="5.757516484s" podCreationTimestamp="2025-11-23 06:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:58:44.750249841 +0000 UTC m=+861.819759078" watchObservedRunningTime="2025-11-23 06:58:44.757516484 +0000 UTC m=+861.827025721"
Nov 23 06:58:44 crc kubenswrapper[4681]: I1123 06:58:44.794939 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-4hbr9" podStartSLOduration=2.6522102910000003 podStartE2EDuration="9.79491854s" podCreationTimestamp="2025-11-23 06:58:35 +0000 UTC" firstStartedPulling="2025-11-23 06:58:36.490937753 +0000 UTC m=+853.560446989" lastFinishedPulling="2025-11-23 06:58:43.633646011 +0000 UTC m=+860.703155238" observedRunningTime="2025-11-23 06:58:44.776091665 +0000 UTC m=+861.845600902" watchObservedRunningTime="2025-11-23 06:58:44.79491854 +0000 UTC m=+861.864427777"
Nov 23 06:58:44 crc kubenswrapper[4681]: I1123 06:58:44.809251 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89d747df-57hzx"]
Nov 23 06:58:44 crc kubenswrapper[4681]: I1123 06:58:44.814671 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89d747df-57hzx"]
Nov 23 06:58:45 crc kubenswrapper[4681]: I1123 06:58:45.262042 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9518d64-11d8-4322-a18a-06ba9c7c2824" path="/var/lib/kubelet/pods/f9518d64-11d8-4322-a18a-06ba9c7c2824/volumes"
Nov 23 06:58:45 crc kubenswrapper[4681]: I1123 06:58:45.746783 4681 generic.go:334] "Generic (PLEG): container finished" podID="7cdb5691-8434-40bd-9103-1ebee6a25d76" containerID="700143e51614cc36002f36973047a5a76880f5c51daf61a8817aff73f0aaa8b6" exitCode=0
Nov 23 06:58:45 crc kubenswrapper[4681]: I1123 06:58:45.746948 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4hbr9" event={"ID":"7cdb5691-8434-40bd-9103-1ebee6a25d76","Type":"ContainerDied","Data":"700143e51614cc36002f36973047a5a76880f5c51daf61a8817aff73f0aaa8b6"}
Nov 23 06:58:47 crc kubenswrapper[4681]: I1123 06:58:47.058660 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4hbr9"
Nov 23 06:58:47 crc kubenswrapper[4681]: I1123 06:58:47.207844 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjjnb\" (UniqueName: \"kubernetes.io/projected/7cdb5691-8434-40bd-9103-1ebee6a25d76-kube-api-access-mjjnb\") pod \"7cdb5691-8434-40bd-9103-1ebee6a25d76\" (UID: \"7cdb5691-8434-40bd-9103-1ebee6a25d76\") "
Nov 23 06:58:47 crc kubenswrapper[4681]: I1123 06:58:47.207945 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cdb5691-8434-40bd-9103-1ebee6a25d76-combined-ca-bundle\") pod \"7cdb5691-8434-40bd-9103-1ebee6a25d76\" (UID: \"7cdb5691-8434-40bd-9103-1ebee6a25d76\") "
Nov 23 06:58:47 crc kubenswrapper[4681]: I1123 06:58:47.208081 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cdb5691-8434-40bd-9103-1ebee6a25d76-config-data\") pod \"7cdb5691-8434-40bd-9103-1ebee6a25d76\" (UID: \"7cdb5691-8434-40bd-9103-1ebee6a25d76\") "
Nov 23 06:58:47 crc kubenswrapper[4681]: I1123 06:58:47.215181 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cdb5691-8434-40bd-9103-1ebee6a25d76-kube-api-access-mjjnb" (OuterVolumeSpecName: "kube-api-access-mjjnb") pod "7cdb5691-8434-40bd-9103-1ebee6a25d76" (UID: "7cdb5691-8434-40bd-9103-1ebee6a25d76"). InnerVolumeSpecName "kube-api-access-mjjnb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:58:47 crc kubenswrapper[4681]: I1123 06:58:47.233321 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cdb5691-8434-40bd-9103-1ebee6a25d76-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cdb5691-8434-40bd-9103-1ebee6a25d76" (UID: "7cdb5691-8434-40bd-9103-1ebee6a25d76"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:58:47 crc kubenswrapper[4681]: I1123 06:58:47.247135 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cdb5691-8434-40bd-9103-1ebee6a25d76-config-data" (OuterVolumeSpecName: "config-data") pod "7cdb5691-8434-40bd-9103-1ebee6a25d76" (UID: "7cdb5691-8434-40bd-9103-1ebee6a25d76"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:58:47 crc kubenswrapper[4681]: I1123 06:58:47.309784 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cdb5691-8434-40bd-9103-1ebee6a25d76-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:47 crc kubenswrapper[4681]: I1123 06:58:47.309812 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjjnb\" (UniqueName: \"kubernetes.io/projected/7cdb5691-8434-40bd-9103-1ebee6a25d76-kube-api-access-mjjnb\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:47 crc kubenswrapper[4681]: I1123 06:58:47.309824 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cdb5691-8434-40bd-9103-1ebee6a25d76-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 06:58:47 crc kubenswrapper[4681]: I1123 06:58:47.773605 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4hbr9" event={"ID":"7cdb5691-8434-40bd-9103-1ebee6a25d76","Type":"ContainerDied","Data":"0082545b578e996bd33fbe2177849140cc34e802006b08cebef573c913b55267"}
Nov 23 06:58:47 crc kubenswrapper[4681]: I1123 06:58:47.773821 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0082545b578e996bd33fbe2177849140cc34e802006b08cebef573c913b55267"
Nov 23 06:58:47 crc kubenswrapper[4681]: I1123 06:58:47.773686 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4hbr9"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.337689 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-648ff47655-tp296"]
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.338183 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-648ff47655-tp296" podUID="e22560a2-a6bf-4b36-ad91-e076ad9d5af1" containerName="dnsmasq-dns" containerID="cri-o://93782f4645aac2cf7b816ac19900d25440eccc9fa393558a84475dac62878b9c" gracePeriod=10
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.340652 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-648ff47655-tp296"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.393329 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-gq2qv"]
Nov 23 06:58:48 crc kubenswrapper[4681]: E1123 06:58:48.393739 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ffee548-f423-43f1-955e-4017e65eb1b4" containerName="mariadb-database-create"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.393758 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ffee548-f423-43f1-955e-4017e65eb1b4" containerName="mariadb-database-create"
Nov 23 06:58:48 crc kubenswrapper[4681]: E1123 06:58:48.393774 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6301e8d9-766f-447e-a721-6fd63dabc5e2" containerName="mariadb-account-create"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.393781 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="6301e8d9-766f-447e-a721-6fd63dabc5e2" containerName="mariadb-account-create"
Nov 23 06:58:48 crc kubenswrapper[4681]: E1123 06:58:48.393795 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cdb5691-8434-40bd-9103-1ebee6a25d76" containerName="keystone-db-sync"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.393801 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cdb5691-8434-40bd-9103-1ebee6a25d76" containerName="keystone-db-sync"
Nov 23 06:58:48 crc kubenswrapper[4681]: E1123 06:58:48.393813 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2cbf1de-321e-4495-b9ab-b2e4c9758321" containerName="mariadb-account-create"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.393819 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2cbf1de-321e-4495-b9ab-b2e4c9758321" containerName="mariadb-account-create"
Nov 23 06:58:48 crc kubenswrapper[4681]: E1123 06:58:48.393826 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1d9ea06-43fe-41a8-b588-178c01182a70" containerName="mariadb-account-create"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.393831 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1d9ea06-43fe-41a8-b588-178c01182a70" containerName="mariadb-account-create"
Nov 23 06:58:48 crc kubenswrapper[4681]: E1123 06:58:48.393845 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da18abb7-6233-4690-acad-41137f3ba686" containerName="mariadb-database-create"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.393853 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="da18abb7-6233-4690-acad-41137f3ba686" containerName="mariadb-database-create"
Nov 23 06:58:48 crc kubenswrapper[4681]: E1123 06:58:48.393871 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9518d64-11d8-4322-a18a-06ba9c7c2824" containerName="init"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.393876 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9518d64-11d8-4322-a18a-06ba9c7c2824" containerName="init"
Nov 23 06:58:48 crc kubenswrapper[4681]: E1123 06:58:48.393892 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6234ce9d-9669-4dbf-957d-7bfd7158639b" containerName="mariadb-account-create"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.393898 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="6234ce9d-9669-4dbf-957d-7bfd7158639b" containerName="mariadb-account-create"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.394071 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ffee548-f423-43f1-955e-4017e65eb1b4" containerName="mariadb-database-create"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.394085 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="6301e8d9-766f-447e-a721-6fd63dabc5e2" containerName="mariadb-account-create"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.394097 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="6234ce9d-9669-4dbf-957d-7bfd7158639b" containerName="mariadb-account-create"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.394109 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9518d64-11d8-4322-a18a-06ba9c7c2824" containerName="init"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.394116 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cdb5691-8434-40bd-9103-1ebee6a25d76" containerName="keystone-db-sync"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.394124 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2cbf1de-321e-4495-b9ab-b2e4c9758321" containerName="mariadb-account-create"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.394131 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1d9ea06-43fe-41a8-b588-178c01182a70" containerName="mariadb-account-create"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.394141 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="da18abb7-6233-4690-acad-41137f3ba686" containerName="mariadb-database-create"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.394725 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-gq2qv"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.399696 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.401065 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.402633 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.403647 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-k72qg"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.406053 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.460562 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b54fd9f79-p7jnd"]
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.474203 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b54fd9f79-p7jnd"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.492100 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-gq2qv"]
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.526234 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-fbbdq"]
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.527548 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-fbbdq"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.529404 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bca3e48-0181-45c8-a8ba-ae25e4a64db2-ovsdbserver-sb\") pod \"dnsmasq-dns-5b54fd9f79-p7jnd\" (UID: \"4bca3e48-0181-45c8-a8ba-ae25e4a64db2\") " pod="openstack/dnsmasq-dns-5b54fd9f79-p7jnd"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.529453 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c-scripts\") pod \"keystone-bootstrap-gq2qv\" (UID: \"8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c\") " pod="openstack/keystone-bootstrap-gq2qv"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.529509 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwv82\" (UniqueName: \"kubernetes.io/projected/4bca3e48-0181-45c8-a8ba-ae25e4a64db2-kube-api-access-fwv82\") pod \"dnsmasq-dns-5b54fd9f79-p7jnd\" (UID: \"4bca3e48-0181-45c8-a8ba-ae25e4a64db2\") " pod="openstack/dnsmasq-dns-5b54fd9f79-p7jnd"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.529559 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bca3e48-0181-45c8-a8ba-ae25e4a64db2-dns-svc\") pod \"dnsmasq-dns-5b54fd9f79-p7jnd\" (UID: \"4bca3e48-0181-45c8-a8ba-ae25e4a64db2\") " pod="openstack/dnsmasq-dns-5b54fd9f79-p7jnd"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.529579 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c-combined-ca-bundle\") pod \"keystone-bootstrap-gq2qv\" (UID: \"8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c\") " pod="openstack/keystone-bootstrap-gq2qv"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.529644 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr5h9\" (UniqueName: \"kubernetes.io/projected/8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c-kube-api-access-nr5h9\") pod \"keystone-bootstrap-gq2qv\" (UID: \"8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c\") " pod="openstack/keystone-bootstrap-gq2qv"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.529717 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c-credential-keys\") pod \"keystone-bootstrap-gq2qv\" (UID: \"8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c\") " pod="openstack/keystone-bootstrap-gq2qv"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.529757 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bca3e48-0181-45c8-a8ba-ae25e4a64db2-dns-swift-storage-0\") pod \"dnsmasq-dns-5b54fd9f79-p7jnd\" (UID: \"4bca3e48-0181-45c8-a8ba-ae25e4a64db2\") " pod="openstack/dnsmasq-dns-5b54fd9f79-p7jnd"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.529778 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bca3e48-0181-45c8-a8ba-ae25e4a64db2-ovsdbserver-nb\") pod \"dnsmasq-dns-5b54fd9f79-p7jnd\" (UID: \"4bca3e48-0181-45c8-a8ba-ae25e4a64db2\") " pod="openstack/dnsmasq-dns-5b54fd9f79-p7jnd"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.529796 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c-fernet-keys\") pod \"keystone-bootstrap-gq2qv\" (UID: \"8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c\") " pod="openstack/keystone-bootstrap-gq2qv"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.529821 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bca3e48-0181-45c8-a8ba-ae25e4a64db2-config\") pod \"dnsmasq-dns-5b54fd9f79-p7jnd\" (UID: \"4bca3e48-0181-45c8-a8ba-ae25e4a64db2\") " pod="openstack/dnsmasq-dns-5b54fd9f79-p7jnd"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.529857 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c-config-data\") pod \"keystone-bootstrap-gq2qv\" (UID: \"8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c\") " pod="openstack/keystone-bootstrap-gq2qv"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.539803 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.540115 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-6jrks"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.540296 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b54fd9f79-p7jnd"]
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.570521 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-fbbdq"]
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.634052 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00916d9f-8ce3-47d9-a32f-e2deb3514ede-config-data\") pod \"heat-db-sync-fbbdq\" (UID: \"00916d9f-8ce3-47d9-a32f-e2deb3514ede\") " pod="openstack/heat-db-sync-fbbdq"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.634346 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c-config-data\") pod \"keystone-bootstrap-gq2qv\" (UID: \"8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c\") " pod="openstack/keystone-bootstrap-gq2qv"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.634473 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bca3e48-0181-45c8-a8ba-ae25e4a64db2-ovsdbserver-sb\") pod \"dnsmasq-dns-5b54fd9f79-p7jnd\" (UID: \"4bca3e48-0181-45c8-a8ba-ae25e4a64db2\") " pod="openstack/dnsmasq-dns-5b54fd9f79-p7jnd"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.634551 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c-scripts\") pod \"keystone-bootstrap-gq2qv\" (UID: \"8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c\") " pod="openstack/keystone-bootstrap-gq2qv"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.634631 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwv82\" (UniqueName: \"kubernetes.io/projected/4bca3e48-0181-45c8-a8ba-ae25e4a64db2-kube-api-access-fwv82\") pod \"dnsmasq-dns-5b54fd9f79-p7jnd\" (UID: \"4bca3e48-0181-45c8-a8ba-ae25e4a64db2\") " pod="openstack/dnsmasq-dns-5b54fd9f79-p7jnd"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.634725 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bca3e48-0181-45c8-a8ba-ae25e4a64db2-dns-svc\") pod \"dnsmasq-dns-5b54fd9f79-p7jnd\" (UID: \"4bca3e48-0181-45c8-a8ba-ae25e4a64db2\") " pod="openstack/dnsmasq-dns-5b54fd9f79-p7jnd"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.634800 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c-combined-ca-bundle\") pod \"keystone-bootstrap-gq2qv\" (UID: \"8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c\") " pod="openstack/keystone-bootstrap-gq2qv"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.634913 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr5h9\" (UniqueName: \"kubernetes.io/projected/8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c-kube-api-access-nr5h9\") pod \"keystone-bootstrap-gq2qv\" (UID: \"8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c\") " pod="openstack/keystone-bootstrap-gq2qv"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.635048 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c-credential-keys\") pod \"keystone-bootstrap-gq2qv\" (UID: \"8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c\") " pod="openstack/keystone-bootstrap-gq2qv"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.635115 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00916d9f-8ce3-47d9-a32f-e2deb3514ede-combined-ca-bundle\") pod \"heat-db-sync-fbbdq\" (UID: \"00916d9f-8ce3-47d9-a32f-e2deb3514ede\") " pod="openstack/heat-db-sync-fbbdq"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.635208 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bca3e48-0181-45c8-a8ba-ae25e4a64db2-dns-swift-storage-0\") pod \"dnsmasq-dns-5b54fd9f79-p7jnd\" (UID: \"4bca3e48-0181-45c8-a8ba-ae25e4a64db2\") " pod="openstack/dnsmasq-dns-5b54fd9f79-p7jnd"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.635278 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bca3e48-0181-45c8-a8ba-ae25e4a64db2-ovsdbserver-nb\") pod \"dnsmasq-dns-5b54fd9f79-p7jnd\" (UID: \"4bca3e48-0181-45c8-a8ba-ae25e4a64db2\") " pod="openstack/dnsmasq-dns-5b54fd9f79-p7jnd"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.635347 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c-fernet-keys\") pod \"keystone-bootstrap-gq2qv\" (UID: \"8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c\") " pod="openstack/keystone-bootstrap-gq2qv"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.635420 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bca3e48-0181-45c8-a8ba-ae25e4a64db2-config\") pod \"dnsmasq-dns-5b54fd9f79-p7jnd\" (UID: \"4bca3e48-0181-45c8-a8ba-ae25e4a64db2\") " pod="openstack/dnsmasq-dns-5b54fd9f79-p7jnd"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.635510 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-426hm\" (UniqueName: \"kubernetes.io/projected/00916d9f-8ce3-47d9-a32f-e2deb3514ede-kube-api-access-426hm\") pod \"heat-db-sync-fbbdq\" (UID: \"00916d9f-8ce3-47d9-a32f-e2deb3514ede\") " pod="openstack/heat-db-sync-fbbdq"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.635568 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bca3e48-0181-45c8-a8ba-ae25e4a64db2-ovsdbserver-sb\") pod \"dnsmasq-dns-5b54fd9f79-p7jnd\" (UID: \"4bca3e48-0181-45c8-a8ba-ae25e4a64db2\") " pod="openstack/dnsmasq-dns-5b54fd9f79-p7jnd"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.635921 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bca3e48-0181-45c8-a8ba-ae25e4a64db2-dns-svc\") pod \"dnsmasq-dns-5b54fd9f79-p7jnd\" (UID: \"4bca3e48-0181-45c8-a8ba-ae25e4a64db2\") " pod="openstack/dnsmasq-dns-5b54fd9f79-p7jnd"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.636288 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bca3e48-0181-45c8-a8ba-ae25e4a64db2-dns-swift-storage-0\") pod \"dnsmasq-dns-5b54fd9f79-p7jnd\" (UID: \"4bca3e48-0181-45c8-a8ba-ae25e4a64db2\") " pod="openstack/dnsmasq-dns-5b54fd9f79-p7jnd"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.642077 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bca3e48-0181-45c8-a8ba-ae25e4a64db2-ovsdbserver-nb\") pod \"dnsmasq-dns-5b54fd9f79-p7jnd\" (UID: \"4bca3e48-0181-45c8-a8ba-ae25e4a64db2\") " pod="openstack/dnsmasq-dns-5b54fd9f79-p7jnd"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.642661 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bca3e48-0181-45c8-a8ba-ae25e4a64db2-config\") pod \"dnsmasq-dns-5b54fd9f79-p7jnd\" (UID: \"4bca3e48-0181-45c8-a8ba-ae25e4a64db2\") " pod="openstack/dnsmasq-dns-5b54fd9f79-p7jnd"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.646493 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c-config-data\") pod \"keystone-bootstrap-gq2qv\" (UID: \"8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c\") " pod="openstack/keystone-bootstrap-gq2qv"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.646808 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c-scripts\") pod \"keystone-bootstrap-gq2qv\" (UID: \"8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c\") " pod="openstack/keystone-bootstrap-gq2qv"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.651938 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c-fernet-keys\") pod \"keystone-bootstrap-gq2qv\" (UID: \"8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c\") " pod="openstack/keystone-bootstrap-gq2qv"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.659951 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c-combined-ca-bundle\") pod \"keystone-bootstrap-gq2qv\" (UID: \"8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c\") " pod="openstack/keystone-bootstrap-gq2qv"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.661357 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c-credential-keys\") pod \"keystone-bootstrap-gq2qv\" (UID: \"8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c\") " pod="openstack/keystone-bootstrap-gq2qv"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.684447 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr5h9\" (UniqueName: \"kubernetes.io/projected/8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c-kube-api-access-nr5h9\") pod \"keystone-bootstrap-gq2qv\" (UID: \"8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c\") " pod="openstack/keystone-bootstrap-gq2qv"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.702135 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwv82\" (UniqueName: \"kubernetes.io/projected/4bca3e48-0181-45c8-a8ba-ae25e4a64db2-kube-api-access-fwv82\") pod \"dnsmasq-dns-5b54fd9f79-p7jnd\" (UID: \"4bca3e48-0181-45c8-a8ba-ae25e4a64db2\") " pod="openstack/dnsmasq-dns-5b54fd9f79-p7jnd"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.708301 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6994f59557-zb5qf"]
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.710849 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6994f59557-zb5qf"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.716308 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-gq2qv"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.727994 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.728291 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.728420 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.728548 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-vd47h"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.741600 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00916d9f-8ce3-47d9-a32f-e2deb3514ede-combined-ca-bundle\") pod \"heat-db-sync-fbbdq\" (UID: \"00916d9f-8ce3-47d9-a32f-e2deb3514ede\") " pod="openstack/heat-db-sync-fbbdq"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.741658 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-426hm\" (UniqueName: \"kubernetes.io/projected/00916d9f-8ce3-47d9-a32f-e2deb3514ede-kube-api-access-426hm\") pod \"heat-db-sync-fbbdq\" (UID: \"00916d9f-8ce3-47d9-a32f-e2deb3514ede\") " pod="openstack/heat-db-sync-fbbdq"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.741680 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00916d9f-8ce3-47d9-a32f-e2deb3514ede-config-data\") pod \"heat-db-sync-fbbdq\" (UID: \"00916d9f-8ce3-47d9-a32f-e2deb3514ede\") " pod="openstack/heat-db-sync-fbbdq"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.748305 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00916d9f-8ce3-47d9-a32f-e2deb3514ede-config-data\") pod \"heat-db-sync-fbbdq\" (UID: \"00916d9f-8ce3-47d9-a32f-e2deb3514ede\") " pod="openstack/heat-db-sync-fbbdq"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.750990 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00916d9f-8ce3-47d9-a32f-e2deb3514ede-combined-ca-bundle\") pod \"heat-db-sync-fbbdq\" (UID: \"00916d9f-8ce3-47d9-a32f-e2deb3514ede\") " pod="openstack/heat-db-sync-fbbdq"
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.784108 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6994f59557-zb5qf"]
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.821451 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-4gs5w"]
Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.823190 4681 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/cinder-db-sync-4gs5w" Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.853391 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/45b8faad-ff9c-4acb-bcf3-9b1efccbce7d-scripts\") pod \"horizon-6994f59557-zb5qf\" (UID: \"45b8faad-ff9c-4acb-bcf3-9b1efccbce7d\") " pod="openstack/horizon-6994f59557-zb5qf" Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.853487 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29h29\" (UniqueName: \"kubernetes.io/projected/45b8faad-ff9c-4acb-bcf3-9b1efccbce7d-kube-api-access-29h29\") pod \"horizon-6994f59557-zb5qf\" (UID: \"45b8faad-ff9c-4acb-bcf3-9b1efccbce7d\") " pod="openstack/horizon-6994f59557-zb5qf" Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.853576 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45b8faad-ff9c-4acb-bcf3-9b1efccbce7d-logs\") pod \"horizon-6994f59557-zb5qf\" (UID: \"45b8faad-ff9c-4acb-bcf3-9b1efccbce7d\") " pod="openstack/horizon-6994f59557-zb5qf" Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.853665 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/45b8faad-ff9c-4acb-bcf3-9b1efccbce7d-horizon-secret-key\") pod \"horizon-6994f59557-zb5qf\" (UID: \"45b8faad-ff9c-4acb-bcf3-9b1efccbce7d\") " pod="openstack/horizon-6994f59557-zb5qf" Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.853698 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45b8faad-ff9c-4acb-bcf3-9b1efccbce7d-config-data\") pod \"horizon-6994f59557-zb5qf\" (UID: \"45b8faad-ff9c-4acb-bcf3-9b1efccbce7d\") " pod="openstack/horizon-6994f59557-zb5qf" Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.854307 4681 generic.go:334] "Generic (PLEG): container finished" podID="e22560a2-a6bf-4b36-ad91-e076ad9d5af1" containerID="93782f4645aac2cf7b816ac19900d25440eccc9fa393558a84475dac62878b9c" exitCode=0 Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.854343 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-648ff47655-tp296" event={"ID":"e22560a2-a6bf-4b36-ad91-e076ad9d5af1","Type":"ContainerDied","Data":"93782f4645aac2cf7b816ac19900d25440eccc9fa393558a84475dac62878b9c"} Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.862140 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b54fd9f79-p7jnd" Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.877932 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.879108 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-4sp47" Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.885963 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.895140 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-426hm\" (UniqueName: \"kubernetes.io/projected/00916d9f-8ce3-47d9-a32f-e2deb3514ede-kube-api-access-426hm\") pod \"heat-db-sync-fbbdq\" (UID: \"00916d9f-8ce3-47d9-a32f-e2deb3514ede\") " pod="openstack/heat-db-sync-fbbdq" Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.986320 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-4gs5w"] Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.995576 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d426ed81-18f9-441e-9865-b9a6d683931f-combined-ca-bundle\") pod \"cinder-db-sync-4gs5w\" (UID: \"d426ed81-18f9-441e-9865-b9a6d683931f\") " pod="openstack/cinder-db-sync-4gs5w" Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.995626 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45b8faad-ff9c-4acb-bcf3-9b1efccbce7d-logs\") pod \"horizon-6994f59557-zb5qf\" (UID: \"45b8faad-ff9c-4acb-bcf3-9b1efccbce7d\") " pod="openstack/horizon-6994f59557-zb5qf" Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.995650 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwbq9\" (UniqueName: \"kubernetes.io/projected/d426ed81-18f9-441e-9865-b9a6d683931f-kube-api-access-fwbq9\") pod \"cinder-db-sync-4gs5w\" (UID: \"d426ed81-18f9-441e-9865-b9a6d683931f\") " pod="openstack/cinder-db-sync-4gs5w" Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.995681 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d426ed81-18f9-441e-9865-b9a6d683931f-config-data\") pod \"cinder-db-sync-4gs5w\" (UID: \"d426ed81-18f9-441e-9865-b9a6d683931f\") " pod="openstack/cinder-db-sync-4gs5w" Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.995709 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d426ed81-18f9-441e-9865-b9a6d683931f-db-sync-config-data\") pod \"cinder-db-sync-4gs5w\" (UID: \"d426ed81-18f9-441e-9865-b9a6d683931f\") " pod="openstack/cinder-db-sync-4gs5w" Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.995739 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/45b8faad-ff9c-4acb-bcf3-9b1efccbce7d-horizon-secret-key\") pod \"horizon-6994f59557-zb5qf\" (UID: \"45b8faad-ff9c-4acb-bcf3-9b1efccbce7d\") " pod="openstack/horizon-6994f59557-zb5qf" Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.995761 4681 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45b8faad-ff9c-4acb-bcf3-9b1efccbce7d-config-data\") pod \"horizon-6994f59557-zb5qf\" (UID: \"45b8faad-ff9c-4acb-bcf3-9b1efccbce7d\") " pod="openstack/horizon-6994f59557-zb5qf" Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.995798 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d426ed81-18f9-441e-9865-b9a6d683931f-scripts\") pod \"cinder-db-sync-4gs5w\" (UID: \"d426ed81-18f9-441e-9865-b9a6d683931f\") " pod="openstack/cinder-db-sync-4gs5w" Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.995848 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d426ed81-18f9-441e-9865-b9a6d683931f-etc-machine-id\") pod \"cinder-db-sync-4gs5w\" (UID: \"d426ed81-18f9-441e-9865-b9a6d683931f\") " pod="openstack/cinder-db-sync-4gs5w" Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.995886 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/45b8faad-ff9c-4acb-bcf3-9b1efccbce7d-scripts\") pod \"horizon-6994f59557-zb5qf\" (UID: \"45b8faad-ff9c-4acb-bcf3-9b1efccbce7d\") " pod="openstack/horizon-6994f59557-zb5qf" Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.995919 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29h29\" (UniqueName: \"kubernetes.io/projected/45b8faad-ff9c-4acb-bcf3-9b1efccbce7d-kube-api-access-29h29\") pod \"horizon-6994f59557-zb5qf\" (UID: \"45b8faad-ff9c-4acb-bcf3-9b1efccbce7d\") " pod="openstack/horizon-6994f59557-zb5qf" Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.996517 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45b8faad-ff9c-4acb-bcf3-9b1efccbce7d-logs\") pod \"horizon-6994f59557-zb5qf\" (UID: \"45b8faad-ff9c-4acb-bcf3-9b1efccbce7d\") " pod="openstack/horizon-6994f59557-zb5qf" Nov 23 06:58:48 crc kubenswrapper[4681]: I1123 06:58:48.998408 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/45b8faad-ff9c-4acb-bcf3-9b1efccbce7d-scripts\") pod \"horizon-6994f59557-zb5qf\" (UID: \"45b8faad-ff9c-4acb-bcf3-9b1efccbce7d\") " pod="openstack/horizon-6994f59557-zb5qf" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:48.999149 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45b8faad-ff9c-4acb-bcf3-9b1efccbce7d-config-data\") pod \"horizon-6994f59557-zb5qf\" (UID: \"45b8faad-ff9c-4acb-bcf3-9b1efccbce7d\") " pod="openstack/horizon-6994f59557-zb5qf" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.009629 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/45b8faad-ff9c-4acb-bcf3-9b1efccbce7d-horizon-secret-key\") pod \"horizon-6994f59557-zb5qf\" (UID: \"45b8faad-ff9c-4acb-bcf3-9b1efccbce7d\") " pod="openstack/horizon-6994f59557-zb5qf" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.074189 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29h29\" (UniqueName: \"kubernetes.io/projected/45b8faad-ff9c-4acb-bcf3-9b1efccbce7d-kube-api-access-29h29\") pod \"horizon-6994f59557-zb5qf\" (UID: 
\"45b8faad-ff9c-4acb-bcf3-9b1efccbce7d\") " pod="openstack/horizon-6994f59557-zb5qf" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.083897 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-xbhpv"] Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.088514 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-xbhpv" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.099083 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d426ed81-18f9-441e-9865-b9a6d683931f-combined-ca-bundle\") pod \"cinder-db-sync-4gs5w\" (UID: \"d426ed81-18f9-441e-9865-b9a6d683931f\") " pod="openstack/cinder-db-sync-4gs5w" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.099130 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwbq9\" (UniqueName: \"kubernetes.io/projected/d426ed81-18f9-441e-9865-b9a6d683931f-kube-api-access-fwbq9\") pod \"cinder-db-sync-4gs5w\" (UID: \"d426ed81-18f9-441e-9865-b9a6d683931f\") " pod="openstack/cinder-db-sync-4gs5w" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.099181 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d426ed81-18f9-441e-9865-b9a6d683931f-config-data\") pod \"cinder-db-sync-4gs5w\" (UID: \"d426ed81-18f9-441e-9865-b9a6d683931f\") " pod="openstack/cinder-db-sync-4gs5w" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.099985 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d426ed81-18f9-441e-9865-b9a6d683931f-db-sync-config-data\") pod \"cinder-db-sync-4gs5w\" (UID: \"d426ed81-18f9-441e-9865-b9a6d683931f\") " pod="openstack/cinder-db-sync-4gs5w" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.100077 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d426ed81-18f9-441e-9865-b9a6d683931f-scripts\") pod \"cinder-db-sync-4gs5w\" (UID: \"d426ed81-18f9-441e-9865-b9a6d683931f\") " pod="openstack/cinder-db-sync-4gs5w" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.100152 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d426ed81-18f9-441e-9865-b9a6d683931f-etc-machine-id\") pod \"cinder-db-sync-4gs5w\" (UID: \"d426ed81-18f9-441e-9865-b9a6d683931f\") " pod="openstack/cinder-db-sync-4gs5w" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.100275 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d426ed81-18f9-441e-9865-b9a6d683931f-etc-machine-id\") pod \"cinder-db-sync-4gs5w\" (UID: \"d426ed81-18f9-441e-9865-b9a6d683931f\") " pod="openstack/cinder-db-sync-4gs5w" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.113593 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d426ed81-18f9-441e-9865-b9a6d683931f-combined-ca-bundle\") pod \"cinder-db-sync-4gs5w\" (UID: \"d426ed81-18f9-441e-9865-b9a6d683931f\") " pod="openstack/cinder-db-sync-4gs5w" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.126426 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-xbhpv"] Nov 23 
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.142333 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d426ed81-18f9-441e-9865-b9a6d683931f-config-data\") pod \"cinder-db-sync-4gs5w\" (UID: \"d426ed81-18f9-441e-9865-b9a6d683931f\") " pod="openstack/cinder-db-sync-4gs5w"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.142984 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-qn8qf"]
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.145151 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-qn8qf"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.155189 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.155385 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.155601 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-sc6p8"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.158601 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d426ed81-18f9-441e-9865-b9a6d683931f-db-sync-config-data\") pod \"cinder-db-sync-4gs5w\" (UID: \"d426ed81-18f9-441e-9865-b9a6d683931f\") " pod="openstack/cinder-db-sync-4gs5w"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.159542 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d426ed81-18f9-441e-9865-b9a6d683931f-scripts\") pod \"cinder-db-sync-4gs5w\" (UID: \"d426ed81-18f9-441e-9865-b9a6d683931f\") " pod="openstack/cinder-db-sync-4gs5w"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.181438 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-fbbdq"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.181980 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.182261 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.182352 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-6dmm8"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.201043 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4cc57e44-7957-4d3a-b9c9-2da622ea38a0-config\") pod \"neutron-db-sync-xbhpv\" (UID: \"4cc57e44-7957-4d3a-b9c9-2da622ea38a0\") " pod="openstack/neutron-db-sync-xbhpv"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.201084 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlm4f\" (UniqueName: \"kubernetes.io/projected/4cc57e44-7957-4d3a-b9c9-2da622ea38a0-kube-api-access-qlm4f\") pod \"neutron-db-sync-xbhpv\" (UID: \"4cc57e44-7957-4d3a-b9c9-2da622ea38a0\") " pod="openstack/neutron-db-sync-xbhpv"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.201109 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31fd09f2-734b-4427-8b5b-65711b24bbb5-config-data\") pod \"placement-db-sync-qn8qf\" (UID: \"31fd09f2-734b-4427-8b5b-65711b24bbb5\") " pod="openstack/placement-db-sync-qn8qf"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.201133 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31fd09f2-734b-4427-8b5b-65711b24bbb5-combined-ca-bundle\") pod \"placement-db-sync-qn8qf\" (UID: \"31fd09f2-734b-4427-8b5b-65711b24bbb5\") " pod="openstack/placement-db-sync-qn8qf"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.201155 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31fd09f2-734b-4427-8b5b-65711b24bbb5-scripts\") pod \"placement-db-sync-qn8qf\" (UID: \"31fd09f2-734b-4427-8b5b-65711b24bbb5\") " pod="openstack/placement-db-sync-qn8qf"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.201196 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d7fj\" (UniqueName: \"kubernetes.io/projected/31fd09f2-734b-4427-8b5b-65711b24bbb5-kube-api-access-2d7fj\") pod \"placement-db-sync-qn8qf\" (UID: \"31fd09f2-734b-4427-8b5b-65711b24bbb5\") " pod="openstack/placement-db-sync-qn8qf"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.201227 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cc57e44-7957-4d3a-b9c9-2da622ea38a0-combined-ca-bundle\") pod \"neutron-db-sync-xbhpv\" (UID: \"4cc57e44-7957-4d3a-b9c9-2da622ea38a0\") " pod="openstack/neutron-db-sync-xbhpv"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.201241 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31fd09f2-734b-4427-8b5b-65711b24bbb5-logs\") pod \"placement-db-sync-qn8qf\" (UID: \"31fd09f2-734b-4427-8b5b-65711b24bbb5\") " pod="openstack/placement-db-sync-qn8qf"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.239292 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwbq9\" (UniqueName: \"kubernetes.io/projected/d426ed81-18f9-441e-9865-b9a6d683931f-kube-api-access-fwbq9\") pod \"cinder-db-sync-4gs5w\" (UID: \"d426ed81-18f9-441e-9865-b9a6d683931f\") " pod="openstack/cinder-db-sync-4gs5w"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.249767 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6c5444c6b5-7cd6d"]
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.253644 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6c5444c6b5-7cd6d"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.297418 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.304233 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4cc57e44-7957-4d3a-b9c9-2da622ea38a0-config\") pod \"neutron-db-sync-xbhpv\" (UID: \"4cc57e44-7957-4d3a-b9c9-2da622ea38a0\") " pod="openstack/neutron-db-sync-xbhpv"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.304294 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlm4f\" (UniqueName: \"kubernetes.io/projected/4cc57e44-7957-4d3a-b9c9-2da622ea38a0-kube-api-access-qlm4f\") pod \"neutron-db-sync-xbhpv\" (UID: \"4cc57e44-7957-4d3a-b9c9-2da622ea38a0\") " pod="openstack/neutron-db-sync-xbhpv"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.304321 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31fd09f2-734b-4427-8b5b-65711b24bbb5-config-data\") pod \"placement-db-sync-qn8qf\" (UID: \"31fd09f2-734b-4427-8b5b-65711b24bbb5\") " pod="openstack/placement-db-sync-qn8qf"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.304358 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31fd09f2-734b-4427-8b5b-65711b24bbb5-combined-ca-bundle\") pod \"placement-db-sync-qn8qf\" (UID: \"31fd09f2-734b-4427-8b5b-65711b24bbb5\") " pod="openstack/placement-db-sync-qn8qf"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.304386 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31fd09f2-734b-4427-8b5b-65711b24bbb5-scripts\") pod \"placement-db-sync-qn8qf\" (UID: \"31fd09f2-734b-4427-8b5b-65711b24bbb5\") " pod="openstack/placement-db-sync-qn8qf"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.304469 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d7fj\" (UniqueName: \"kubernetes.io/projected/31fd09f2-734b-4427-8b5b-65711b24bbb5-kube-api-access-2d7fj\") pod \"placement-db-sync-qn8qf\" (UID: \"31fd09f2-734b-4427-8b5b-65711b24bbb5\") " pod="openstack/placement-db-sync-qn8qf"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.304531 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cc57e44-7957-4d3a-b9c9-2da622ea38a0-combined-ca-bundle\") pod \"neutron-db-sync-xbhpv\" (UID: \"4cc57e44-7957-4d3a-b9c9-2da622ea38a0\") " pod="openstack/neutron-db-sync-xbhpv"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.304553 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31fd09f2-734b-4427-8b5b-65711b24bbb5-logs\") pod \"placement-db-sync-qn8qf\" (UID: \"31fd09f2-734b-4427-8b5b-65711b24bbb5\") " pod="openstack/placement-db-sync-qn8qf"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.317702 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31fd09f2-734b-4427-8b5b-65711b24bbb5-logs\") pod \"placement-db-sync-qn8qf\" (UID: \"31fd09f2-734b-4427-8b5b-65711b24bbb5\") " pod="openstack/placement-db-sync-qn8qf"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.333661 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-qn8qf"]
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.333808 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.340088 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4cc57e44-7957-4d3a-b9c9-2da622ea38a0-config\") pod \"neutron-db-sync-xbhpv\" (UID: \"4cc57e44-7957-4d3a-b9c9-2da622ea38a0\") " pod="openstack/neutron-db-sync-xbhpv"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.350991 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.359182 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31fd09f2-734b-4427-8b5b-65711b24bbb5-combined-ca-bundle\") pod \"placement-db-sync-qn8qf\" (UID: \"31fd09f2-734b-4427-8b5b-65711b24bbb5\") " pod="openstack/placement-db-sync-qn8qf"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.360000 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31fd09f2-734b-4427-8b5b-65711b24bbb5-scripts\") pod \"placement-db-sync-qn8qf\" (UID: \"31fd09f2-734b-4427-8b5b-65711b24bbb5\") " pod="openstack/placement-db-sync-qn8qf"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.360314 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cc57e44-7957-4d3a-b9c9-2da622ea38a0-combined-ca-bundle\") pod \"neutron-db-sync-xbhpv\" (UID: \"4cc57e44-7957-4d3a-b9c9-2da622ea38a0\") " pod="openstack/neutron-db-sync-xbhpv"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.360879 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31fd09f2-734b-4427-8b5b-65711b24bbb5-config-data\") pod \"placement-db-sync-qn8qf\" (UID: \"31fd09f2-734b-4427-8b5b-65711b24bbb5\") " pod="openstack/placement-db-sync-qn8qf"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.369662 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.373567 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.409418 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d7fj\" (UniqueName: \"kubernetes.io/projected/31fd09f2-734b-4427-8b5b-65711b24bbb5-kube-api-access-2d7fj\") pod \"placement-db-sync-qn8qf\" (UID: \"31fd09f2-734b-4427-8b5b-65711b24bbb5\") " pod="openstack/placement-db-sync-qn8qf"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.413553 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-frn6w"]
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.414772 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-frn6w"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.419702 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-l56nd"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.424515 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/203e0f9e-791d-4b8e-9521-b7b334fcacf6-horizon-secret-key\") pod \"horizon-6c5444c6b5-7cd6d\" (UID: \"203e0f9e-791d-4b8e-9521-b7b334fcacf6\") " pod="openstack/horizon-6c5444c6b5-7cd6d"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.424626 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbv6b\" (UniqueName: \"kubernetes.io/projected/203e0f9e-791d-4b8e-9521-b7b334fcacf6-kube-api-access-dbv6b\") pod \"horizon-6c5444c6b5-7cd6d\" (UID: \"203e0f9e-791d-4b8e-9521-b7b334fcacf6\") " pod="openstack/horizon-6c5444c6b5-7cd6d"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.424876 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/203e0f9e-791d-4b8e-9521-b7b334fcacf6-scripts\") pod \"horizon-6c5444c6b5-7cd6d\" (UID: \"203e0f9e-791d-4b8e-9521-b7b334fcacf6\") " pod="openstack/horizon-6c5444c6b5-7cd6d"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.424924 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/203e0f9e-791d-4b8e-9521-b7b334fcacf6-logs\") pod \"horizon-6c5444c6b5-7cd6d\" (UID: \"203e0f9e-791d-4b8e-9521-b7b334fcacf6\") " pod="openstack/horizon-6c5444c6b5-7cd6d"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.424974 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/203e0f9e-791d-4b8e-9521-b7b334fcacf6-config-data\") pod \"horizon-6c5444c6b5-7cd6d\" (UID: \"203e0f9e-791d-4b8e-9521-b7b334fcacf6\") " pod="openstack/horizon-6c5444c6b5-7cd6d"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.434983 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.438029 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlm4f\" (UniqueName: \"kubernetes.io/projected/4cc57e44-7957-4d3a-b9c9-2da622ea38a0-kube-api-access-qlm4f\") pod \"neutron-db-sync-xbhpv\" (UID: \"4cc57e44-7957-4d3a-b9c9-2da622ea38a0\") " pod="openstack/neutron-db-sync-xbhpv"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.446855 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6c5444c6b5-7cd6d"]
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.470987 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-4gs5w"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.477773 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b54fd9f79-p7jnd"]
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.478331 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-xbhpv"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.499525 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-frn6w"]
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.514909 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-qn8qf"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.528417 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2483649a-baa7-4c82-92d5-b3e2aff97ab2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2483649a-baa7-4c82-92d5-b3e2aff97ab2\") " pod="openstack/ceilometer-0"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.528453 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2483649a-baa7-4c82-92d5-b3e2aff97ab2-scripts\") pod \"ceilometer-0\" (UID: \"2483649a-baa7-4c82-92d5-b3e2aff97ab2\") " pod="openstack/ceilometer-0"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.528498 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2483649a-baa7-4c82-92d5-b3e2aff97ab2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2483649a-baa7-4c82-92d5-b3e2aff97ab2\") " pod="openstack/ceilometer-0"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.528517 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2483649a-baa7-4c82-92d5-b3e2aff97ab2-run-httpd\") pod \"ceilometer-0\" (UID: \"2483649a-baa7-4c82-92d5-b3e2aff97ab2\") " pod="openstack/ceilometer-0"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.528557 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2gxn\" (UniqueName: \"kubernetes.io/projected/2483649a-baa7-4c82-92d5-b3e2aff97ab2-kube-api-access-m2gxn\") pod \"ceilometer-0\" (UID: \"2483649a-baa7-4c82-92d5-b3e2aff97ab2\") " pod="openstack/ceilometer-0"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.528582 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/203e0f9e-791d-4b8e-9521-b7b334fcacf6-scripts\") pod \"horizon-6c5444c6b5-7cd6d\" (UID: \"203e0f9e-791d-4b8e-9521-b7b334fcacf6\") " pod="openstack/horizon-6c5444c6b5-7cd6d"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.528606 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/203e0f9e-791d-4b8e-9521-b7b334fcacf6-logs\") pod \"horizon-6c5444c6b5-7cd6d\" (UID: \"203e0f9e-791d-4b8e-9521-b7b334fcacf6\") " pod="openstack/horizon-6c5444c6b5-7cd6d"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.528626 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2483649a-baa7-4c82-92d5-b3e2aff97ab2-log-httpd\") pod \"ceilometer-0\" (UID: \"2483649a-baa7-4c82-92d5-b3e2aff97ab2\") " pod="openstack/ceilometer-0"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.528647 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/203e0f9e-791d-4b8e-9521-b7b334fcacf6-config-data\") pod \"horizon-6c5444c6b5-7cd6d\" (UID: \"203e0f9e-791d-4b8e-9521-b7b334fcacf6\") " pod="openstack/horizon-6c5444c6b5-7cd6d"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.528683 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95e9b025-0fa7-4a41-a18c-e4f078b82c43-combined-ca-bundle\") pod \"barbican-db-sync-frn6w\" (UID: \"95e9b025-0fa7-4a41-a18c-e4f078b82c43\") " pod="openstack/barbican-db-sync-frn6w"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.528870 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/203e0f9e-791d-4b8e-9521-b7b334fcacf6-horizon-secret-key\") pod \"horizon-6c5444c6b5-7cd6d\" (UID: \"203e0f9e-791d-4b8e-9521-b7b334fcacf6\") " pod="openstack/horizon-6c5444c6b5-7cd6d"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.528949 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/95e9b025-0fa7-4a41-a18c-e4f078b82c43-db-sync-config-data\") pod \"barbican-db-sync-frn6w\" (UID: \"95e9b025-0fa7-4a41-a18c-e4f078b82c43\") " pod="openstack/barbican-db-sync-frn6w"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.528999 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbv6b\" (UniqueName: \"kubernetes.io/projected/203e0f9e-791d-4b8e-9521-b7b334fcacf6-kube-api-access-dbv6b\") pod \"horizon-6c5444c6b5-7cd6d\" (UID: \"203e0f9e-791d-4b8e-9521-b7b334fcacf6\") " pod="openstack/horizon-6c5444c6b5-7cd6d"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.529031 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2483649a-baa7-4c82-92d5-b3e2aff97ab2-config-data\") pod \"ceilometer-0\" (UID: \"2483649a-baa7-4c82-92d5-b3e2aff97ab2\") " pod="openstack/ceilometer-0"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.529073 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjdl9\" (UniqueName: \"kubernetes.io/projected/95e9b025-0fa7-4a41-a18c-e4f078b82c43-kube-api-access-jjdl9\") pod \"barbican-db-sync-frn6w\" (UID: \"95e9b025-0fa7-4a41-a18c-e4f078b82c43\") " pod="openstack/barbican-db-sync-frn6w"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.543590 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/203e0f9e-791d-4b8e-9521-b7b334fcacf6-horizon-secret-key\") pod \"horizon-6c5444c6b5-7cd6d\" (UID: \"203e0f9e-791d-4b8e-9521-b7b334fcacf6\") " pod="openstack/horizon-6c5444c6b5-7cd6d"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.550969 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/203e0f9e-791d-4b8e-9521-b7b334fcacf6-scripts\") pod \"horizon-6c5444c6b5-7cd6d\" (UID: \"203e0f9e-791d-4b8e-9521-b7b334fcacf6\") " pod="openstack/horizon-6c5444c6b5-7cd6d"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.551869 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/203e0f9e-791d-4b8e-9521-b7b334fcacf6-config-data\") pod \"horizon-6c5444c6b5-7cd6d\" (UID: \"203e0f9e-791d-4b8e-9521-b7b334fcacf6\") " pod="openstack/horizon-6c5444c6b5-7cd6d"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.555117 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.556629 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.562779 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/203e0f9e-791d-4b8e-9521-b7b334fcacf6-logs\") pod \"horizon-6c5444c6b5-7cd6d\" (UID: \"203e0f9e-791d-4b8e-9521-b7b334fcacf6\") " pod="openstack/horizon-6c5444c6b5-7cd6d"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.628523 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.633621 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2483649a-baa7-4c82-92d5-b3e2aff97ab2-log-httpd\") pod \"ceilometer-0\" (UID: \"2483649a-baa7-4c82-92d5-b3e2aff97ab2\") " pod="openstack/ceilometer-0"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.633715 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95e9b025-0fa7-4a41-a18c-e4f078b82c43-combined-ca-bundle\") pod \"barbican-db-sync-frn6w\" (UID: \"95e9b025-0fa7-4a41-a18c-e4f078b82c43\") " pod="openstack/barbican-db-sync-frn6w"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.633808 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/95e9b025-0fa7-4a41-a18c-e4f078b82c43-db-sync-config-data\") pod \"barbican-db-sync-frn6w\" (UID: \"95e9b025-0fa7-4a41-a18c-e4f078b82c43\") " pod="openstack/barbican-db-sync-frn6w"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.633851 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2483649a-baa7-4c82-92d5-b3e2aff97ab2-config-data\") pod \"ceilometer-0\" (UID: \"2483649a-baa7-4c82-92d5-b3e2aff97ab2\") " pod="openstack/ceilometer-0"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.633884 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjdl9\" (UniqueName: \"kubernetes.io/projected/95e9b025-0fa7-4a41-a18c-e4f078b82c43-kube-api-access-jjdl9\") pod \"barbican-db-sync-frn6w\" (UID: \"95e9b025-0fa7-4a41-a18c-e4f078b82c43\") " pod="openstack/barbican-db-sync-frn6w"
Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.633941 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2483649a-baa7-4c82-92d5-b3e2aff97ab2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2483649a-baa7-4c82-92d5-b3e2aff97ab2\") " pod="openstack/ceilometer-0"
\"kubernetes.io/secret/2483649a-baa7-4c82-92d5-b3e2aff97ab2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2483649a-baa7-4c82-92d5-b3e2aff97ab2\") " pod="openstack/ceilometer-0" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.633974 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2483649a-baa7-4c82-92d5-b3e2aff97ab2-scripts\") pod \"ceilometer-0\" (UID: \"2483649a-baa7-4c82-92d5-b3e2aff97ab2\") " pod="openstack/ceilometer-0" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.634014 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2483649a-baa7-4c82-92d5-b3e2aff97ab2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2483649a-baa7-4c82-92d5-b3e2aff97ab2\") " pod="openstack/ceilometer-0" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.634037 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2483649a-baa7-4c82-92d5-b3e2aff97ab2-run-httpd\") pod \"ceilometer-0\" (UID: \"2483649a-baa7-4c82-92d5-b3e2aff97ab2\") " pod="openstack/ceilometer-0" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.634057 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2gxn\" (UniqueName: \"kubernetes.io/projected/2483649a-baa7-4c82-92d5-b3e2aff97ab2-kube-api-access-m2gxn\") pod \"ceilometer-0\" (UID: \"2483649a-baa7-4c82-92d5-b3e2aff97ab2\") " pod="openstack/ceilometer-0" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.634219 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2483649a-baa7-4c82-92d5-b3e2aff97ab2-log-httpd\") pod \"ceilometer-0\" (UID: \"2483649a-baa7-4c82-92d5-b3e2aff97ab2\") " pod="openstack/ceilometer-0" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.655534 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2483649a-baa7-4c82-92d5-b3e2aff97ab2-scripts\") pod \"ceilometer-0\" (UID: \"2483649a-baa7-4c82-92d5-b3e2aff97ab2\") " pod="openstack/ceilometer-0" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.658235 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.658436 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.658867 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-r52k4" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.658995 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.663533 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbv6b\" (UniqueName: \"kubernetes.io/projected/203e0f9e-791d-4b8e-9521-b7b334fcacf6-kube-api-access-dbv6b\") pod \"horizon-6c5444c6b5-7cd6d\" (UID: \"203e0f9e-791d-4b8e-9521-b7b334fcacf6\") " pod="openstack/horizon-6c5444c6b5-7cd6d" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.675708 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/2483649a-baa7-4c82-92d5-b3e2aff97ab2-run-httpd\") pod \"ceilometer-0\" (UID: \"2483649a-baa7-4c82-92d5-b3e2aff97ab2\") " pod="openstack/ceilometer-0" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.683913 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d78ff46f5-xfmdq"] Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.687405 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d78ff46f5-xfmdq" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.694699 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjdl9\" (UniqueName: \"kubernetes.io/projected/95e9b025-0fa7-4a41-a18c-e4f078b82c43-kube-api-access-jjdl9\") pod \"barbican-db-sync-frn6w\" (UID: \"95e9b025-0fa7-4a41-a18c-e4f078b82c43\") " pod="openstack/barbican-db-sync-frn6w" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.695234 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-648ff47655-tp296" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.696966 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d78ff46f5-xfmdq"] Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.699993 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2gxn\" (UniqueName: \"kubernetes.io/projected/2483649a-baa7-4c82-92d5-b3e2aff97ab2-kube-api-access-m2gxn\") pod \"ceilometer-0\" (UID: \"2483649a-baa7-4c82-92d5-b3e2aff97ab2\") " pod="openstack/ceilometer-0" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.717845 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2483649a-baa7-4c82-92d5-b3e2aff97ab2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2483649a-baa7-4c82-92d5-b3e2aff97ab2\") " pod="openstack/ceilometer-0" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.718669 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2483649a-baa7-4c82-92d5-b3e2aff97ab2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2483649a-baa7-4c82-92d5-b3e2aff97ab2\") " pod="openstack/ceilometer-0" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.726589 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2483649a-baa7-4c82-92d5-b3e2aff97ab2-config-data\") pod \"ceilometer-0\" (UID: \"2483649a-baa7-4c82-92d5-b3e2aff97ab2\") " pod="openstack/ceilometer-0" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.728061 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95e9b025-0fa7-4a41-a18c-e4f078b82c43-combined-ca-bundle\") pod \"barbican-db-sync-frn6w\" (UID: \"95e9b025-0fa7-4a41-a18c-e4f078b82c43\") " pod="openstack/barbican-db-sync-frn6w" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.736874 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/95e9b025-0fa7-4a41-a18c-e4f078b82c43-db-sync-config-data\") pod \"barbican-db-sync-frn6w\" (UID: \"95e9b025-0fa7-4a41-a18c-e4f078b82c43\") " pod="openstack/barbican-db-sync-frn6w" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.737521 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e22560a2-a6bf-4b36-ad91-e076ad9d5af1-dns-swift-storage-0\") pod \"e22560a2-a6bf-4b36-ad91-e076ad9d5af1\" (UID: \"e22560a2-a6bf-4b36-ad91-e076ad9d5af1\") " Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.737573 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e22560a2-a6bf-4b36-ad91-e076ad9d5af1-dns-svc\") pod \"e22560a2-a6bf-4b36-ad91-e076ad9d5af1\" (UID: \"e22560a2-a6bf-4b36-ad91-e076ad9d5af1\") " Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.737684 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e22560a2-a6bf-4b36-ad91-e076ad9d5af1-config\") pod \"e22560a2-a6bf-4b36-ad91-e076ad9d5af1\" (UID: \"e22560a2-a6bf-4b36-ad91-e076ad9d5af1\") " Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.737751 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e22560a2-a6bf-4b36-ad91-e076ad9d5af1-ovsdbserver-sb\") pod \"e22560a2-a6bf-4b36-ad91-e076ad9d5af1\" (UID: \"e22560a2-a6bf-4b36-ad91-e076ad9d5af1\") " Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.737768 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e22560a2-a6bf-4b36-ad91-e076ad9d5af1-ovsdbserver-nb\") pod \"e22560a2-a6bf-4b36-ad91-e076ad9d5af1\" (UID: \"e22560a2-a6bf-4b36-ad91-e076ad9d5af1\") " Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.737813 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b277k\" (UniqueName: \"kubernetes.io/projected/e22560a2-a6bf-4b36-ad91-e076ad9d5af1-kube-api-access-b277k\") pod \"e22560a2-a6bf-4b36-ad91-e076ad9d5af1\" (UID: \"e22560a2-a6bf-4b36-ad91-e076ad9d5af1\") " Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.738751 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"550d99c6-05a8-4019-b949-d8e57a7fefc5\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.738857 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/550d99c6-05a8-4019-b949-d8e57a7fefc5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"550d99c6-05a8-4019-b949-d8e57a7fefc5\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.738928 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/550d99c6-05a8-4019-b949-d8e57a7fefc5-scripts\") pod \"glance-default-external-api-0\" (UID: \"550d99c6-05a8-4019-b949-d8e57a7fefc5\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.738984 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/550d99c6-05a8-4019-b949-d8e57a7fefc5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"550d99c6-05a8-4019-b949-d8e57a7fefc5\") " 
pod="openstack/glance-default-external-api-0" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.739128 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/550d99c6-05a8-4019-b949-d8e57a7fefc5-config-data\") pod \"glance-default-external-api-0\" (UID: \"550d99c6-05a8-4019-b949-d8e57a7fefc5\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.739233 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/550d99c6-05a8-4019-b949-d8e57a7fefc5-logs\") pod \"glance-default-external-api-0\" (UID: \"550d99c6-05a8-4019-b949-d8e57a7fefc5\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.739371 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/550d99c6-05a8-4019-b949-d8e57a7fefc5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"550d99c6-05a8-4019-b949-d8e57a7fefc5\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.739435 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgl74\" (UniqueName: \"kubernetes.io/projected/550d99c6-05a8-4019-b949-d8e57a7fefc5-kube-api-access-hgl74\") pod \"glance-default-external-api-0\" (UID: \"550d99c6-05a8-4019-b949-d8e57a7fefc5\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.764915 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e22560a2-a6bf-4b36-ad91-e076ad9d5af1-kube-api-access-b277k" (OuterVolumeSpecName: "kube-api-access-b277k") pod "e22560a2-a6bf-4b36-ad91-e076ad9d5af1" (UID: "e22560a2-a6bf-4b36-ad91-e076ad9d5af1"). InnerVolumeSpecName "kube-api-access-b277k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.841451 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/550d99c6-05a8-4019-b949-d8e57a7fefc5-logs\") pod \"glance-default-external-api-0\" (UID: \"550d99c6-05a8-4019-b949-d8e57a7fefc5\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.842247 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/550d99c6-05a8-4019-b949-d8e57a7fefc5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"550d99c6-05a8-4019-b949-d8e57a7fefc5\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.842341 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgl74\" (UniqueName: \"kubernetes.io/projected/550d99c6-05a8-4019-b949-d8e57a7fefc5-kube-api-access-hgl74\") pod \"glance-default-external-api-0\" (UID: \"550d99c6-05a8-4019-b949-d8e57a7fefc5\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.842955 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"550d99c6-05a8-4019-b949-d8e57a7fefc5\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.843032 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/550d99c6-05a8-4019-b949-d8e57a7fefc5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"550d99c6-05a8-4019-b949-d8e57a7fefc5\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.843070 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/550d99c6-05a8-4019-b949-d8e57a7fefc5-scripts\") pod \"glance-default-external-api-0\" (UID: \"550d99c6-05a8-4019-b949-d8e57a7fefc5\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.843109 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/550d99c6-05a8-4019-b949-d8e57a7fefc5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"550d99c6-05a8-4019-b949-d8e57a7fefc5\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.843216 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/550d99c6-05a8-4019-b949-d8e57a7fefc5-config-data\") pod \"glance-default-external-api-0\" (UID: \"550d99c6-05a8-4019-b949-d8e57a7fefc5\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.853248 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"550d99c6-05a8-4019-b949-d8e57a7fefc5\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Nov 23 06:58:49 crc 
kubenswrapper[4681]: I1123 06:58:49.843310 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b277k\" (UniqueName: \"kubernetes.io/projected/e22560a2-a6bf-4b36-ad91-e076ad9d5af1-kube-api-access-b277k\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.869861 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/550d99c6-05a8-4019-b949-d8e57a7fefc5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"550d99c6-05a8-4019-b949-d8e57a7fefc5\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.870513 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/550d99c6-05a8-4019-b949-d8e57a7fefc5-logs\") pod \"glance-default-external-api-0\" (UID: \"550d99c6-05a8-4019-b949-d8e57a7fefc5\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.892536 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-gq2qv"] Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.892949 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6c5444c6b5-7cd6d" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.901789 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-frn6w" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.914152 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/550d99c6-05a8-4019-b949-d8e57a7fefc5-scripts\") pod \"glance-default-external-api-0\" (UID: \"550d99c6-05a8-4019-b949-d8e57a7fefc5\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.914215 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e22560a2-a6bf-4b36-ad91-e076ad9d5af1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e22560a2-a6bf-4b36-ad91-e076ad9d5af1" (UID: "e22560a2-a6bf-4b36-ad91-e076ad9d5af1"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.914665 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/550d99c6-05a8-4019-b949-d8e57a7fefc5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"550d99c6-05a8-4019-b949-d8e57a7fefc5\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.915333 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/550d99c6-05a8-4019-b949-d8e57a7fefc5-config-data\") pod \"glance-default-external-api-0\" (UID: \"550d99c6-05a8-4019-b949-d8e57a7fefc5\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.939692 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 06:58:49 crc kubenswrapper[4681]: E1123 06:58:49.940134 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e22560a2-a6bf-4b36-ad91-e076ad9d5af1" containerName="init" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.940145 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="e22560a2-a6bf-4b36-ad91-e076ad9d5af1" containerName="init" Nov 23 06:58:49 crc kubenswrapper[4681]: E1123 06:58:49.940194 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e22560a2-a6bf-4b36-ad91-e076ad9d5af1" containerName="dnsmasq-dns" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.940199 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="e22560a2-a6bf-4b36-ad91-e076ad9d5af1" containerName="dnsmasq-dns" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.940387 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="e22560a2-a6bf-4b36-ad91-e076ad9d5af1" containerName="dnsmasq-dns" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.947542 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/550d99c6-05a8-4019-b949-d8e57a7fefc5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"550d99c6-05a8-4019-b949-d8e57a7fefc5\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.952916 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.988297 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.991772 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.992678 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgl74\" (UniqueName: \"kubernetes.io/projected/550d99c6-05a8-4019-b949-d8e57a7fefc5-kube-api-access-hgl74\") pod \"glance-default-external-api-0\" (UID: \"550d99c6-05a8-4019-b949-d8e57a7fefc5\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:49 crc kubenswrapper[4681]: I1123 06:58:49.998384 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.001398 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-648ff47655-tp296" event={"ID":"e22560a2-a6bf-4b36-ad91-e076ad9d5af1","Type":"ContainerDied","Data":"067e53d3901c6f6122d0dc027b2941605cd9c0afd6a91d2503d0162f257766ef"} Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.001480 4681 scope.go:117] "RemoveContainer" containerID="93782f4645aac2cf7b816ac19900d25440eccc9fa393558a84475dac62878b9c" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.001638 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-648ff47655-tp296" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.024066 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b9d5ea3-e589-4578-b37b-59e1690b4d34-ovsdbserver-nb\") pod \"dnsmasq-dns-d78ff46f5-xfmdq\" (UID: \"8b9d5ea3-e589-4578-b37b-59e1690b4d34\") " pod="openstack/dnsmasq-dns-d78ff46f5-xfmdq" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.024160 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8b9d5ea3-e589-4578-b37b-59e1690b4d34-dns-swift-storage-0\") pod \"dnsmasq-dns-d78ff46f5-xfmdq\" (UID: \"8b9d5ea3-e589-4578-b37b-59e1690b4d34\") " pod="openstack/dnsmasq-dns-d78ff46f5-xfmdq" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.024312 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b9d5ea3-e589-4578-b37b-59e1690b4d34-ovsdbserver-sb\") pod \"dnsmasq-dns-d78ff46f5-xfmdq\" (UID: \"8b9d5ea3-e589-4578-b37b-59e1690b4d34\") " pod="openstack/dnsmasq-dns-d78ff46f5-xfmdq" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.024372 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b9d5ea3-e589-4578-b37b-59e1690b4d34-config\") pod \"dnsmasq-dns-d78ff46f5-xfmdq\" (UID: \"8b9d5ea3-e589-4578-b37b-59e1690b4d34\") " pod="openstack/dnsmasq-dns-d78ff46f5-xfmdq" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.024413 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b9d5ea3-e589-4578-b37b-59e1690b4d34-dns-svc\") pod \"dnsmasq-dns-d78ff46f5-xfmdq\" (UID: \"8b9d5ea3-e589-4578-b37b-59e1690b4d34\") " pod="openstack/dnsmasq-dns-d78ff46f5-xfmdq" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.024441 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56pt2\" (UniqueName: 
\"kubernetes.io/projected/8b9d5ea3-e589-4578-b37b-59e1690b4d34-kube-api-access-56pt2\") pod \"dnsmasq-dns-d78ff46f5-xfmdq\" (UID: \"8b9d5ea3-e589-4578-b37b-59e1690b4d34\") " pod="openstack/dnsmasq-dns-d78ff46f5-xfmdq" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.026741 4681 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e22560a2-a6bf-4b36-ad91-e076ad9d5af1-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.049598 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.117341 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"550d99c6-05a8-4019-b949-d8e57a7fefc5\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.119105 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e22560a2-a6bf-4b36-ad91-e076ad9d5af1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e22560a2-a6bf-4b36-ad91-e076ad9d5af1" (UID: "e22560a2-a6bf-4b36-ad91-e076ad9d5af1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.135952 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b9d5ea3-e589-4578-b37b-59e1690b4d34-config\") pod \"dnsmasq-dns-d78ff46f5-xfmdq\" (UID: \"8b9d5ea3-e589-4578-b37b-59e1690b4d34\") " pod="openstack/dnsmasq-dns-d78ff46f5-xfmdq" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.136011 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.136033 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2f274b0-10d6-4bbb-bb77-882ad008b40e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.136065 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b9d5ea3-e589-4578-b37b-59e1690b4d34-dns-svc\") pod \"dnsmasq-dns-d78ff46f5-xfmdq\" (UID: \"8b9d5ea3-e589-4578-b37b-59e1690b4d34\") " pod="openstack/dnsmasq-dns-d78ff46f5-xfmdq" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.136084 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56pt2\" (UniqueName: \"kubernetes.io/projected/8b9d5ea3-e589-4578-b37b-59e1690b4d34-kube-api-access-56pt2\") pod \"dnsmasq-dns-d78ff46f5-xfmdq\" (UID: \"8b9d5ea3-e589-4578-b37b-59e1690b4d34\") " pod="openstack/dnsmasq-dns-d78ff46f5-xfmdq" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.136209 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a2f274b0-10d6-4bbb-bb77-882ad008b40e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.136231 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2f274b0-10d6-4bbb-bb77-882ad008b40e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.136264 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2f274b0-10d6-4bbb-bb77-882ad008b40e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.136300 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2f274b0-10d6-4bbb-bb77-882ad008b40e-logs\") pod \"glance-default-internal-api-0\" (UID: \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.136341 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b9d5ea3-e589-4578-b37b-59e1690b4d34-ovsdbserver-nb\") pod \"dnsmasq-dns-d78ff46f5-xfmdq\" (UID: \"8b9d5ea3-e589-4578-b37b-59e1690b4d34\") " pod="openstack/dnsmasq-dns-d78ff46f5-xfmdq" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.136366 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2f274b0-10d6-4bbb-bb77-882ad008b40e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.136395 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqtj9\" (UniqueName: \"kubernetes.io/projected/a2f274b0-10d6-4bbb-bb77-882ad008b40e-kube-api-access-zqtj9\") pod \"glance-default-internal-api-0\" (UID: \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.136444 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8b9d5ea3-e589-4578-b37b-59e1690b4d34-dns-swift-storage-0\") pod \"dnsmasq-dns-d78ff46f5-xfmdq\" (UID: \"8b9d5ea3-e589-4578-b37b-59e1690b4d34\") " pod="openstack/dnsmasq-dns-d78ff46f5-xfmdq" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.137236 4681 scope.go:117] "RemoveContainer" containerID="23165cff6a447f50443741e56c43ab4dedf68912a1d4e2bec0e2d3b0c2510dd7" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.140142 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b9d5ea3-e589-4578-b37b-59e1690b4d34-ovsdbserver-sb\") pod \"dnsmasq-dns-d78ff46f5-xfmdq\" (UID: 
\"8b9d5ea3-e589-4578-b37b-59e1690b4d34\") " pod="openstack/dnsmasq-dns-d78ff46f5-xfmdq" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.141091 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b9d5ea3-e589-4578-b37b-59e1690b4d34-ovsdbserver-sb\") pod \"dnsmasq-dns-d78ff46f5-xfmdq\" (UID: \"8b9d5ea3-e589-4578-b37b-59e1690b4d34\") " pod="openstack/dnsmasq-dns-d78ff46f5-xfmdq" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.146046 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b9d5ea3-e589-4578-b37b-59e1690b4d34-ovsdbserver-nb\") pod \"dnsmasq-dns-d78ff46f5-xfmdq\" (UID: \"8b9d5ea3-e589-4578-b37b-59e1690b4d34\") " pod="openstack/dnsmasq-dns-d78ff46f5-xfmdq" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.146780 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b9d5ea3-e589-4578-b37b-59e1690b4d34-config\") pod \"dnsmasq-dns-d78ff46f5-xfmdq\" (UID: \"8b9d5ea3-e589-4578-b37b-59e1690b4d34\") " pod="openstack/dnsmasq-dns-d78ff46f5-xfmdq" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.146815 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e22560a2-a6bf-4b36-ad91-e076ad9d5af1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.147302 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b9d5ea3-e589-4578-b37b-59e1690b4d34-dns-svc\") pod \"dnsmasq-dns-d78ff46f5-xfmdq\" (UID: \"8b9d5ea3-e589-4578-b37b-59e1690b4d34\") " pod="openstack/dnsmasq-dns-d78ff46f5-xfmdq" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.147686 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8b9d5ea3-e589-4578-b37b-59e1690b4d34-dns-swift-storage-0\") pod \"dnsmasq-dns-d78ff46f5-xfmdq\" (UID: \"8b9d5ea3-e589-4578-b37b-59e1690b4d34\") " pod="openstack/dnsmasq-dns-d78ff46f5-xfmdq" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.197113 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e22560a2-a6bf-4b36-ad91-e076ad9d5af1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e22560a2-a6bf-4b36-ad91-e076ad9d5af1" (UID: "e22560a2-a6bf-4b36-ad91-e076ad9d5af1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.197274 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56pt2\" (UniqueName: \"kubernetes.io/projected/8b9d5ea3-e589-4578-b37b-59e1690b4d34-kube-api-access-56pt2\") pod \"dnsmasq-dns-d78ff46f5-xfmdq\" (UID: \"8b9d5ea3-e589-4578-b37b-59e1690b4d34\") " pod="openstack/dnsmasq-dns-d78ff46f5-xfmdq" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.206613 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e22560a2-a6bf-4b36-ad91-e076ad9d5af1-config" (OuterVolumeSpecName: "config") pod "e22560a2-a6bf-4b36-ad91-e076ad9d5af1" (UID: "e22560a2-a6bf-4b36-ad91-e076ad9d5af1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.212860 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e22560a2-a6bf-4b36-ad91-e076ad9d5af1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e22560a2-a6bf-4b36-ad91-e076ad9d5af1" (UID: "e22560a2-a6bf-4b36-ad91-e076ad9d5af1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.230767 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b54fd9f79-p7jnd"] Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.249319 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.249355 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2f274b0-10d6-4bbb-bb77-882ad008b40e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.249405 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a2f274b0-10d6-4bbb-bb77-882ad008b40e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.249421 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2f274b0-10d6-4bbb-bb77-882ad008b40e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.249440 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2f274b0-10d6-4bbb-bb77-882ad008b40e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.249616 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2f274b0-10d6-4bbb-bb77-882ad008b40e-logs\") pod \"glance-default-internal-api-0\" (UID: \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.249652 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2f274b0-10d6-4bbb-bb77-882ad008b40e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.249680 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqtj9\" (UniqueName: 
\"kubernetes.io/projected/a2f274b0-10d6-4bbb-bb77-882ad008b40e-kube-api-access-zqtj9\") pod \"glance-default-internal-api-0\" (UID: \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.249757 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e22560a2-a6bf-4b36-ad91-e076ad9d5af1-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.249767 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e22560a2-a6bf-4b36-ad91-e076ad9d5af1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.249779 4681 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e22560a2-a6bf-4b36-ad91-e076ad9d5af1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.250133 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.258873 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a2f274b0-10d6-4bbb-bb77-882ad008b40e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.259312 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2f274b0-10d6-4bbb-bb77-882ad008b40e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.254856 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2f274b0-10d6-4bbb-bb77-882ad008b40e-logs\") pod \"glance-default-internal-api-0\" (UID: \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.264080 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2f274b0-10d6-4bbb-bb77-882ad008b40e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.264659 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2f274b0-10d6-4bbb-bb77-882ad008b40e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.272417 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2f274b0-10d6-4bbb-bb77-882ad008b40e-internal-tls-certs\") pod 
\"glance-default-internal-api-0\" (UID: \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.279706 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqtj9\" (UniqueName: \"kubernetes.io/projected/a2f274b0-10d6-4bbb-bb77-882ad008b40e-kube-api-access-zqtj9\") pod \"glance-default-internal-api-0\" (UID: \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.284872 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.315116 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.393149 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.404118 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d78ff46f5-xfmdq" Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.409534 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-648ff47655-tp296"] Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.419951 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-648ff47655-tp296"] Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.429567 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-fbbdq"] Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.466862 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6994f59557-zb5qf"] Nov 23 06:58:50 crc kubenswrapper[4681]: W1123 06:58:50.473627 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45b8faad_ff9c_4acb_bcf3_9b1efccbce7d.slice/crio-4eb84936d1e1012d016667b08a4189250a57ffb6a3d41449279ef1985c71dbb5 WatchSource:0}: Error finding container 4eb84936d1e1012d016667b08a4189250a57ffb6a3d41449279ef1985c71dbb5: Status 404 returned error can't find the container with id 4eb84936d1e1012d016667b08a4189250a57ffb6a3d41449279ef1985c71dbb5 Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.869381 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-4gs5w"] Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.886682 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-xbhpv"] Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.898437 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-qn8qf"] Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.987446 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 06:58:50 crc kubenswrapper[4681]: I1123 06:58:50.993276 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6c5444c6b5-7cd6d"] Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.028603 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-6994f59557-zb5qf" event={"ID":"45b8faad-ff9c-4acb-bcf3-9b1efccbce7d","Type":"ContainerStarted","Data":"4eb84936d1e1012d016667b08a4189250a57ffb6a3d41449279ef1985c71dbb5"} Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.031115 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gq2qv" event={"ID":"8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c","Type":"ContainerStarted","Data":"4eb819562924a436a5ab39f168eacb4ecf88a1bec5f4d4b4f5c623c6df3e83a5"} Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.031216 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gq2qv" event={"ID":"8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c","Type":"ContainerStarted","Data":"0bd0bfdf7aaab7bbac5a913dead68629324a60bd707fad384f9f55229e97ac35"} Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.050872 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-gq2qv" podStartSLOduration=3.050860665 podStartE2EDuration="3.050860665s" podCreationTimestamp="2025-11-23 06:58:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:58:51.046554743 +0000 UTC m=+868.116063980" watchObservedRunningTime="2025-11-23 06:58:51.050860665 +0000 UTC m=+868.120369903" Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.052638 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-xbhpv" event={"ID":"4cc57e44-7957-4d3a-b9c9-2da622ea38a0","Type":"ContainerStarted","Data":"0d3c7b5fb4cdb8cd50b69b165da0743b00612c930e385e0aac977a4dd13367d3"} Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.058653 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2483649a-baa7-4c82-92d5-b3e2aff97ab2","Type":"ContainerStarted","Data":"32f56c1417d6210127e2cf39c10f743cc7ab6427cd933fde74c872b1db3e1ae0"} Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.067856 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-fbbdq" event={"ID":"00916d9f-8ce3-47d9-a32f-e2deb3514ede","Type":"ContainerStarted","Data":"a5206e169df0b5439eb3755e864da8c406e02bbc49affcc1a40636aeb5c6d317"} Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.071996 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qn8qf" event={"ID":"31fd09f2-734b-4427-8b5b-65711b24bbb5","Type":"ContainerStarted","Data":"8eb2cbd5eb4dd21a23f6354b3f6c5ac0e4abb7eb7531c4a4fbdf907c6d452f31"} Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.076614 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4gs5w" event={"ID":"d426ed81-18f9-441e-9865-b9a6d683931f","Type":"ContainerStarted","Data":"16ccf63dba1b72e27708d2c7e53ebe8a0d06980eb2c85ea18a2cce90e526d58f"} Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.077759 4681 generic.go:334] "Generic (PLEG): container finished" podID="4bca3e48-0181-45c8-a8ba-ae25e4a64db2" containerID="a8ba8680c5a5641e59bd814e31283b253e5a16d6cc968b6f893a62d8dc80647e" exitCode=0 Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.077795 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b54fd9f79-p7jnd" event={"ID":"4bca3e48-0181-45c8-a8ba-ae25e4a64db2","Type":"ContainerDied","Data":"a8ba8680c5a5641e59bd814e31283b253e5a16d6cc968b6f893a62d8dc80647e"} Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.077820 4681 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b54fd9f79-p7jnd" event={"ID":"4bca3e48-0181-45c8-a8ba-ae25e4a64db2","Type":"ContainerStarted","Data":"ee1757f8cf777e901d35a194065097e315c050473c14c7f8365a4b75ee89af2f"} Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.104686 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-frn6w"] Nov 23 06:58:51 crc kubenswrapper[4681]: W1123 06:58:51.144734 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95e9b025_0fa7_4a41_a18c_e4f078b82c43.slice/crio-f3950d753520416d0adbfcd1dc0bbf7e068d5ffea5ef72d1e591d0ea0e41476e WatchSource:0}: Error finding container f3950d753520416d0adbfcd1dc0bbf7e068d5ffea5ef72d1e591d0ea0e41476e: Status 404 returned error can't find the container with id f3950d753520416d0adbfcd1dc0bbf7e068d5ffea5ef72d1e591d0ea0e41476e Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.267235 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e22560a2-a6bf-4b36-ad91-e076ad9d5af1" path="/var/lib/kubelet/pods/e22560a2-a6bf-4b36-ad91-e076ad9d5af1/volumes" Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.283136 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.305245 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d78ff46f5-xfmdq"] Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.437304 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.586292 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b54fd9f79-p7jnd" Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.609160 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bca3e48-0181-45c8-a8ba-ae25e4a64db2-dns-svc\") pod \"4bca3e48-0181-45c8-a8ba-ae25e4a64db2\" (UID: \"4bca3e48-0181-45c8-a8ba-ae25e4a64db2\") " Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.609320 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bca3e48-0181-45c8-a8ba-ae25e4a64db2-config\") pod \"4bca3e48-0181-45c8-a8ba-ae25e4a64db2\" (UID: \"4bca3e48-0181-45c8-a8ba-ae25e4a64db2\") " Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.609347 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bca3e48-0181-45c8-a8ba-ae25e4a64db2-ovsdbserver-sb\") pod \"4bca3e48-0181-45c8-a8ba-ae25e4a64db2\" (UID: \"4bca3e48-0181-45c8-a8ba-ae25e4a64db2\") " Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.609509 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwv82\" (UniqueName: \"kubernetes.io/projected/4bca3e48-0181-45c8-a8ba-ae25e4a64db2-kube-api-access-fwv82\") pod \"4bca3e48-0181-45c8-a8ba-ae25e4a64db2\" (UID: \"4bca3e48-0181-45c8-a8ba-ae25e4a64db2\") " Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.609596 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bca3e48-0181-45c8-a8ba-ae25e4a64db2-dns-swift-storage-0\") pod \"4bca3e48-0181-45c8-a8ba-ae25e4a64db2\" (UID: \"4bca3e48-0181-45c8-a8ba-ae25e4a64db2\") " Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.609635 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bca3e48-0181-45c8-a8ba-ae25e4a64db2-ovsdbserver-nb\") pod \"4bca3e48-0181-45c8-a8ba-ae25e4a64db2\" (UID: \"4bca3e48-0181-45c8-a8ba-ae25e4a64db2\") " Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.685905 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bca3e48-0181-45c8-a8ba-ae25e4a64db2-kube-api-access-fwv82" (OuterVolumeSpecName: "kube-api-access-fwv82") pod "4bca3e48-0181-45c8-a8ba-ae25e4a64db2" (UID: "4bca3e48-0181-45c8-a8ba-ae25e4a64db2"). InnerVolumeSpecName "kube-api-access-fwv82". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.742830 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwv82\" (UniqueName: \"kubernetes.io/projected/4bca3e48-0181-45c8-a8ba-ae25e4a64db2-kube-api-access-fwv82\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.744423 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.796227 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bca3e48-0181-45c8-a8ba-ae25e4a64db2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4bca3e48-0181-45c8-a8ba-ae25e4a64db2" (UID: "4bca3e48-0181-45c8-a8ba-ae25e4a64db2"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.819645 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6994f59557-zb5qf"] Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.825040 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bca3e48-0181-45c8-a8ba-ae25e4a64db2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4bca3e48-0181-45c8-a8ba-ae25e4a64db2" (UID: "4bca3e48-0181-45c8-a8ba-ae25e4a64db2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.841065 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bca3e48-0181-45c8-a8ba-ae25e4a64db2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4bca3e48-0181-45c8-a8ba-ae25e4a64db2" (UID: "4bca3e48-0181-45c8-a8ba-ae25e4a64db2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.846432 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bca3e48-0181-45c8-a8ba-ae25e4a64db2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.849214 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bca3e48-0181-45c8-a8ba-ae25e4a64db2-config" (OuterVolumeSpecName: "config") pod "4bca3e48-0181-45c8-a8ba-ae25e4a64db2" (UID: "4bca3e48-0181-45c8-a8ba-ae25e4a64db2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.855590 4681 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bca3e48-0181-45c8-a8ba-ae25e4a64db2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.855628 4681 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bca3e48-0181-45c8-a8ba-ae25e4a64db2-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.868089 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bca3e48-0181-45c8-a8ba-ae25e4a64db2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4bca3e48-0181-45c8-a8ba-ae25e4a64db2" (UID: "4bca3e48-0181-45c8-a8ba-ae25e4a64db2"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.914786 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-845ccd5479-79qz5"] Nov 23 06:58:51 crc kubenswrapper[4681]: E1123 06:58:51.915322 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bca3e48-0181-45c8-a8ba-ae25e4a64db2" containerName="init" Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.915337 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bca3e48-0181-45c8-a8ba-ae25e4a64db2" containerName="init" Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.915529 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bca3e48-0181-45c8-a8ba-ae25e4a64db2" containerName="init" Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.920434 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-845ccd5479-79qz5" Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.940902 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-845ccd5479-79qz5"] Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.959333 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.960972 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bca3e48-0181-45c8-a8ba-ae25e4a64db2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.960988 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bca3e48-0181-45c8-a8ba-ae25e4a64db2-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:51 crc kubenswrapper[4681]: I1123 06:58:51.975359 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 06:58:52 crc kubenswrapper[4681]: I1123 06:58:52.063430 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f95ab62-e0ad-4566-bbfd-29e2ad374edf-config-data\") pod \"horizon-845ccd5479-79qz5\" (UID: \"2f95ab62-e0ad-4566-bbfd-29e2ad374edf\") " pod="openstack/horizon-845ccd5479-79qz5" Nov 23 06:58:52 crc kubenswrapper[4681]: I1123 06:58:52.063540 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2f95ab62-e0ad-4566-bbfd-29e2ad374edf-horizon-secret-key\") pod \"horizon-845ccd5479-79qz5\" (UID: \"2f95ab62-e0ad-4566-bbfd-29e2ad374edf\") " pod="openstack/horizon-845ccd5479-79qz5" Nov 23 06:58:52 crc kubenswrapper[4681]: I1123 06:58:52.063597 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4gdl\" (UniqueName: \"kubernetes.io/projected/2f95ab62-e0ad-4566-bbfd-29e2ad374edf-kube-api-access-d4gdl\") pod \"horizon-845ccd5479-79qz5\" (UID: \"2f95ab62-e0ad-4566-bbfd-29e2ad374edf\") " pod="openstack/horizon-845ccd5479-79qz5" Nov 23 06:58:52 crc kubenswrapper[4681]: I1123 06:58:52.063612 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f95ab62-e0ad-4566-bbfd-29e2ad374edf-logs\") pod \"horizon-845ccd5479-79qz5\" (UID: \"2f95ab62-e0ad-4566-bbfd-29e2ad374edf\") " pod="openstack/horizon-845ccd5479-79qz5" Nov 23 
06:58:52 crc kubenswrapper[4681]: I1123 06:58:52.063666 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f95ab62-e0ad-4566-bbfd-29e2ad374edf-scripts\") pod \"horizon-845ccd5479-79qz5\" (UID: \"2f95ab62-e0ad-4566-bbfd-29e2ad374edf\") " pod="openstack/horizon-845ccd5479-79qz5" Nov 23 06:58:52 crc kubenswrapper[4681]: I1123 06:58:52.123722 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b54fd9f79-p7jnd" event={"ID":"4bca3e48-0181-45c8-a8ba-ae25e4a64db2","Type":"ContainerDied","Data":"ee1757f8cf777e901d35a194065097e315c050473c14c7f8365a4b75ee89af2f"} Nov 23 06:58:52 crc kubenswrapper[4681]: I1123 06:58:52.123770 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b54fd9f79-p7jnd" Nov 23 06:58:52 crc kubenswrapper[4681]: I1123 06:58:52.123801 4681 scope.go:117] "RemoveContainer" containerID="a8ba8680c5a5641e59bd814e31283b253e5a16d6cc968b6f893a62d8dc80647e" Nov 23 06:58:52 crc kubenswrapper[4681]: I1123 06:58:52.135697 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c5444c6b5-7cd6d" event={"ID":"203e0f9e-791d-4b8e-9521-b7b334fcacf6","Type":"ContainerStarted","Data":"fcc8fe4e140585eedac6743672fc9be32f8092fbbeb64d793e13d4db5da135e8"} Nov 23 06:58:52 crc kubenswrapper[4681]: I1123 06:58:52.155752 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a2f274b0-10d6-4bbb-bb77-882ad008b40e","Type":"ContainerStarted","Data":"93743c76a626d0a7e31e061808341deda1ee248af6e6b08e0b748a0aad075d29"} Nov 23 06:58:52 crc kubenswrapper[4681]: I1123 06:58:52.157452 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-frn6w" event={"ID":"95e9b025-0fa7-4a41-a18c-e4f078b82c43","Type":"ContainerStarted","Data":"f3950d753520416d0adbfcd1dc0bbf7e068d5ffea5ef72d1e591d0ea0e41476e"} Nov 23 06:58:52 crc kubenswrapper[4681]: I1123 06:58:52.165333 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b54fd9f79-p7jnd"] Nov 23 06:58:52 crc kubenswrapper[4681]: I1123 06:58:52.165483 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4gdl\" (UniqueName: \"kubernetes.io/projected/2f95ab62-e0ad-4566-bbfd-29e2ad374edf-kube-api-access-d4gdl\") pod \"horizon-845ccd5479-79qz5\" (UID: \"2f95ab62-e0ad-4566-bbfd-29e2ad374edf\") " pod="openstack/horizon-845ccd5479-79qz5" Nov 23 06:58:52 crc kubenswrapper[4681]: I1123 06:58:52.165522 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f95ab62-e0ad-4566-bbfd-29e2ad374edf-logs\") pod \"horizon-845ccd5479-79qz5\" (UID: \"2f95ab62-e0ad-4566-bbfd-29e2ad374edf\") " pod="openstack/horizon-845ccd5479-79qz5" Nov 23 06:58:52 crc kubenswrapper[4681]: I1123 06:58:52.165611 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f95ab62-e0ad-4566-bbfd-29e2ad374edf-scripts\") pod \"horizon-845ccd5479-79qz5\" (UID: \"2f95ab62-e0ad-4566-bbfd-29e2ad374edf\") " pod="openstack/horizon-845ccd5479-79qz5" Nov 23 06:58:52 crc kubenswrapper[4681]: I1123 06:58:52.165648 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f95ab62-e0ad-4566-bbfd-29e2ad374edf-config-data\") pod 
\"horizon-845ccd5479-79qz5\" (UID: \"2f95ab62-e0ad-4566-bbfd-29e2ad374edf\") " pod="openstack/horizon-845ccd5479-79qz5" Nov 23 06:58:52 crc kubenswrapper[4681]: I1123 06:58:52.165739 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2f95ab62-e0ad-4566-bbfd-29e2ad374edf-horizon-secret-key\") pod \"horizon-845ccd5479-79qz5\" (UID: \"2f95ab62-e0ad-4566-bbfd-29e2ad374edf\") " pod="openstack/horizon-845ccd5479-79qz5" Nov 23 06:58:52 crc kubenswrapper[4681]: I1123 06:58:52.170774 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f95ab62-e0ad-4566-bbfd-29e2ad374edf-logs\") pod \"horizon-845ccd5479-79qz5\" (UID: \"2f95ab62-e0ad-4566-bbfd-29e2ad374edf\") " pod="openstack/horizon-845ccd5479-79qz5" Nov 23 06:58:52 crc kubenswrapper[4681]: I1123 06:58:52.170983 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f95ab62-e0ad-4566-bbfd-29e2ad374edf-scripts\") pod \"horizon-845ccd5479-79qz5\" (UID: \"2f95ab62-e0ad-4566-bbfd-29e2ad374edf\") " pod="openstack/horizon-845ccd5479-79qz5" Nov 23 06:58:52 crc kubenswrapper[4681]: I1123 06:58:52.172000 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2f95ab62-e0ad-4566-bbfd-29e2ad374edf-horizon-secret-key\") pod \"horizon-845ccd5479-79qz5\" (UID: \"2f95ab62-e0ad-4566-bbfd-29e2ad374edf\") " pod="openstack/horizon-845ccd5479-79qz5" Nov 23 06:58:52 crc kubenswrapper[4681]: I1123 06:58:52.172751 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f95ab62-e0ad-4566-bbfd-29e2ad374edf-config-data\") pod \"horizon-845ccd5479-79qz5\" (UID: \"2f95ab62-e0ad-4566-bbfd-29e2ad374edf\") " pod="openstack/horizon-845ccd5479-79qz5" Nov 23 06:58:52 crc kubenswrapper[4681]: I1123 06:58:52.173413 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d78ff46f5-xfmdq" event={"ID":"8b9d5ea3-e589-4578-b37b-59e1690b4d34","Type":"ContainerStarted","Data":"833f645e7cbf56efda1024f8b536df1f35393fcddcc6916607271a6b8be465de"} Nov 23 06:58:52 crc kubenswrapper[4681]: I1123 06:58:52.188766 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b54fd9f79-p7jnd"] Nov 23 06:58:52 crc kubenswrapper[4681]: I1123 06:58:52.195162 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4gdl\" (UniqueName: \"kubernetes.io/projected/2f95ab62-e0ad-4566-bbfd-29e2ad374edf-kube-api-access-d4gdl\") pod \"horizon-845ccd5479-79qz5\" (UID: \"2f95ab62-e0ad-4566-bbfd-29e2ad374edf\") " pod="openstack/horizon-845ccd5479-79qz5" Nov 23 06:58:52 crc kubenswrapper[4681]: I1123 06:58:52.272008 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-845ccd5479-79qz5" Nov 23 06:58:52 crc kubenswrapper[4681]: I1123 06:58:52.275782 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"550d99c6-05a8-4019-b949-d8e57a7fefc5","Type":"ContainerStarted","Data":"2d2429ce23a1ad526ddbd9615496236b76fd883a15eace3dade86c23693d04b7"} Nov 23 06:58:52 crc kubenswrapper[4681]: I1123 06:58:52.280542 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-xbhpv" event={"ID":"4cc57e44-7957-4d3a-b9c9-2da622ea38a0","Type":"ContainerStarted","Data":"3295b4bd261ee97327198f543bfa8e15d7d22cf8363d391dcfb4f63e8553275a"} Nov 23 06:58:52 crc kubenswrapper[4681]: I1123 06:58:52.302802 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-xbhpv" podStartSLOduration=4.302750874 podStartE2EDuration="4.302750874s" podCreationTimestamp="2025-11-23 06:58:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:58:52.291694339 +0000 UTC m=+869.361203577" watchObservedRunningTime="2025-11-23 06:58:52.302750874 +0000 UTC m=+869.372260111" Nov 23 06:58:52 crc kubenswrapper[4681]: I1123 06:58:52.937439 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-845ccd5479-79qz5"] Nov 23 06:58:52 crc kubenswrapper[4681]: W1123 06:58:52.973880 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f95ab62_e0ad_4566_bbfd_29e2ad374edf.slice/crio-5cb4a2c6f0570027854057f2b06d2920ca8e624ffa1718599aff5111048ed630 WatchSource:0}: Error finding container 5cb4a2c6f0570027854057f2b06d2920ca8e624ffa1718599aff5111048ed630: Status 404 returned error can't find the container with id 5cb4a2c6f0570027854057f2b06d2920ca8e624ffa1718599aff5111048ed630 Nov 23 06:58:53 crc kubenswrapper[4681]: I1123 06:58:53.269580 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bca3e48-0181-45c8-a8ba-ae25e4a64db2" path="/var/lib/kubelet/pods/4bca3e48-0181-45c8-a8ba-ae25e4a64db2/volumes" Nov 23 06:58:53 crc kubenswrapper[4681]: I1123 06:58:53.315378 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-845ccd5479-79qz5" event={"ID":"2f95ab62-e0ad-4566-bbfd-29e2ad374edf","Type":"ContainerStarted","Data":"5cb4a2c6f0570027854057f2b06d2920ca8e624ffa1718599aff5111048ed630"} Nov 23 06:58:53 crc kubenswrapper[4681]: I1123 06:58:53.353388 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a2f274b0-10d6-4bbb-bb77-882ad008b40e","Type":"ContainerStarted","Data":"c88980b76a1eef7772698b29fb054eea468341260222451c6805e1d0f4a9313d"} Nov 23 06:58:53 crc kubenswrapper[4681]: I1123 06:58:53.365572 4681 generic.go:334] "Generic (PLEG): container finished" podID="8b9d5ea3-e589-4578-b37b-59e1690b4d34" containerID="428e1ede2e12cbecdcf00c415c8f48c73758ae1595b0268141534cf1479164ab" exitCode=0 Nov 23 06:58:53 crc kubenswrapper[4681]: I1123 06:58:53.366372 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d78ff46f5-xfmdq" event={"ID":"8b9d5ea3-e589-4578-b37b-59e1690b4d34","Type":"ContainerDied","Data":"428e1ede2e12cbecdcf00c415c8f48c73758ae1595b0268141534cf1479164ab"} Nov 23 06:58:53 crc kubenswrapper[4681]: I1123 06:58:53.366409 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-d78ff46f5-xfmdq" Nov 23 06:58:53 crc kubenswrapper[4681]: I1123 06:58:53.366421 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d78ff46f5-xfmdq" event={"ID":"8b9d5ea3-e589-4578-b37b-59e1690b4d34","Type":"ContainerStarted","Data":"8fac2f4a9e7d4a712de5b456d2b20aeee1c3ee8e0374bada957a2bb59e642819"} Nov 23 06:58:53 crc kubenswrapper[4681]: I1123 06:58:53.474971 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d78ff46f5-xfmdq" podStartSLOduration=4.474804276 podStartE2EDuration="4.474804276s" podCreationTimestamp="2025-11-23 06:58:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:58:53.46860781 +0000 UTC m=+870.538117047" watchObservedRunningTime="2025-11-23 06:58:53.474804276 +0000 UTC m=+870.544313513" Nov 23 06:58:54 crc kubenswrapper[4681]: I1123 06:58:54.391948 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"550d99c6-05a8-4019-b949-d8e57a7fefc5","Type":"ContainerStarted","Data":"8727c34f8e59ad582e61222fe6298041d0f80adc9294caa4a7222c7064888440"} Nov 23 06:58:54 crc kubenswrapper[4681]: I1123 06:58:54.414315 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-648ff47655-tp296" podUID="e22560a2-a6bf-4b36-ad91-e076ad9d5af1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.137:5353: i/o timeout" Nov 23 06:58:55 crc kubenswrapper[4681]: I1123 06:58:55.409319 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"550d99c6-05a8-4019-b949-d8e57a7fefc5","Type":"ContainerStarted","Data":"dad83ca96951caed1358af3b8ae49e4010bf481155f4ebcc1a9ae72e8d87d851"} Nov 23 06:58:55 crc kubenswrapper[4681]: I1123 06:58:55.409413 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="550d99c6-05a8-4019-b949-d8e57a7fefc5" containerName="glance-log" containerID="cri-o://8727c34f8e59ad582e61222fe6298041d0f80adc9294caa4a7222c7064888440" gracePeriod=30 Nov 23 06:58:55 crc kubenswrapper[4681]: I1123 06:58:55.409814 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="550d99c6-05a8-4019-b949-d8e57a7fefc5" containerName="glance-httpd" containerID="cri-o://dad83ca96951caed1358af3b8ae49e4010bf481155f4ebcc1a9ae72e8d87d851" gracePeriod=30 Nov 23 06:58:55 crc kubenswrapper[4681]: I1123 06:58:55.436604 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.436585625 podStartE2EDuration="6.436585625s" podCreationTimestamp="2025-11-23 06:58:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:58:55.427721282 +0000 UTC m=+872.497230519" watchObservedRunningTime="2025-11-23 06:58:55.436585625 +0000 UTC m=+872.506094862" Nov 23 06:58:55 crc kubenswrapper[4681]: I1123 06:58:55.439982 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gq2qv" event={"ID":"8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c","Type":"ContainerDied","Data":"4eb819562924a436a5ab39f168eacb4ecf88a1bec5f4d4b4f5c623c6df3e83a5"} Nov 23 06:58:55 crc kubenswrapper[4681]: I1123 06:58:55.440114 4681 generic.go:334] 
"Generic (PLEG): container finished" podID="8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c" containerID="4eb819562924a436a5ab39f168eacb4ecf88a1bec5f4d4b4f5c623c6df3e83a5" exitCode=0 Nov 23 06:58:55 crc kubenswrapper[4681]: I1123 06:58:55.444552 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a2f274b0-10d6-4bbb-bb77-882ad008b40e","Type":"ContainerStarted","Data":"da00007d6618b2272ce96b3e82bd0a97423326c7b1890c79a7e489e978a75f0f"} Nov 23 06:58:55 crc kubenswrapper[4681]: I1123 06:58:55.444675 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a2f274b0-10d6-4bbb-bb77-882ad008b40e" containerName="glance-log" containerID="cri-o://c88980b76a1eef7772698b29fb054eea468341260222451c6805e1d0f4a9313d" gracePeriod=30 Nov 23 06:58:55 crc kubenswrapper[4681]: I1123 06:58:55.444822 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a2f274b0-10d6-4bbb-bb77-882ad008b40e" containerName="glance-httpd" containerID="cri-o://da00007d6618b2272ce96b3e82bd0a97423326c7b1890c79a7e489e978a75f0f" gracePeriod=30 Nov 23 06:58:55 crc kubenswrapper[4681]: I1123 06:58:55.492859 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.492823523 podStartE2EDuration="6.492823523s" podCreationTimestamp="2025-11-23 06:58:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:58:55.482258736 +0000 UTC m=+872.551767973" watchObservedRunningTime="2025-11-23 06:58:55.492823523 +0000 UTC m=+872.562332759" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.223157 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.326468 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgl74\" (UniqueName: \"kubernetes.io/projected/550d99c6-05a8-4019-b949-d8e57a7fefc5-kube-api-access-hgl74\") pod \"550d99c6-05a8-4019-b949-d8e57a7fefc5\" (UID: \"550d99c6-05a8-4019-b949-d8e57a7fefc5\") " Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.326542 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/550d99c6-05a8-4019-b949-d8e57a7fefc5-combined-ca-bundle\") pod \"550d99c6-05a8-4019-b949-d8e57a7fefc5\" (UID: \"550d99c6-05a8-4019-b949-d8e57a7fefc5\") " Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.326595 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/550d99c6-05a8-4019-b949-d8e57a7fefc5-config-data\") pod \"550d99c6-05a8-4019-b949-d8e57a7fefc5\" (UID: \"550d99c6-05a8-4019-b949-d8e57a7fefc5\") " Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.326649 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/550d99c6-05a8-4019-b949-d8e57a7fefc5-logs\") pod \"550d99c6-05a8-4019-b949-d8e57a7fefc5\" (UID: \"550d99c6-05a8-4019-b949-d8e57a7fefc5\") " Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.326670 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/550d99c6-05a8-4019-b949-d8e57a7fefc5-scripts\") pod \"550d99c6-05a8-4019-b949-d8e57a7fefc5\" (UID: \"550d99c6-05a8-4019-b949-d8e57a7fefc5\") " Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.327038 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"550d99c6-05a8-4019-b949-d8e57a7fefc5\" (UID: \"550d99c6-05a8-4019-b949-d8e57a7fefc5\") " Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.327187 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/550d99c6-05a8-4019-b949-d8e57a7fefc5-public-tls-certs\") pod \"550d99c6-05a8-4019-b949-d8e57a7fefc5\" (UID: \"550d99c6-05a8-4019-b949-d8e57a7fefc5\") " Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.327262 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/550d99c6-05a8-4019-b949-d8e57a7fefc5-httpd-run\") pod \"550d99c6-05a8-4019-b949-d8e57a7fefc5\" (UID: \"550d99c6-05a8-4019-b949-d8e57a7fefc5\") " Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.329953 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/550d99c6-05a8-4019-b949-d8e57a7fefc5-logs" (OuterVolumeSpecName: "logs") pod "550d99c6-05a8-4019-b949-d8e57a7fefc5" (UID: "550d99c6-05a8-4019-b949-d8e57a7fefc5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.332085 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/550d99c6-05a8-4019-b949-d8e57a7fefc5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "550d99c6-05a8-4019-b949-d8e57a7fefc5" (UID: "550d99c6-05a8-4019-b949-d8e57a7fefc5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.336540 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "550d99c6-05a8-4019-b949-d8e57a7fefc5" (UID: "550d99c6-05a8-4019-b949-d8e57a7fefc5"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.339734 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/550d99c6-05a8-4019-b949-d8e57a7fefc5-scripts" (OuterVolumeSpecName: "scripts") pod "550d99c6-05a8-4019-b949-d8e57a7fefc5" (UID: "550d99c6-05a8-4019-b949-d8e57a7fefc5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.356253 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/550d99c6-05a8-4019-b949-d8e57a7fefc5-kube-api-access-hgl74" (OuterVolumeSpecName: "kube-api-access-hgl74") pod "550d99c6-05a8-4019-b949-d8e57a7fefc5" (UID: "550d99c6-05a8-4019-b949-d8e57a7fefc5"). InnerVolumeSpecName "kube-api-access-hgl74". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.392718 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/550d99c6-05a8-4019-b949-d8e57a7fefc5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "550d99c6-05a8-4019-b949-d8e57a7fefc5" (UID: "550d99c6-05a8-4019-b949-d8e57a7fefc5"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.404621 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/550d99c6-05a8-4019-b949-d8e57a7fefc5-config-data" (OuterVolumeSpecName: "config-data") pod "550d99c6-05a8-4019-b949-d8e57a7fefc5" (UID: "550d99c6-05a8-4019-b949-d8e57a7fefc5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.423263 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/550d99c6-05a8-4019-b949-d8e57a7fefc5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "550d99c6-05a8-4019-b949-d8e57a7fefc5" (UID: "550d99c6-05a8-4019-b949-d8e57a7fefc5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.451938 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgl74\" (UniqueName: \"kubernetes.io/projected/550d99c6-05a8-4019-b949-d8e57a7fefc5-kube-api-access-hgl74\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.451975 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/550d99c6-05a8-4019-b949-d8e57a7fefc5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.451988 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/550d99c6-05a8-4019-b949-d8e57a7fefc5-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.451997 4681 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/550d99c6-05a8-4019-b949-d8e57a7fefc5-logs\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.452009 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/550d99c6-05a8-4019-b949-d8e57a7fefc5-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.452033 4681 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.452043 4681 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/550d99c6-05a8-4019-b949-d8e57a7fefc5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.452055 4681 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/550d99c6-05a8-4019-b949-d8e57a7fefc5-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.494430 4681 generic.go:334] "Generic (PLEG): container finished" podID="a2f274b0-10d6-4bbb-bb77-882ad008b40e" containerID="da00007d6618b2272ce96b3e82bd0a97423326c7b1890c79a7e489e978a75f0f" exitCode=0 Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.494488 4681 generic.go:334] "Generic (PLEG): container finished" podID="a2f274b0-10d6-4bbb-bb77-882ad008b40e" containerID="c88980b76a1eef7772698b29fb054eea468341260222451c6805e1d0f4a9313d" exitCode=143 Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.494562 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a2f274b0-10d6-4bbb-bb77-882ad008b40e","Type":"ContainerDied","Data":"da00007d6618b2272ce96b3e82bd0a97423326c7b1890c79a7e489e978a75f0f"} Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.494603 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a2f274b0-10d6-4bbb-bb77-882ad008b40e","Type":"ContainerDied","Data":"c88980b76a1eef7772698b29fb054eea468341260222451c6805e1d0f4a9313d"} Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.515530 4681 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 23 06:58:56 crc 
kubenswrapper[4681]: I1123 06:58:56.522588 4681 generic.go:334] "Generic (PLEG): container finished" podID="550d99c6-05a8-4019-b949-d8e57a7fefc5" containerID="dad83ca96951caed1358af3b8ae49e4010bf481155f4ebcc1a9ae72e8d87d851" exitCode=143 Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.522619 4681 generic.go:334] "Generic (PLEG): container finished" podID="550d99c6-05a8-4019-b949-d8e57a7fefc5" containerID="8727c34f8e59ad582e61222fe6298041d0f80adc9294caa4a7222c7064888440" exitCode=143 Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.522828 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.522985 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"550d99c6-05a8-4019-b949-d8e57a7fefc5","Type":"ContainerDied","Data":"dad83ca96951caed1358af3b8ae49e4010bf481155f4ebcc1a9ae72e8d87d851"} Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.523047 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"550d99c6-05a8-4019-b949-d8e57a7fefc5","Type":"ContainerDied","Data":"8727c34f8e59ad582e61222fe6298041d0f80adc9294caa4a7222c7064888440"} Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.523060 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"550d99c6-05a8-4019-b949-d8e57a7fefc5","Type":"ContainerDied","Data":"2d2429ce23a1ad526ddbd9615496236b76fd883a15eace3dade86c23693d04b7"} Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.523082 4681 scope.go:117] "RemoveContainer" containerID="dad83ca96951caed1358af3b8ae49e4010bf481155f4ebcc1a9ae72e8d87d851" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.553452 4681 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.628604 4681 scope.go:117] "RemoveContainer" containerID="8727c34f8e59ad582e61222fe6298041d0f80adc9294caa4a7222c7064888440" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.650373 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.676323 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.700690 4681 scope.go:117] "RemoveContainer" containerID="dad83ca96951caed1358af3b8ae49e4010bf481155f4ebcc1a9ae72e8d87d851" Nov 23 06:58:56 crc kubenswrapper[4681]: E1123 06:58:56.701947 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dad83ca96951caed1358af3b8ae49e4010bf481155f4ebcc1a9ae72e8d87d851\": container with ID starting with dad83ca96951caed1358af3b8ae49e4010bf481155f4ebcc1a9ae72e8d87d851 not found: ID does not exist" containerID="dad83ca96951caed1358af3b8ae49e4010bf481155f4ebcc1a9ae72e8d87d851" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.701987 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dad83ca96951caed1358af3b8ae49e4010bf481155f4ebcc1a9ae72e8d87d851"} err="failed to get container status \"dad83ca96951caed1358af3b8ae49e4010bf481155f4ebcc1a9ae72e8d87d851\": rpc error: code 
= NotFound desc = could not find container \"dad83ca96951caed1358af3b8ae49e4010bf481155f4ebcc1a9ae72e8d87d851\": container with ID starting with dad83ca96951caed1358af3b8ae49e4010bf481155f4ebcc1a9ae72e8d87d851 not found: ID does not exist" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.702014 4681 scope.go:117] "RemoveContainer" containerID="8727c34f8e59ad582e61222fe6298041d0f80adc9294caa4a7222c7064888440" Nov 23 06:58:56 crc kubenswrapper[4681]: E1123 06:58:56.703448 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8727c34f8e59ad582e61222fe6298041d0f80adc9294caa4a7222c7064888440\": container with ID starting with 8727c34f8e59ad582e61222fe6298041d0f80adc9294caa4a7222c7064888440 not found: ID does not exist" containerID="8727c34f8e59ad582e61222fe6298041d0f80adc9294caa4a7222c7064888440" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.703491 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8727c34f8e59ad582e61222fe6298041d0f80adc9294caa4a7222c7064888440"} err="failed to get container status \"8727c34f8e59ad582e61222fe6298041d0f80adc9294caa4a7222c7064888440\": rpc error: code = NotFound desc = could not find container \"8727c34f8e59ad582e61222fe6298041d0f80adc9294caa4a7222c7064888440\": container with ID starting with 8727c34f8e59ad582e61222fe6298041d0f80adc9294caa4a7222c7064888440 not found: ID does not exist" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.703508 4681 scope.go:117] "RemoveContainer" containerID="dad83ca96951caed1358af3b8ae49e4010bf481155f4ebcc1a9ae72e8d87d851" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.704241 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dad83ca96951caed1358af3b8ae49e4010bf481155f4ebcc1a9ae72e8d87d851"} err="failed to get container status \"dad83ca96951caed1358af3b8ae49e4010bf481155f4ebcc1a9ae72e8d87d851\": rpc error: code = NotFound desc = could not find container \"dad83ca96951caed1358af3b8ae49e4010bf481155f4ebcc1a9ae72e8d87d851\": container with ID starting with dad83ca96951caed1358af3b8ae49e4010bf481155f4ebcc1a9ae72e8d87d851 not found: ID does not exist" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.704277 4681 scope.go:117] "RemoveContainer" containerID="8727c34f8e59ad582e61222fe6298041d0f80adc9294caa4a7222c7064888440" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.704599 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8727c34f8e59ad582e61222fe6298041d0f80adc9294caa4a7222c7064888440"} err="failed to get container status \"8727c34f8e59ad582e61222fe6298041d0f80adc9294caa4a7222c7064888440\": rpc error: code = NotFound desc = could not find container \"8727c34f8e59ad582e61222fe6298041d0f80adc9294caa4a7222c7064888440\": container with ID starting with 8727c34f8e59ad582e61222fe6298041d0f80adc9294caa4a7222c7064888440 not found: ID does not exist" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.730540 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 06:58:56 crc kubenswrapper[4681]: E1123 06:58:56.730970 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="550d99c6-05a8-4019-b949-d8e57a7fefc5" containerName="glance-httpd" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.730988 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="550d99c6-05a8-4019-b949-d8e57a7fefc5" 
containerName="glance-httpd" Nov 23 06:58:56 crc kubenswrapper[4681]: E1123 06:58:56.731001 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="550d99c6-05a8-4019-b949-d8e57a7fefc5" containerName="glance-log" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.731008 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="550d99c6-05a8-4019-b949-d8e57a7fefc5" containerName="glance-log" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.731212 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="550d99c6-05a8-4019-b949-d8e57a7fefc5" containerName="glance-log" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.731238 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="550d99c6-05a8-4019-b949-d8e57a7fefc5" containerName="glance-httpd" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.733285 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.736208 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.736385 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.743810 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.860224 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1a778946-8c19-4d5e-9071-d754b449dccc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1a778946-8c19-4d5e-9071-d754b449dccc\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.860265 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a778946-8c19-4d5e-9071-d754b449dccc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1a778946-8c19-4d5e-9071-d754b449dccc\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.860306 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a778946-8c19-4d5e-9071-d754b449dccc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1a778946-8c19-4d5e-9071-d754b449dccc\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.860393 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a778946-8c19-4d5e-9071-d754b449dccc-config-data\") pod \"glance-default-external-api-0\" (UID: \"1a778946-8c19-4d5e-9071-d754b449dccc\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.860418 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a778946-8c19-4d5e-9071-d754b449dccc-scripts\") pod \"glance-default-external-api-0\" (UID: \"1a778946-8c19-4d5e-9071-d754b449dccc\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:56 
crc kubenswrapper[4681]: I1123 06:58:56.860490 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xq9g\" (UniqueName: \"kubernetes.io/projected/1a778946-8c19-4d5e-9071-d754b449dccc-kube-api-access-4xq9g\") pod \"glance-default-external-api-0\" (UID: \"1a778946-8c19-4d5e-9071-d754b449dccc\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.860526 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a778946-8c19-4d5e-9071-d754b449dccc-logs\") pod \"glance-default-external-api-0\" (UID: \"1a778946-8c19-4d5e-9071-d754b449dccc\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.860547 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"1a778946-8c19-4d5e-9071-d754b449dccc\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.906028 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.962369 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2f274b0-10d6-4bbb-bb77-882ad008b40e-combined-ca-bundle\") pod \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\" (UID: \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\") " Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.962721 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2f274b0-10d6-4bbb-bb77-882ad008b40e-internal-tls-certs\") pod \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\" (UID: \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\") " Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.962783 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2f274b0-10d6-4bbb-bb77-882ad008b40e-config-data\") pod \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\" (UID: \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\") " Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.962846 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a2f274b0-10d6-4bbb-bb77-882ad008b40e-httpd-run\") pod \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\" (UID: \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\") " Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.962948 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2f274b0-10d6-4bbb-bb77-882ad008b40e-scripts\") pod \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\" (UID: \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\") " Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.963013 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2f274b0-10d6-4bbb-bb77-882ad008b40e-logs\") pod \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\" (UID: \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\") " Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.963082 4681 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\" (UID: \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\") " Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.963169 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqtj9\" (UniqueName: \"kubernetes.io/projected/a2f274b0-10d6-4bbb-bb77-882ad008b40e-kube-api-access-zqtj9\") pod \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\" (UID: \"a2f274b0-10d6-4bbb-bb77-882ad008b40e\") " Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.963619 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1a778946-8c19-4d5e-9071-d754b449dccc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1a778946-8c19-4d5e-9071-d754b449dccc\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.963647 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a778946-8c19-4d5e-9071-d754b449dccc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1a778946-8c19-4d5e-9071-d754b449dccc\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.963695 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a778946-8c19-4d5e-9071-d754b449dccc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1a778946-8c19-4d5e-9071-d754b449dccc\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.963774 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a778946-8c19-4d5e-9071-d754b449dccc-config-data\") pod \"glance-default-external-api-0\" (UID: \"1a778946-8c19-4d5e-9071-d754b449dccc\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.963795 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a778946-8c19-4d5e-9071-d754b449dccc-scripts\") pod \"glance-default-external-api-0\" (UID: \"1a778946-8c19-4d5e-9071-d754b449dccc\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.964297 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2f274b0-10d6-4bbb-bb77-882ad008b40e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a2f274b0-10d6-4bbb-bb77-882ad008b40e" (UID: "a2f274b0-10d6-4bbb-bb77-882ad008b40e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.964556 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2f274b0-10d6-4bbb-bb77-882ad008b40e-logs" (OuterVolumeSpecName: "logs") pod "a2f274b0-10d6-4bbb-bb77-882ad008b40e" (UID: "a2f274b0-10d6-4bbb-bb77-882ad008b40e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.966442 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1a778946-8c19-4d5e-9071-d754b449dccc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1a778946-8c19-4d5e-9071-d754b449dccc\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.969995 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xq9g\" (UniqueName: \"kubernetes.io/projected/1a778946-8c19-4d5e-9071-d754b449dccc-kube-api-access-4xq9g\") pod \"glance-default-external-api-0\" (UID: \"1a778946-8c19-4d5e-9071-d754b449dccc\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.970036 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "a2f274b0-10d6-4bbb-bb77-882ad008b40e" (UID: "a2f274b0-10d6-4bbb-bb77-882ad008b40e"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.970067 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a778946-8c19-4d5e-9071-d754b449dccc-logs\") pod \"glance-default-external-api-0\" (UID: \"1a778946-8c19-4d5e-9071-d754b449dccc\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.970108 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"1a778946-8c19-4d5e-9071-d754b449dccc\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.970410 4681 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a2f274b0-10d6-4bbb-bb77-882ad008b40e-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.970423 4681 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2f274b0-10d6-4bbb-bb77-882ad008b40e-logs\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.970442 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a778946-8c19-4d5e-9071-d754b449dccc-logs\") pod \"glance-default-external-api-0\" (UID: \"1a778946-8c19-4d5e-9071-d754b449dccc\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.970607 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"1a778946-8c19-4d5e-9071-d754b449dccc\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.973301 4681 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 23 06:58:56 crc kubenswrapper[4681]: 
I1123 06:58:56.974037 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2f274b0-10d6-4bbb-bb77-882ad008b40e-kube-api-access-zqtj9" (OuterVolumeSpecName: "kube-api-access-zqtj9") pod "a2f274b0-10d6-4bbb-bb77-882ad008b40e" (UID: "a2f274b0-10d6-4bbb-bb77-882ad008b40e"). InnerVolumeSpecName "kube-api-access-zqtj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.977408 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a778946-8c19-4d5e-9071-d754b449dccc-config-data\") pod \"glance-default-external-api-0\" (UID: \"1a778946-8c19-4d5e-9071-d754b449dccc\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.982886 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2f274b0-10d6-4bbb-bb77-882ad008b40e-scripts" (OuterVolumeSpecName: "scripts") pod "a2f274b0-10d6-4bbb-bb77-882ad008b40e" (UID: "a2f274b0-10d6-4bbb-bb77-882ad008b40e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.983505 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a778946-8c19-4d5e-9071-d754b449dccc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1a778946-8c19-4d5e-9071-d754b449dccc\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.993846 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xq9g\" (UniqueName: \"kubernetes.io/projected/1a778946-8c19-4d5e-9071-d754b449dccc-kube-api-access-4xq9g\") pod \"glance-default-external-api-0\" (UID: \"1a778946-8c19-4d5e-9071-d754b449dccc\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:56 crc kubenswrapper[4681]: I1123 06:58:56.996915 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a778946-8c19-4d5e-9071-d754b449dccc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1a778946-8c19-4d5e-9071-d754b449dccc\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.000222 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a778946-8c19-4d5e-9071-d754b449dccc-scripts\") pod \"glance-default-external-api-0\" (UID: \"1a778946-8c19-4d5e-9071-d754b449dccc\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.001121 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-gq2qv" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.010088 4681 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.014906 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"1a778946-8c19-4d5e-9071-d754b449dccc\") " pod="openstack/glance-default-external-api-0" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.041852 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2f274b0-10d6-4bbb-bb77-882ad008b40e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a2f274b0-10d6-4bbb-bb77-882ad008b40e" (UID: "a2f274b0-10d6-4bbb-bb77-882ad008b40e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.058989 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2f274b0-10d6-4bbb-bb77-882ad008b40e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2f274b0-10d6-4bbb-bb77-882ad008b40e" (UID: "a2f274b0-10d6-4bbb-bb77-882ad008b40e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.071029 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.074003 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c-config-data\") pod \"8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c\" (UID: \"8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c\") " Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.074119 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c-combined-ca-bundle\") pod \"8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c\" (UID: \"8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c\") " Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.074160 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c-fernet-keys\") pod \"8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c\" (UID: \"8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c\") " Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.074223 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c-credential-keys\") pod \"8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c\" (UID: \"8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c\") " Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.074289 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr5h9\" (UniqueName: \"kubernetes.io/projected/8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c-kube-api-access-nr5h9\") pod \"8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c\" (UID: \"8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c\") " Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 
06:58:57.074340 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c-scripts\") pod \"8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c\" (UID: \"8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c\") " Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.074796 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2f274b0-10d6-4bbb-bb77-882ad008b40e-config-data" (OuterVolumeSpecName: "config-data") pod "a2f274b0-10d6-4bbb-bb77-882ad008b40e" (UID: "a2f274b0-10d6-4bbb-bb77-882ad008b40e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.077917 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2f274b0-10d6-4bbb-bb77-882ad008b40e-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.077941 4681 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.077953 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqtj9\" (UniqueName: \"kubernetes.io/projected/a2f274b0-10d6-4bbb-bb77-882ad008b40e-kube-api-access-zqtj9\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.077964 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2f274b0-10d6-4bbb-bb77-882ad008b40e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.077975 4681 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2f274b0-10d6-4bbb-bb77-882ad008b40e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.077983 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2f274b0-10d6-4bbb-bb77-882ad008b40e-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.080851 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c" (UID: "8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.080874 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c" (UID: "8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.084285 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c-kube-api-access-nr5h9" (OuterVolumeSpecName: "kube-api-access-nr5h9") pod "8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c" (UID: "8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c"). 
InnerVolumeSpecName "kube-api-access-nr5h9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.084521 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c-scripts" (OuterVolumeSpecName: "scripts") pod "8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c" (UID: "8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.096586 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c" (UID: "8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.143893 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c-config-data" (OuterVolumeSpecName: "config-data") pod "8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c" (UID: "8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.181960 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.182085 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.182115 4681 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.182125 4681 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.182135 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nr5h9\" (UniqueName: \"kubernetes.io/projected/8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c-kube-api-access-nr5h9\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.182143 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.266867 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="550d99c6-05a8-4019-b949-d8e57a7fefc5" path="/var/lib/kubelet/pods/550d99c6-05a8-4019-b949-d8e57a7fefc5/volumes" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.544071 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-gq2qv"] Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.549028 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-gq2qv" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.549050 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gq2qv" event={"ID":"8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c","Type":"ContainerDied","Data":"0bd0bfdf7aaab7bbac5a913dead68629324a60bd707fad384f9f55229e97ac35"} Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.549116 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bd0bfdf7aaab7bbac5a913dead68629324a60bd707fad384f9f55229e97ac35" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.551110 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-gq2qv"] Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.554327 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a2f274b0-10d6-4bbb-bb77-882ad008b40e","Type":"ContainerDied","Data":"93743c76a626d0a7e31e061808341deda1ee248af6e6b08e0b748a0aad075d29"} Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.554360 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.554370 4681 scope.go:117] "RemoveContainer" containerID="da00007d6618b2272ce96b3e82bd0a97423326c7b1890c79a7e489e978a75f0f" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.648484 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-b8mwm"] Nov 23 06:58:57 crc kubenswrapper[4681]: E1123 06:58:57.648896 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2f274b0-10d6-4bbb-bb77-882ad008b40e" containerName="glance-httpd" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.648916 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2f274b0-10d6-4bbb-bb77-882ad008b40e" containerName="glance-httpd" Nov 23 06:58:57 crc kubenswrapper[4681]: E1123 06:58:57.648930 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c" containerName="keystone-bootstrap" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.648939 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c" containerName="keystone-bootstrap" Nov 23 06:58:57 crc kubenswrapper[4681]: E1123 06:58:57.648959 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2f274b0-10d6-4bbb-bb77-882ad008b40e" containerName="glance-log" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.648965 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2f274b0-10d6-4bbb-bb77-882ad008b40e" containerName="glance-log" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.649140 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c" containerName="keystone-bootstrap" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.649161 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2f274b0-10d6-4bbb-bb77-882ad008b40e" containerName="glance-log" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.649175 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2f274b0-10d6-4bbb-bb77-882ad008b40e" containerName="glance-httpd" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.652416 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-b8mwm" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.655976 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-k72qg" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.656126 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.656258 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.656379 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.656735 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.672818 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-b8mwm"] Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.700444 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.725928 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.749419 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.754052 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.780638 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.785801 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.797854 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5dd5ce32-831b-448a-943f-7e3250ca172b-scripts\") pod \"keystone-bootstrap-b8mwm\" (UID: \"5dd5ce32-831b-448a-943f-7e3250ca172b\") " pod="openstack/keystone-bootstrap-b8mwm" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.802265 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5dd5ce32-831b-448a-943f-7e3250ca172b-fernet-keys\") pod \"keystone-bootstrap-b8mwm\" (UID: \"5dd5ce32-831b-448a-943f-7e3250ca172b\") " pod="openstack/keystone-bootstrap-b8mwm" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.802731 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dd5ce32-831b-448a-943f-7e3250ca172b-combined-ca-bundle\") pod \"keystone-bootstrap-b8mwm\" (UID: \"5dd5ce32-831b-448a-943f-7e3250ca172b\") " pod="openstack/keystone-bootstrap-b8mwm" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.824284 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2dqc\" (UniqueName: \"kubernetes.io/projected/5dd5ce32-831b-448a-943f-7e3250ca172b-kube-api-access-g2dqc\") pod 
\"keystone-bootstrap-b8mwm\" (UID: \"5dd5ce32-831b-448a-943f-7e3250ca172b\") " pod="openstack/keystone-bootstrap-b8mwm" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.824369 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dd5ce32-831b-448a-943f-7e3250ca172b-config-data\") pod \"keystone-bootstrap-b8mwm\" (UID: \"5dd5ce32-831b-448a-943f-7e3250ca172b\") " pod="openstack/keystone-bootstrap-b8mwm" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.824440 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5dd5ce32-831b-448a-943f-7e3250ca172b-credential-keys\") pod \"keystone-bootstrap-b8mwm\" (UID: \"5dd5ce32-831b-448a-943f-7e3250ca172b\") " pod="openstack/keystone-bootstrap-b8mwm" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.806467 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.824855 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.927244 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5dd5ce32-831b-448a-943f-7e3250ca172b-scripts\") pod \"keystone-bootstrap-b8mwm\" (UID: \"5dd5ce32-831b-448a-943f-7e3250ca172b\") " pod="openstack/keystone-bootstrap-b8mwm" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.927289 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.927315 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.927352 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5dd5ce32-831b-448a-943f-7e3250ca172b-fernet-keys\") pod \"keystone-bootstrap-b8mwm\" (UID: \"5dd5ce32-831b-448a-943f-7e3250ca172b\") " pod="openstack/keystone-bootstrap-b8mwm" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.927433 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.927536 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\") " 
pod="openstack/glance-default-internal-api-0" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.927568 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.927586 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dd5ce32-831b-448a-943f-7e3250ca172b-combined-ca-bundle\") pod \"keystone-bootstrap-b8mwm\" (UID: \"5dd5ce32-831b-448a-943f-7e3250ca172b\") " pod="openstack/keystone-bootstrap-b8mwm" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.927605 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-logs\") pod \"glance-default-internal-api-0\" (UID: \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.927639 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2dqc\" (UniqueName: \"kubernetes.io/projected/5dd5ce32-831b-448a-943f-7e3250ca172b-kube-api-access-g2dqc\") pod \"keystone-bootstrap-b8mwm\" (UID: \"5dd5ce32-831b-448a-943f-7e3250ca172b\") " pod="openstack/keystone-bootstrap-b8mwm" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.927669 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42bvd\" (UniqueName: \"kubernetes.io/projected/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-kube-api-access-42bvd\") pod \"glance-default-internal-api-0\" (UID: \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.927689 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dd5ce32-831b-448a-943f-7e3250ca172b-config-data\") pod \"keystone-bootstrap-b8mwm\" (UID: \"5dd5ce32-831b-448a-943f-7e3250ca172b\") " pod="openstack/keystone-bootstrap-b8mwm" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.927727 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5dd5ce32-831b-448a-943f-7e3250ca172b-credential-keys\") pod \"keystone-bootstrap-b8mwm\" (UID: \"5dd5ce32-831b-448a-943f-7e3250ca172b\") " pod="openstack/keystone-bootstrap-b8mwm" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.927785 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.934990 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dd5ce32-831b-448a-943f-7e3250ca172b-config-data\") pod \"keystone-bootstrap-b8mwm\" (UID: \"5dd5ce32-831b-448a-943f-7e3250ca172b\") " pod="openstack/keystone-bootstrap-b8mwm" Nov 
23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.939490 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dd5ce32-831b-448a-943f-7e3250ca172b-combined-ca-bundle\") pod \"keystone-bootstrap-b8mwm\" (UID: \"5dd5ce32-831b-448a-943f-7e3250ca172b\") " pod="openstack/keystone-bootstrap-b8mwm" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.942158 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5dd5ce32-831b-448a-943f-7e3250ca172b-credential-keys\") pod \"keystone-bootstrap-b8mwm\" (UID: \"5dd5ce32-831b-448a-943f-7e3250ca172b\") " pod="openstack/keystone-bootstrap-b8mwm" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.949941 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5dd5ce32-831b-448a-943f-7e3250ca172b-fernet-keys\") pod \"keystone-bootstrap-b8mwm\" (UID: \"5dd5ce32-831b-448a-943f-7e3250ca172b\") " pod="openstack/keystone-bootstrap-b8mwm" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.955073 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2dqc\" (UniqueName: \"kubernetes.io/projected/5dd5ce32-831b-448a-943f-7e3250ca172b-kube-api-access-g2dqc\") pod \"keystone-bootstrap-b8mwm\" (UID: \"5dd5ce32-831b-448a-943f-7e3250ca172b\") " pod="openstack/keystone-bootstrap-b8mwm" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.967749 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5dd5ce32-831b-448a-943f-7e3250ca172b-scripts\") pod \"keystone-bootstrap-b8mwm\" (UID: \"5dd5ce32-831b-448a-943f-7e3250ca172b\") " pod="openstack/keystone-bootstrap-b8mwm" Nov 23 06:58:57 crc kubenswrapper[4681]: I1123 06:58:57.982218 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-b8mwm" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.030418 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.030497 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.030632 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.030734 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.030793 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.030815 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-logs\") pod \"glance-default-internal-api-0\" (UID: \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.030882 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42bvd\" (UniqueName: \"kubernetes.io/projected/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-kube-api-access-42bvd\") pod \"glance-default-internal-api-0\" (UID: \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.031014 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.032359 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Nov 23 
06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.035915 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-logs\") pod \"glance-default-internal-api-0\" (UID: \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.036067 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.047425 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.049595 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.049625 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.053403 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.057418 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42bvd\" (UniqueName: \"kubernetes.io/projected/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-kube-api-access-42bvd\") pod \"glance-default-internal-api-0\" (UID: \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.077863 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.089954 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.630213 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6c5444c6b5-7cd6d"] Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.687286 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.703616 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7c48d564b8-5tf9h"] Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.705213 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c48d564b8-5tf9h" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.707189 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.721086 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7c48d564b8-5tf9h"] Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.755512 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-845ccd5479-79qz5"] Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.783064 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-fcdb4576d-g8stp"] Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.784703 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-fcdb4576d-g8stp" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.801135 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-fcdb4576d-g8stp"] Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.818764 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.850033 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/21819725-3a3a-448c-8bda-e78701b78360-horizon-tls-certs\") pod \"horizon-7c48d564b8-5tf9h\" (UID: \"21819725-3a3a-448c-8bda-e78701b78360\") " pod="openstack/horizon-7c48d564b8-5tf9h" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.850092 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swcf8\" (UniqueName: \"kubernetes.io/projected/21819725-3a3a-448c-8bda-e78701b78360-kube-api-access-swcf8\") pod \"horizon-7c48d564b8-5tf9h\" (UID: \"21819725-3a3a-448c-8bda-e78701b78360\") " pod="openstack/horizon-7c48d564b8-5tf9h" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.850220 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/21819725-3a3a-448c-8bda-e78701b78360-config-data\") pod \"horizon-7c48d564b8-5tf9h\" (UID: \"21819725-3a3a-448c-8bda-e78701b78360\") " pod="openstack/horizon-7c48d564b8-5tf9h" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.850252 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/21819725-3a3a-448c-8bda-e78701b78360-scripts\") pod \"horizon-7c48d564b8-5tf9h\" (UID: \"21819725-3a3a-448c-8bda-e78701b78360\") " pod="openstack/horizon-7c48d564b8-5tf9h" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.850288 4681 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21819725-3a3a-448c-8bda-e78701b78360-logs\") pod \"horizon-7c48d564b8-5tf9h\" (UID: \"21819725-3a3a-448c-8bda-e78701b78360\") " pod="openstack/horizon-7c48d564b8-5tf9h" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.850307 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/21819725-3a3a-448c-8bda-e78701b78360-horizon-secret-key\") pod \"horizon-7c48d564b8-5tf9h\" (UID: \"21819725-3a3a-448c-8bda-e78701b78360\") " pod="openstack/horizon-7c48d564b8-5tf9h" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.850340 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21819725-3a3a-448c-8bda-e78701b78360-combined-ca-bundle\") pod \"horizon-7c48d564b8-5tf9h\" (UID: \"21819725-3a3a-448c-8bda-e78701b78360\") " pod="openstack/horizon-7c48d564b8-5tf9h" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.952516 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swcf8\" (UniqueName: \"kubernetes.io/projected/21819725-3a3a-448c-8bda-e78701b78360-kube-api-access-swcf8\") pod \"horizon-7c48d564b8-5tf9h\" (UID: \"21819725-3a3a-448c-8bda-e78701b78360\") " pod="openstack/horizon-7c48d564b8-5tf9h" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.953500 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdfa433c-2b77-4373-877f-5c92a2b39fb8-logs\") pod \"horizon-fcdb4576d-g8stp\" (UID: \"bdfa433c-2b77-4373-877f-5c92a2b39fb8\") " pod="openstack/horizon-fcdb4576d-g8stp" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.953564 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9rfq\" (UniqueName: \"kubernetes.io/projected/bdfa433c-2b77-4373-877f-5c92a2b39fb8-kube-api-access-k9rfq\") pod \"horizon-fcdb4576d-g8stp\" (UID: \"bdfa433c-2b77-4373-877f-5c92a2b39fb8\") " pod="openstack/horizon-fcdb4576d-g8stp" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.953617 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bdfa433c-2b77-4373-877f-5c92a2b39fb8-horizon-secret-key\") pod \"horizon-fcdb4576d-g8stp\" (UID: \"bdfa433c-2b77-4373-877f-5c92a2b39fb8\") " pod="openstack/horizon-fcdb4576d-g8stp" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.953674 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/21819725-3a3a-448c-8bda-e78701b78360-config-data\") pod \"horizon-7c48d564b8-5tf9h\" (UID: \"21819725-3a3a-448c-8bda-e78701b78360\") " pod="openstack/horizon-7c48d564b8-5tf9h" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.953703 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/21819725-3a3a-448c-8bda-e78701b78360-scripts\") pod \"horizon-7c48d564b8-5tf9h\" (UID: \"21819725-3a3a-448c-8bda-e78701b78360\") " pod="openstack/horizon-7c48d564b8-5tf9h" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.953732 4681 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21819725-3a3a-448c-8bda-e78701b78360-logs\") pod \"horizon-7c48d564b8-5tf9h\" (UID: \"21819725-3a3a-448c-8bda-e78701b78360\") " pod="openstack/horizon-7c48d564b8-5tf9h" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.953755 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/21819725-3a3a-448c-8bda-e78701b78360-horizon-secret-key\") pod \"horizon-7c48d564b8-5tf9h\" (UID: \"21819725-3a3a-448c-8bda-e78701b78360\") " pod="openstack/horizon-7c48d564b8-5tf9h" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.954162 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdfa433c-2b77-4373-877f-5c92a2b39fb8-horizon-tls-certs\") pod \"horizon-fcdb4576d-g8stp\" (UID: \"bdfa433c-2b77-4373-877f-5c92a2b39fb8\") " pod="openstack/horizon-fcdb4576d-g8stp" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.954202 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21819725-3a3a-448c-8bda-e78701b78360-combined-ca-bundle\") pod \"horizon-7c48d564b8-5tf9h\" (UID: \"21819725-3a3a-448c-8bda-e78701b78360\") " pod="openstack/horizon-7c48d564b8-5tf9h" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.954240 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdfa433c-2b77-4373-877f-5c92a2b39fb8-combined-ca-bundle\") pod \"horizon-fcdb4576d-g8stp\" (UID: \"bdfa433c-2b77-4373-877f-5c92a2b39fb8\") " pod="openstack/horizon-fcdb4576d-g8stp" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.954259 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bdfa433c-2b77-4373-877f-5c92a2b39fb8-config-data\") pod \"horizon-fcdb4576d-g8stp\" (UID: \"bdfa433c-2b77-4373-877f-5c92a2b39fb8\") " pod="openstack/horizon-fcdb4576d-g8stp" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.954297 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bdfa433c-2b77-4373-877f-5c92a2b39fb8-scripts\") pod \"horizon-fcdb4576d-g8stp\" (UID: \"bdfa433c-2b77-4373-877f-5c92a2b39fb8\") " pod="openstack/horizon-fcdb4576d-g8stp" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.954368 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/21819725-3a3a-448c-8bda-e78701b78360-horizon-tls-certs\") pod \"horizon-7c48d564b8-5tf9h\" (UID: \"21819725-3a3a-448c-8bda-e78701b78360\") " pod="openstack/horizon-7c48d564b8-5tf9h" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.955654 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21819725-3a3a-448c-8bda-e78701b78360-logs\") pod \"horizon-7c48d564b8-5tf9h\" (UID: \"21819725-3a3a-448c-8bda-e78701b78360\") " pod="openstack/horizon-7c48d564b8-5tf9h" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.956777 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/21819725-3a3a-448c-8bda-e78701b78360-config-data\") pod \"horizon-7c48d564b8-5tf9h\" (UID: \"21819725-3a3a-448c-8bda-e78701b78360\") " pod="openstack/horizon-7c48d564b8-5tf9h" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.957480 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/21819725-3a3a-448c-8bda-e78701b78360-scripts\") pod \"horizon-7c48d564b8-5tf9h\" (UID: \"21819725-3a3a-448c-8bda-e78701b78360\") " pod="openstack/horizon-7c48d564b8-5tf9h" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.958301 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/21819725-3a3a-448c-8bda-e78701b78360-horizon-tls-certs\") pod \"horizon-7c48d564b8-5tf9h\" (UID: \"21819725-3a3a-448c-8bda-e78701b78360\") " pod="openstack/horizon-7c48d564b8-5tf9h" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.966659 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21819725-3a3a-448c-8bda-e78701b78360-combined-ca-bundle\") pod \"horizon-7c48d564b8-5tf9h\" (UID: \"21819725-3a3a-448c-8bda-e78701b78360\") " pod="openstack/horizon-7c48d564b8-5tf9h" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.968138 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swcf8\" (UniqueName: \"kubernetes.io/projected/21819725-3a3a-448c-8bda-e78701b78360-kube-api-access-swcf8\") pod \"horizon-7c48d564b8-5tf9h\" (UID: \"21819725-3a3a-448c-8bda-e78701b78360\") " pod="openstack/horizon-7c48d564b8-5tf9h" Nov 23 06:58:58 crc kubenswrapper[4681]: I1123 06:58:58.968444 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/21819725-3a3a-448c-8bda-e78701b78360-horizon-secret-key\") pod \"horizon-7c48d564b8-5tf9h\" (UID: \"21819725-3a3a-448c-8bda-e78701b78360\") " pod="openstack/horizon-7c48d564b8-5tf9h" Nov 23 06:58:59 crc kubenswrapper[4681]: I1123 06:58:59.040802 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7c48d564b8-5tf9h" Nov 23 06:58:59 crc kubenswrapper[4681]: I1123 06:58:59.060768 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bdfa433c-2b77-4373-877f-5c92a2b39fb8-scripts\") pod \"horizon-fcdb4576d-g8stp\" (UID: \"bdfa433c-2b77-4373-877f-5c92a2b39fb8\") " pod="openstack/horizon-fcdb4576d-g8stp" Nov 23 06:58:59 crc kubenswrapper[4681]: I1123 06:58:59.060878 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdfa433c-2b77-4373-877f-5c92a2b39fb8-logs\") pod \"horizon-fcdb4576d-g8stp\" (UID: \"bdfa433c-2b77-4373-877f-5c92a2b39fb8\") " pod="openstack/horizon-fcdb4576d-g8stp" Nov 23 06:58:59 crc kubenswrapper[4681]: I1123 06:58:59.060929 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9rfq\" (UniqueName: \"kubernetes.io/projected/bdfa433c-2b77-4373-877f-5c92a2b39fb8-kube-api-access-k9rfq\") pod \"horizon-fcdb4576d-g8stp\" (UID: \"bdfa433c-2b77-4373-877f-5c92a2b39fb8\") " pod="openstack/horizon-fcdb4576d-g8stp" Nov 23 06:58:59 crc kubenswrapper[4681]: I1123 06:58:59.060976 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bdfa433c-2b77-4373-877f-5c92a2b39fb8-horizon-secret-key\") pod \"horizon-fcdb4576d-g8stp\" (UID: \"bdfa433c-2b77-4373-877f-5c92a2b39fb8\") " pod="openstack/horizon-fcdb4576d-g8stp" Nov 23 06:58:59 crc kubenswrapper[4681]: I1123 06:58:59.061052 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdfa433c-2b77-4373-877f-5c92a2b39fb8-horizon-tls-certs\") pod \"horizon-fcdb4576d-g8stp\" (UID: \"bdfa433c-2b77-4373-877f-5c92a2b39fb8\") " pod="openstack/horizon-fcdb4576d-g8stp" Nov 23 06:58:59 crc kubenswrapper[4681]: I1123 06:58:59.061093 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdfa433c-2b77-4373-877f-5c92a2b39fb8-combined-ca-bundle\") pod \"horizon-fcdb4576d-g8stp\" (UID: \"bdfa433c-2b77-4373-877f-5c92a2b39fb8\") " pod="openstack/horizon-fcdb4576d-g8stp" Nov 23 06:58:59 crc kubenswrapper[4681]: I1123 06:58:59.061114 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bdfa433c-2b77-4373-877f-5c92a2b39fb8-config-data\") pod \"horizon-fcdb4576d-g8stp\" (UID: \"bdfa433c-2b77-4373-877f-5c92a2b39fb8\") " pod="openstack/horizon-fcdb4576d-g8stp" Nov 23 06:58:59 crc kubenswrapper[4681]: I1123 06:58:59.061998 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bdfa433c-2b77-4373-877f-5c92a2b39fb8-scripts\") pod \"horizon-fcdb4576d-g8stp\" (UID: \"bdfa433c-2b77-4373-877f-5c92a2b39fb8\") " pod="openstack/horizon-fcdb4576d-g8stp" Nov 23 06:58:59 crc kubenswrapper[4681]: I1123 06:58:59.062529 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdfa433c-2b77-4373-877f-5c92a2b39fb8-logs\") pod \"horizon-fcdb4576d-g8stp\" (UID: \"bdfa433c-2b77-4373-877f-5c92a2b39fb8\") " pod="openstack/horizon-fcdb4576d-g8stp" Nov 23 06:58:59 crc kubenswrapper[4681]: I1123 06:58:59.067719 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdfa433c-2b77-4373-877f-5c92a2b39fb8-horizon-tls-certs\") pod \"horizon-fcdb4576d-g8stp\" (UID: \"bdfa433c-2b77-4373-877f-5c92a2b39fb8\") " pod="openstack/horizon-fcdb4576d-g8stp" Nov 23 06:58:59 crc kubenswrapper[4681]: I1123 06:58:59.069146 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdfa433c-2b77-4373-877f-5c92a2b39fb8-combined-ca-bundle\") pod \"horizon-fcdb4576d-g8stp\" (UID: \"bdfa433c-2b77-4373-877f-5c92a2b39fb8\") " pod="openstack/horizon-fcdb4576d-g8stp" Nov 23 06:58:59 crc kubenswrapper[4681]: I1123 06:58:59.074112 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bdfa433c-2b77-4373-877f-5c92a2b39fb8-horizon-secret-key\") pod \"horizon-fcdb4576d-g8stp\" (UID: \"bdfa433c-2b77-4373-877f-5c92a2b39fb8\") " pod="openstack/horizon-fcdb4576d-g8stp" Nov 23 06:58:59 crc kubenswrapper[4681]: I1123 06:58:59.074268 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bdfa433c-2b77-4373-877f-5c92a2b39fb8-config-data\") pod \"horizon-fcdb4576d-g8stp\" (UID: \"bdfa433c-2b77-4373-877f-5c92a2b39fb8\") " pod="openstack/horizon-fcdb4576d-g8stp" Nov 23 06:58:59 crc kubenswrapper[4681]: I1123 06:58:59.080557 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9rfq\" (UniqueName: \"kubernetes.io/projected/bdfa433c-2b77-4373-877f-5c92a2b39fb8-kube-api-access-k9rfq\") pod \"horizon-fcdb4576d-g8stp\" (UID: \"bdfa433c-2b77-4373-877f-5c92a2b39fb8\") " pod="openstack/horizon-fcdb4576d-g8stp" Nov 23 06:58:59 crc kubenswrapper[4681]: I1123 06:58:59.101830 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-fcdb4576d-g8stp" Nov 23 06:58:59 crc kubenswrapper[4681]: I1123 06:58:59.265193 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c" path="/var/lib/kubelet/pods/8ffd46f6-55fc-41e0-964b-60d0d1d7cb8c/volumes" Nov 23 06:58:59 crc kubenswrapper[4681]: I1123 06:58:59.266133 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2f274b0-10d6-4bbb-bb77-882ad008b40e" path="/var/lib/kubelet/pods/a2f274b0-10d6-4bbb-bb77-882ad008b40e/volumes" Nov 23 06:59:00 crc kubenswrapper[4681]: I1123 06:59:00.406217 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d78ff46f5-xfmdq" Nov 23 06:59:00 crc kubenswrapper[4681]: I1123 06:59:00.454032 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7dfd8c6765-5kmzt"] Nov 23 06:59:00 crc kubenswrapper[4681]: I1123 06:59:00.454285 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7dfd8c6765-5kmzt" podUID="71e0935b-e717-4e96-ae02-fb6bcb85bae5" containerName="dnsmasq-dns" containerID="cri-o://54b063d5687b2b79c2d3555edf1df8d56054061b0344cad12fc8e7d08350a575" gracePeriod=10 Nov 23 06:59:00 crc kubenswrapper[4681]: I1123 06:59:00.603396 4681 generic.go:334] "Generic (PLEG): container finished" podID="71e0935b-e717-4e96-ae02-fb6bcb85bae5" containerID="54b063d5687b2b79c2d3555edf1df8d56054061b0344cad12fc8e7d08350a575" exitCode=0 Nov 23 06:59:00 crc kubenswrapper[4681]: I1123 06:59:00.603443 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dfd8c6765-5kmzt" event={"ID":"71e0935b-e717-4e96-ae02-fb6bcb85bae5","Type":"ContainerDied","Data":"54b063d5687b2b79c2d3555edf1df8d56054061b0344cad12fc8e7d08350a575"} Nov 23 06:59:02 crc kubenswrapper[4681]: I1123 06:59:02.627582 4681 generic.go:334] "Generic (PLEG): container finished" podID="4cc57e44-7957-4d3a-b9c9-2da622ea38a0" containerID="3295b4bd261ee97327198f543bfa8e15d7d22cf8363d391dcfb4f63e8553275a" exitCode=0 Nov 23 06:59:02 crc kubenswrapper[4681]: I1123 06:59:02.627652 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-xbhpv" event={"ID":"4cc57e44-7957-4d3a-b9c9-2da622ea38a0","Type":"ContainerDied","Data":"3295b4bd261ee97327198f543bfa8e15d7d22cf8363d391dcfb4f63e8553275a"} Nov 23 06:59:04 crc kubenswrapper[4681]: I1123 06:59:04.135842 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7dfd8c6765-5kmzt" podUID="71e0935b-e717-4e96-ae02-fb6bcb85bae5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: connect: connection refused" Nov 23 06:59:10 crc kubenswrapper[4681]: E1123 06:59:10.129575 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-placement-api:8e43c662a6abf8c9a07ada252f8dc6af" Nov 23 06:59:10 crc kubenswrapper[4681]: E1123 06:59:10.130031 4681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-placement-api:8e43c662a6abf8c9a07ada252f8dc6af" Nov 23 06:59:10 crc kubenswrapper[4681]: E1123 06:59:10.130148 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:placement-db-sync,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-placement-api:8e43c662a6abf8c9a07ada252f8dc6af,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2d7fj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-qn8qf_openstack(31fd09f2-734b-4427-8b5b-65711b24bbb5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 06:59:10 crc kubenswrapper[4681]: E1123 06:59:10.132210 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-qn8qf" podUID="31fd09f2-734b-4427-8b5b-65711b24bbb5" Nov 23 06:59:10 crc kubenswrapper[4681]: W1123 06:59:10.132305 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a778946_8c19_4d5e_9071_d754b449dccc.slice/crio-fc6cea7d545b008a6d3886ab582641a1503094f7907d461c33f67ce7b028817a WatchSource:0}: Error finding container fc6cea7d545b008a6d3886ab582641a1503094f7907d461c33f67ce7b028817a: Status 404 returned error can't find the container with id fc6cea7d545b008a6d3886ab582641a1503094f7907d461c33f67ce7b028817a Nov 23 06:59:10 crc kubenswrapper[4681]: E1123 06:59:10.145256 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:8e43c662a6abf8c9a07ada252f8dc6af" Nov 23 06:59:10 crc kubenswrapper[4681]: E1123 
06:59:10.145330 4681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:8e43c662a6abf8c9a07ada252f8dc6af" Nov 23 06:59:10 crc kubenswrapper[4681]: E1123 06:59:10.145527 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:8e43c662a6abf8c9a07ada252f8dc6af,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n654h597h5bbh64bh67fh5c7h79h584h686hd7h584h55h9fhb8h8chbdh56dh7fh656h544h698h5d4h5h5f7h567h5f9h5b5hb8h57h678h5d4h59dq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29h29,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6994f59557-zb5qf_openstack(45b8faad-ff9c-4acb-bcf3-9b1efccbce7d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 06:59:10 crc kubenswrapper[4681]: E1123 06:59:10.147838 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:8e43c662a6abf8c9a07ada252f8dc6af\\\"\"]" pod="openstack/horizon-6994f59557-zb5qf" podUID="45b8faad-ff9c-4acb-bcf3-9b1efccbce7d" Nov 23 06:59:10 crc kubenswrapper[4681]: I1123 06:59:10.729876 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1a778946-8c19-4d5e-9071-d754b449dccc","Type":"ContainerStarted","Data":"fc6cea7d545b008a6d3886ab582641a1503094f7907d461c33f67ce7b028817a"} Nov 23 06:59:10 crc kubenswrapper[4681]: E1123 06:59:10.732011 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-placement-api:8e43c662a6abf8c9a07ada252f8dc6af\\\"\"" pod="openstack/placement-db-sync-qn8qf" podUID="31fd09f2-734b-4427-8b5b-65711b24bbb5" Nov 23 06:59:14 crc kubenswrapper[4681]: E1123 06:59:14.112635 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-heat-engine:8e43c662a6abf8c9a07ada252f8dc6af" Nov 23 06:59:14 crc kubenswrapper[4681]: E1123 06:59:14.113364 4681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-heat-engine:8e43c662a6abf8c9a07ada252f8dc6af" Nov 23 06:59:14 crc kubenswrapper[4681]: E1123 06:59:14.114007 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-heat-engine:8e43c662a6abf8c9a07ada252f8dc6af,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-426hm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-fbbdq_openstack(00916d9f-8ce3-47d9-a32f-e2deb3514ede): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 06:59:14 crc kubenswrapper[4681]: E1123 06:59:14.115146 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-fbbdq" podUID="00916d9f-8ce3-47d9-a32f-e2deb3514ede" Nov 23 06:59:14 crc 
kubenswrapper[4681]: I1123 06:59:14.137100 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7dfd8c6765-5kmzt" podUID="71e0935b-e717-4e96-ae02-fb6bcb85bae5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: i/o timeout" Nov 23 06:59:14 crc kubenswrapper[4681]: E1123 06:59:14.788966 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-heat-engine:8e43c662a6abf8c9a07ada252f8dc6af\\\"\"" pod="openstack/heat-db-sync-fbbdq" podUID="00916d9f-8ce3-47d9-a32f-e2deb3514ede" Nov 23 06:59:19 crc kubenswrapper[4681]: I1123 06:59:19.137555 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7dfd8c6765-5kmzt" podUID="71e0935b-e717-4e96-ae02-fb6bcb85bae5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: i/o timeout" Nov 23 06:59:19 crc kubenswrapper[4681]: I1123 06:59:19.139223 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7dfd8c6765-5kmzt" Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.455201 4681 scope.go:117] "RemoveContainer" containerID="c88980b76a1eef7772698b29fb054eea468341260222451c6805e1d0f4a9313d" Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.518825 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6994f59557-zb5qf" Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.522573 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dfd8c6765-5kmzt" Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.526496 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-xbhpv" Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.720457 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71e0935b-e717-4e96-ae02-fb6bcb85bae5-ovsdbserver-nb\") pod \"71e0935b-e717-4e96-ae02-fb6bcb85bae5\" (UID: \"71e0935b-e717-4e96-ae02-fb6bcb85bae5\") " Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.720641 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4cc57e44-7957-4d3a-b9c9-2da622ea38a0-config\") pod \"4cc57e44-7957-4d3a-b9c9-2da622ea38a0\" (UID: \"4cc57e44-7957-4d3a-b9c9-2da622ea38a0\") " Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.720671 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71e0935b-e717-4e96-ae02-fb6bcb85bae5-dns-svc\") pod \"71e0935b-e717-4e96-ae02-fb6bcb85bae5\" (UID: \"71e0935b-e717-4e96-ae02-fb6bcb85bae5\") " Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.720694 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71e0935b-e717-4e96-ae02-fb6bcb85bae5-config\") pod \"71e0935b-e717-4e96-ae02-fb6bcb85bae5\" (UID: \"71e0935b-e717-4e96-ae02-fb6bcb85bae5\") " Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.720716 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71e0935b-e717-4e96-ae02-fb6bcb85bae5-ovsdbserver-sb\") pod \"71e0935b-e717-4e96-ae02-fb6bcb85bae5\" (UID: \"71e0935b-e717-4e96-ae02-fb6bcb85bae5\") " Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.720790 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/45b8faad-ff9c-4acb-bcf3-9b1efccbce7d-horizon-secret-key\") pod \"45b8faad-ff9c-4acb-bcf3-9b1efccbce7d\" (UID: \"45b8faad-ff9c-4acb-bcf3-9b1efccbce7d\") " Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.720829 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cc57e44-7957-4d3a-b9c9-2da622ea38a0-combined-ca-bundle\") pod \"4cc57e44-7957-4d3a-b9c9-2da622ea38a0\" (UID: \"4cc57e44-7957-4d3a-b9c9-2da622ea38a0\") " Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.720860 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlm4f\" (UniqueName: \"kubernetes.io/projected/4cc57e44-7957-4d3a-b9c9-2da622ea38a0-kube-api-access-qlm4f\") pod \"4cc57e44-7957-4d3a-b9c9-2da622ea38a0\" (UID: \"4cc57e44-7957-4d3a-b9c9-2da622ea38a0\") " Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.720915 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45b8faad-ff9c-4acb-bcf3-9b1efccbce7d-config-data\") pod \"45b8faad-ff9c-4acb-bcf3-9b1efccbce7d\" (UID: \"45b8faad-ff9c-4acb-bcf3-9b1efccbce7d\") " Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.720989 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/45b8faad-ff9c-4acb-bcf3-9b1efccbce7d-scripts\") pod \"45b8faad-ff9c-4acb-bcf3-9b1efccbce7d\" (UID: 
\"45b8faad-ff9c-4acb-bcf3-9b1efccbce7d\") " Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.721024 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29h29\" (UniqueName: \"kubernetes.io/projected/45b8faad-ff9c-4acb-bcf3-9b1efccbce7d-kube-api-access-29h29\") pod \"45b8faad-ff9c-4acb-bcf3-9b1efccbce7d\" (UID: \"45b8faad-ff9c-4acb-bcf3-9b1efccbce7d\") " Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.721073 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45b8faad-ff9c-4acb-bcf3-9b1efccbce7d-logs\") pod \"45b8faad-ff9c-4acb-bcf3-9b1efccbce7d\" (UID: \"45b8faad-ff9c-4acb-bcf3-9b1efccbce7d\") " Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.721101 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzttm\" (UniqueName: \"kubernetes.io/projected/71e0935b-e717-4e96-ae02-fb6bcb85bae5-kube-api-access-dzttm\") pod \"71e0935b-e717-4e96-ae02-fb6bcb85bae5\" (UID: \"71e0935b-e717-4e96-ae02-fb6bcb85bae5\") " Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.723292 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45b8faad-ff9c-4acb-bcf3-9b1efccbce7d-scripts" (OuterVolumeSpecName: "scripts") pod "45b8faad-ff9c-4acb-bcf3-9b1efccbce7d" (UID: "45b8faad-ff9c-4acb-bcf3-9b1efccbce7d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.724000 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45b8faad-ff9c-4acb-bcf3-9b1efccbce7d-config-data" (OuterVolumeSpecName: "config-data") pod "45b8faad-ff9c-4acb-bcf3-9b1efccbce7d" (UID: "45b8faad-ff9c-4acb-bcf3-9b1efccbce7d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.728452 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45b8faad-ff9c-4acb-bcf3-9b1efccbce7d-kube-api-access-29h29" (OuterVolumeSpecName: "kube-api-access-29h29") pod "45b8faad-ff9c-4acb-bcf3-9b1efccbce7d" (UID: "45b8faad-ff9c-4acb-bcf3-9b1efccbce7d"). InnerVolumeSpecName "kube-api-access-29h29". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.731533 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71e0935b-e717-4e96-ae02-fb6bcb85bae5-kube-api-access-dzttm" (OuterVolumeSpecName: "kube-api-access-dzttm") pod "71e0935b-e717-4e96-ae02-fb6bcb85bae5" (UID: "71e0935b-e717-4e96-ae02-fb6bcb85bae5"). InnerVolumeSpecName "kube-api-access-dzttm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.731664 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45b8faad-ff9c-4acb-bcf3-9b1efccbce7d-logs" (OuterVolumeSpecName: "logs") pod "45b8faad-ff9c-4acb-bcf3-9b1efccbce7d" (UID: "45b8faad-ff9c-4acb-bcf3-9b1efccbce7d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.745833 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cc57e44-7957-4d3a-b9c9-2da622ea38a0-kube-api-access-qlm4f" (OuterVolumeSpecName: "kube-api-access-qlm4f") pod "4cc57e44-7957-4d3a-b9c9-2da622ea38a0" (UID: "4cc57e44-7957-4d3a-b9c9-2da622ea38a0"). InnerVolumeSpecName "kube-api-access-qlm4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.747728 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45b8faad-ff9c-4acb-bcf3-9b1efccbce7d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "45b8faad-ff9c-4acb-bcf3-9b1efccbce7d" (UID: "45b8faad-ff9c-4acb-bcf3-9b1efccbce7d"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.764575 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cc57e44-7957-4d3a-b9c9-2da622ea38a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4cc57e44-7957-4d3a-b9c9-2da622ea38a0" (UID: "4cc57e44-7957-4d3a-b9c9-2da622ea38a0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.770561 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cc57e44-7957-4d3a-b9c9-2da622ea38a0-config" (OuterVolumeSpecName: "config") pod "4cc57e44-7957-4d3a-b9c9-2da622ea38a0" (UID: "4cc57e44-7957-4d3a-b9c9-2da622ea38a0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.777903 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71e0935b-e717-4e96-ae02-fb6bcb85bae5-config" (OuterVolumeSpecName: "config") pod "71e0935b-e717-4e96-ae02-fb6bcb85bae5" (UID: "71e0935b-e717-4e96-ae02-fb6bcb85bae5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.781978 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71e0935b-e717-4e96-ae02-fb6bcb85bae5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "71e0935b-e717-4e96-ae02-fb6bcb85bae5" (UID: "71e0935b-e717-4e96-ae02-fb6bcb85bae5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.795027 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71e0935b-e717-4e96-ae02-fb6bcb85bae5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "71e0935b-e717-4e96-ae02-fb6bcb85bae5" (UID: "71e0935b-e717-4e96-ae02-fb6bcb85bae5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.796371 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71e0935b-e717-4e96-ae02-fb6bcb85bae5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "71e0935b-e717-4e96-ae02-fb6bcb85bae5" (UID: "71e0935b-e717-4e96-ae02-fb6bcb85bae5"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.823901 4681 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/45b8faad-ff9c-4acb-bcf3-9b1efccbce7d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.823929 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cc57e44-7957-4d3a-b9c9-2da622ea38a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.823940 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlm4f\" (UniqueName: \"kubernetes.io/projected/4cc57e44-7957-4d3a-b9c9-2da622ea38a0-kube-api-access-qlm4f\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.823955 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45b8faad-ff9c-4acb-bcf3-9b1efccbce7d-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.823965 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/45b8faad-ff9c-4acb-bcf3-9b1efccbce7d-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.823975 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29h29\" (UniqueName: \"kubernetes.io/projected/45b8faad-ff9c-4acb-bcf3-9b1efccbce7d-kube-api-access-29h29\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.823985 4681 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45b8faad-ff9c-4acb-bcf3-9b1efccbce7d-logs\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.823995 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzttm\" (UniqueName: \"kubernetes.io/projected/71e0935b-e717-4e96-ae02-fb6bcb85bae5-kube-api-access-dzttm\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.824008 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71e0935b-e717-4e96-ae02-fb6bcb85bae5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.824017 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4cc57e44-7957-4d3a-b9c9-2da622ea38a0-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.824026 4681 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71e0935b-e717-4e96-ae02-fb6bcb85bae5-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.824036 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71e0935b-e717-4e96-ae02-fb6bcb85bae5-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.824045 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71e0935b-e717-4e96-ae02-fb6bcb85bae5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 
06:59:21.857851 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dfd8c6765-5kmzt" event={"ID":"71e0935b-e717-4e96-ae02-fb6bcb85bae5","Type":"ContainerDied","Data":"10e7fab16dc9034de1c0e893529aad3e4f725fc17d9a2753bb10d396011ee7ff"} Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.857879 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dfd8c6765-5kmzt" Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.860186 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-xbhpv" event={"ID":"4cc57e44-7957-4d3a-b9c9-2da622ea38a0","Type":"ContainerDied","Data":"0d3c7b5fb4cdb8cd50b69b165da0743b00612c930e385e0aac977a4dd13367d3"} Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.860238 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d3c7b5fb4cdb8cd50b69b165da0743b00612c930e385e0aac977a4dd13367d3" Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.860275 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-xbhpv" Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.862112 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6994f59557-zb5qf" event={"ID":"45b8faad-ff9c-4acb-bcf3-9b1efccbce7d","Type":"ContainerDied","Data":"4eb84936d1e1012d016667b08a4189250a57ffb6a3d41449279ef1985c71dbb5"} Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.862178 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6994f59557-zb5qf" Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.908293 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7dfd8c6765-5kmzt"] Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.920419 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7dfd8c6765-5kmzt"] Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.942638 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6994f59557-zb5qf"] Nov 23 06:59:21 crc kubenswrapper[4681]: I1123 06:59:21.951220 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6994f59557-zb5qf"] Nov 23 06:59:22 crc kubenswrapper[4681]: E1123 06:59:22.051038 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-barbican-api:8e43c662a6abf8c9a07ada252f8dc6af" Nov 23 06:59:22 crc kubenswrapper[4681]: E1123 06:59:22.051697 4681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-barbican-api:8e43c662a6abf8c9a07ada252f8dc6af" Nov 23 06:59:22 crc kubenswrapper[4681]: E1123 06:59:22.052132 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-barbican-api:8e43c662a6abf8c9a07ada252f8dc6af,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jjdl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-frn6w_openstack(95e9b025-0fa7-4a41-a18c-e4f078b82c43): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 06:59:22 crc kubenswrapper[4681]: E1123 06:59:22.053351 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-frn6w" podUID="95e9b025-0fa7-4a41-a18c-e4f078b82c43" Nov 23 06:59:22 crc kubenswrapper[4681]: I1123 06:59:22.752846 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7ccdb5d4d7-892kp"] Nov 23 06:59:22 crc kubenswrapper[4681]: E1123 06:59:22.753143 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71e0935b-e717-4e96-ae02-fb6bcb85bae5" containerName="dnsmasq-dns" Nov 23 06:59:22 crc kubenswrapper[4681]: I1123 06:59:22.753156 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="71e0935b-e717-4e96-ae02-fb6bcb85bae5" containerName="dnsmasq-dns" Nov 23 06:59:22 crc kubenswrapper[4681]: E1123 06:59:22.753180 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71e0935b-e717-4e96-ae02-fb6bcb85bae5" containerName="init" Nov 23 06:59:22 crc kubenswrapper[4681]: I1123 06:59:22.753186 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="71e0935b-e717-4e96-ae02-fb6bcb85bae5" containerName="init" Nov 23 06:59:22 crc kubenswrapper[4681]: E1123 06:59:22.753197 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cc57e44-7957-4d3a-b9c9-2da622ea38a0" containerName="neutron-db-sync" Nov 23 06:59:22 crc kubenswrapper[4681]: I1123 06:59:22.753204 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cc57e44-7957-4d3a-b9c9-2da622ea38a0" containerName="neutron-db-sync" Nov 23 06:59:22 crc kubenswrapper[4681]: I1123 06:59:22.753375 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="71e0935b-e717-4e96-ae02-fb6bcb85bae5" containerName="dnsmasq-dns" Nov 23 
06:59:22 crc kubenswrapper[4681]: I1123 06:59:22.753390 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cc57e44-7957-4d3a-b9c9-2da622ea38a0" containerName="neutron-db-sync" Nov 23 06:59:22 crc kubenswrapper[4681]: I1123 06:59:22.754248 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ccdb5d4d7-892kp" Nov 23 06:59:22 crc kubenswrapper[4681]: I1123 06:59:22.779799 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ccdb5d4d7-892kp"] Nov 23 06:59:22 crc kubenswrapper[4681]: I1123 06:59:22.810745 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-759dcb765b-std9h"] Nov 23 06:59:22 crc kubenswrapper[4681]: I1123 06:59:22.812262 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-759dcb765b-std9h" Nov 23 06:59:22 crc kubenswrapper[4681]: I1123 06:59:22.817421 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 23 06:59:22 crc kubenswrapper[4681]: I1123 06:59:22.817492 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-sc6p8" Nov 23 06:59:22 crc kubenswrapper[4681]: I1123 06:59:22.820497 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 23 06:59:22 crc kubenswrapper[4681]: I1123 06:59:22.820805 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 23 06:59:22 crc kubenswrapper[4681]: I1123 06:59:22.853568 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-759dcb765b-std9h"] Nov 23 06:59:22 crc kubenswrapper[4681]: E1123 06:59:22.882442 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-barbican-api:8e43c662a6abf8c9a07ada252f8dc6af\\\"\"" pod="openstack/barbican-db-sync-frn6w" podUID="95e9b025-0fa7-4a41-a18c-e4f078b82c43" Nov 23 06:59:22 crc kubenswrapper[4681]: I1123 06:59:22.970099 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce77efdd-12fa-4f7c-9268-05c1634d7da3-ovsdbserver-sb\") pod \"dnsmasq-dns-7ccdb5d4d7-892kp\" (UID: \"ce77efdd-12fa-4f7c-9268-05c1634d7da3\") " pod="openstack/dnsmasq-dns-7ccdb5d4d7-892kp" Nov 23 06:59:22 crc kubenswrapper[4681]: I1123 06:59:22.970252 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxdnh\" (UniqueName: \"kubernetes.io/projected/abe896c0-87f4-4c4c-b23a-81a10a557aed-kube-api-access-vxdnh\") pod \"neutron-759dcb765b-std9h\" (UID: \"abe896c0-87f4-4c4c-b23a-81a10a557aed\") " pod="openstack/neutron-759dcb765b-std9h" Nov 23 06:59:22 crc kubenswrapper[4681]: I1123 06:59:22.970372 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/abe896c0-87f4-4c4c-b23a-81a10a557aed-config\") pod \"neutron-759dcb765b-std9h\" (UID: \"abe896c0-87f4-4c4c-b23a-81a10a557aed\") " pod="openstack/neutron-759dcb765b-std9h" Nov 23 06:59:22 crc kubenswrapper[4681]: I1123 06:59:22.970522 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/abe896c0-87f4-4c4c-b23a-81a10a557aed-ovndb-tls-certs\") pod \"neutron-759dcb765b-std9h\" (UID: \"abe896c0-87f4-4c4c-b23a-81a10a557aed\") " pod="openstack/neutron-759dcb765b-std9h" Nov 23 06:59:22 crc kubenswrapper[4681]: I1123 06:59:22.970614 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce77efdd-12fa-4f7c-9268-05c1634d7da3-dns-swift-storage-0\") pod \"dnsmasq-dns-7ccdb5d4d7-892kp\" (UID: \"ce77efdd-12fa-4f7c-9268-05c1634d7da3\") " pod="openstack/dnsmasq-dns-7ccdb5d4d7-892kp" Nov 23 06:59:22 crc kubenswrapper[4681]: I1123 06:59:22.970704 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce77efdd-12fa-4f7c-9268-05c1634d7da3-dns-svc\") pod \"dnsmasq-dns-7ccdb5d4d7-892kp\" (UID: \"ce77efdd-12fa-4f7c-9268-05c1634d7da3\") " pod="openstack/dnsmasq-dns-7ccdb5d4d7-892kp" Nov 23 06:59:22 crc kubenswrapper[4681]: I1123 06:59:22.970776 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe896c0-87f4-4c4c-b23a-81a10a557aed-combined-ca-bundle\") pod \"neutron-759dcb765b-std9h\" (UID: \"abe896c0-87f4-4c4c-b23a-81a10a557aed\") " pod="openstack/neutron-759dcb765b-std9h" Nov 23 06:59:22 crc kubenswrapper[4681]: I1123 06:59:22.970854 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce77efdd-12fa-4f7c-9268-05c1634d7da3-ovsdbserver-nb\") pod \"dnsmasq-dns-7ccdb5d4d7-892kp\" (UID: \"ce77efdd-12fa-4f7c-9268-05c1634d7da3\") " pod="openstack/dnsmasq-dns-7ccdb5d4d7-892kp" Nov 23 06:59:22 crc kubenswrapper[4681]: I1123 06:59:22.970931 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/abe896c0-87f4-4c4c-b23a-81a10a557aed-httpd-config\") pod \"neutron-759dcb765b-std9h\" (UID: \"abe896c0-87f4-4c4c-b23a-81a10a557aed\") " pod="openstack/neutron-759dcb765b-std9h" Nov 23 06:59:22 crc kubenswrapper[4681]: I1123 06:59:22.971022 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcgnr\" (UniqueName: \"kubernetes.io/projected/ce77efdd-12fa-4f7c-9268-05c1634d7da3-kube-api-access-tcgnr\") pod \"dnsmasq-dns-7ccdb5d4d7-892kp\" (UID: \"ce77efdd-12fa-4f7c-9268-05c1634d7da3\") " pod="openstack/dnsmasq-dns-7ccdb5d4d7-892kp" Nov 23 06:59:22 crc kubenswrapper[4681]: I1123 06:59:22.971084 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce77efdd-12fa-4f7c-9268-05c1634d7da3-config\") pod \"dnsmasq-dns-7ccdb5d4d7-892kp\" (UID: \"ce77efdd-12fa-4f7c-9268-05c1634d7da3\") " pod="openstack/dnsmasq-dns-7ccdb5d4d7-892kp" Nov 23 06:59:23 crc kubenswrapper[4681]: I1123 06:59:23.073907 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce77efdd-12fa-4f7c-9268-05c1634d7da3-ovsdbserver-sb\") pod \"dnsmasq-dns-7ccdb5d4d7-892kp\" (UID: \"ce77efdd-12fa-4f7c-9268-05c1634d7da3\") " pod="openstack/dnsmasq-dns-7ccdb5d4d7-892kp" Nov 23 06:59:23 crc kubenswrapper[4681]: I1123 06:59:23.073975 4681 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-vxdnh\" (UniqueName: \"kubernetes.io/projected/abe896c0-87f4-4c4c-b23a-81a10a557aed-kube-api-access-vxdnh\") pod \"neutron-759dcb765b-std9h\" (UID: \"abe896c0-87f4-4c4c-b23a-81a10a557aed\") " pod="openstack/neutron-759dcb765b-std9h" Nov 23 06:59:23 crc kubenswrapper[4681]: I1123 06:59:23.074015 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/abe896c0-87f4-4c4c-b23a-81a10a557aed-config\") pod \"neutron-759dcb765b-std9h\" (UID: \"abe896c0-87f4-4c4c-b23a-81a10a557aed\") " pod="openstack/neutron-759dcb765b-std9h" Nov 23 06:59:23 crc kubenswrapper[4681]: I1123 06:59:23.074113 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/abe896c0-87f4-4c4c-b23a-81a10a557aed-ovndb-tls-certs\") pod \"neutron-759dcb765b-std9h\" (UID: \"abe896c0-87f4-4c4c-b23a-81a10a557aed\") " pod="openstack/neutron-759dcb765b-std9h" Nov 23 06:59:23 crc kubenswrapper[4681]: I1123 06:59:23.074140 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce77efdd-12fa-4f7c-9268-05c1634d7da3-dns-swift-storage-0\") pod \"dnsmasq-dns-7ccdb5d4d7-892kp\" (UID: \"ce77efdd-12fa-4f7c-9268-05c1634d7da3\") " pod="openstack/dnsmasq-dns-7ccdb5d4d7-892kp" Nov 23 06:59:23 crc kubenswrapper[4681]: I1123 06:59:23.074173 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce77efdd-12fa-4f7c-9268-05c1634d7da3-dns-svc\") pod \"dnsmasq-dns-7ccdb5d4d7-892kp\" (UID: \"ce77efdd-12fa-4f7c-9268-05c1634d7da3\") " pod="openstack/dnsmasq-dns-7ccdb5d4d7-892kp" Nov 23 06:59:23 crc kubenswrapper[4681]: I1123 06:59:23.074193 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe896c0-87f4-4c4c-b23a-81a10a557aed-combined-ca-bundle\") pod \"neutron-759dcb765b-std9h\" (UID: \"abe896c0-87f4-4c4c-b23a-81a10a557aed\") " pod="openstack/neutron-759dcb765b-std9h" Nov 23 06:59:23 crc kubenswrapper[4681]: I1123 06:59:23.074221 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce77efdd-12fa-4f7c-9268-05c1634d7da3-ovsdbserver-nb\") pod \"dnsmasq-dns-7ccdb5d4d7-892kp\" (UID: \"ce77efdd-12fa-4f7c-9268-05c1634d7da3\") " pod="openstack/dnsmasq-dns-7ccdb5d4d7-892kp" Nov 23 06:59:23 crc kubenswrapper[4681]: I1123 06:59:23.074256 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/abe896c0-87f4-4c4c-b23a-81a10a557aed-httpd-config\") pod \"neutron-759dcb765b-std9h\" (UID: \"abe896c0-87f4-4c4c-b23a-81a10a557aed\") " pod="openstack/neutron-759dcb765b-std9h" Nov 23 06:59:23 crc kubenswrapper[4681]: I1123 06:59:23.074318 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcgnr\" (UniqueName: \"kubernetes.io/projected/ce77efdd-12fa-4f7c-9268-05c1634d7da3-kube-api-access-tcgnr\") pod \"dnsmasq-dns-7ccdb5d4d7-892kp\" (UID: \"ce77efdd-12fa-4f7c-9268-05c1634d7da3\") " pod="openstack/dnsmasq-dns-7ccdb5d4d7-892kp" Nov 23 06:59:23 crc kubenswrapper[4681]: I1123 06:59:23.074341 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ce77efdd-12fa-4f7c-9268-05c1634d7da3-config\") pod \"dnsmasq-dns-7ccdb5d4d7-892kp\" (UID: \"ce77efdd-12fa-4f7c-9268-05c1634d7da3\") " pod="openstack/dnsmasq-dns-7ccdb5d4d7-892kp" Nov 23 06:59:23 crc kubenswrapper[4681]: I1123 06:59:23.075723 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce77efdd-12fa-4f7c-9268-05c1634d7da3-config\") pod \"dnsmasq-dns-7ccdb5d4d7-892kp\" (UID: \"ce77efdd-12fa-4f7c-9268-05c1634d7da3\") " pod="openstack/dnsmasq-dns-7ccdb5d4d7-892kp" Nov 23 06:59:23 crc kubenswrapper[4681]: I1123 06:59:23.078324 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce77efdd-12fa-4f7c-9268-05c1634d7da3-ovsdbserver-nb\") pod \"dnsmasq-dns-7ccdb5d4d7-892kp\" (UID: \"ce77efdd-12fa-4f7c-9268-05c1634d7da3\") " pod="openstack/dnsmasq-dns-7ccdb5d4d7-892kp" Nov 23 06:59:23 crc kubenswrapper[4681]: I1123 06:59:23.079195 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce77efdd-12fa-4f7c-9268-05c1634d7da3-dns-swift-storage-0\") pod \"dnsmasq-dns-7ccdb5d4d7-892kp\" (UID: \"ce77efdd-12fa-4f7c-9268-05c1634d7da3\") " pod="openstack/dnsmasq-dns-7ccdb5d4d7-892kp" Nov 23 06:59:23 crc kubenswrapper[4681]: I1123 06:59:23.080586 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce77efdd-12fa-4f7c-9268-05c1634d7da3-dns-svc\") pod \"dnsmasq-dns-7ccdb5d4d7-892kp\" (UID: \"ce77efdd-12fa-4f7c-9268-05c1634d7da3\") " pod="openstack/dnsmasq-dns-7ccdb5d4d7-892kp" Nov 23 06:59:23 crc kubenswrapper[4681]: I1123 06:59:23.085129 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce77efdd-12fa-4f7c-9268-05c1634d7da3-ovsdbserver-sb\") pod \"dnsmasq-dns-7ccdb5d4d7-892kp\" (UID: \"ce77efdd-12fa-4f7c-9268-05c1634d7da3\") " pod="openstack/dnsmasq-dns-7ccdb5d4d7-892kp" Nov 23 06:59:23 crc kubenswrapper[4681]: I1123 06:59:23.096611 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/abe896c0-87f4-4c4c-b23a-81a10a557aed-ovndb-tls-certs\") pod \"neutron-759dcb765b-std9h\" (UID: \"abe896c0-87f4-4c4c-b23a-81a10a557aed\") " pod="openstack/neutron-759dcb765b-std9h" Nov 23 06:59:23 crc kubenswrapper[4681]: I1123 06:59:23.097072 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/abe896c0-87f4-4c4c-b23a-81a10a557aed-httpd-config\") pod \"neutron-759dcb765b-std9h\" (UID: \"abe896c0-87f4-4c4c-b23a-81a10a557aed\") " pod="openstack/neutron-759dcb765b-std9h" Nov 23 06:59:23 crc kubenswrapper[4681]: I1123 06:59:23.097735 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxdnh\" (UniqueName: \"kubernetes.io/projected/abe896c0-87f4-4c4c-b23a-81a10a557aed-kube-api-access-vxdnh\") pod \"neutron-759dcb765b-std9h\" (UID: \"abe896c0-87f4-4c4c-b23a-81a10a557aed\") " pod="openstack/neutron-759dcb765b-std9h" Nov 23 06:59:23 crc kubenswrapper[4681]: I1123 06:59:23.098872 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/abe896c0-87f4-4c4c-b23a-81a10a557aed-config\") pod \"neutron-759dcb765b-std9h\" (UID: \"abe896c0-87f4-4c4c-b23a-81a10a557aed\") " 
pod="openstack/neutron-759dcb765b-std9h" Nov 23 06:59:23 crc kubenswrapper[4681]: I1123 06:59:23.101642 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe896c0-87f4-4c4c-b23a-81a10a557aed-combined-ca-bundle\") pod \"neutron-759dcb765b-std9h\" (UID: \"abe896c0-87f4-4c4c-b23a-81a10a557aed\") " pod="openstack/neutron-759dcb765b-std9h" Nov 23 06:59:23 crc kubenswrapper[4681]: I1123 06:59:23.105574 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcgnr\" (UniqueName: \"kubernetes.io/projected/ce77efdd-12fa-4f7c-9268-05c1634d7da3-kube-api-access-tcgnr\") pod \"dnsmasq-dns-7ccdb5d4d7-892kp\" (UID: \"ce77efdd-12fa-4f7c-9268-05c1634d7da3\") " pod="openstack/dnsmasq-dns-7ccdb5d4d7-892kp" Nov 23 06:59:23 crc kubenswrapper[4681]: I1123 06:59:23.133093 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-759dcb765b-std9h" Nov 23 06:59:23 crc kubenswrapper[4681]: I1123 06:59:23.288614 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45b8faad-ff9c-4acb-bcf3-9b1efccbce7d" path="/var/lib/kubelet/pods/45b8faad-ff9c-4acb-bcf3-9b1efccbce7d/volumes" Nov 23 06:59:23 crc kubenswrapper[4681]: I1123 06:59:23.292807 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71e0935b-e717-4e96-ae02-fb6bcb85bae5" path="/var/lib/kubelet/pods/71e0935b-e717-4e96-ae02-fb6bcb85bae5/volumes" Nov 23 06:59:23 crc kubenswrapper[4681]: I1123 06:59:23.380738 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ccdb5d4d7-892kp" Nov 23 06:59:23 crc kubenswrapper[4681]: E1123 06:59:23.594537 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:8e43c662a6abf8c9a07ada252f8dc6af" Nov 23 06:59:23 crc kubenswrapper[4681]: E1123 06:59:23.594879 4681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:8e43c662a6abf8c9a07ada252f8dc6af" Nov 23 06:59:23 crc kubenswrapper[4681]: E1123 06:59:23.595028 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:8e43c662a6abf8c9a07ada252f8dc6af,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fwbq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-4gs5w_openstack(d426ed81-18f9-441e-9865-b9a6d683931f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 06:59:23 crc kubenswrapper[4681]: E1123 06:59:23.596426 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-4gs5w" podUID="d426ed81-18f9-441e-9865-b9a6d683931f" Nov 23 06:59:23 crc kubenswrapper[4681]: I1123 06:59:23.615230 4681 scope.go:117] "RemoveContainer" containerID="54b063d5687b2b79c2d3555edf1df8d56054061b0344cad12fc8e7d08350a575" Nov 23 06:59:23 crc kubenswrapper[4681]: I1123 06:59:23.764726 4681 scope.go:117] "RemoveContainer" containerID="2f26d9c0bf47228b1160e9514be0c5bb88e90e0ed12d162c4c7f5b4f7eece67a" Nov 23 06:59:23 crc kubenswrapper[4681]: E1123 06:59:23.931883 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:8e43c662a6abf8c9a07ada252f8dc6af\\\"\"" pod="openstack/cinder-db-sync-4gs5w" podUID="d426ed81-18f9-441e-9865-b9a6d683931f" Nov 23 06:59:24 crc 
kubenswrapper[4681]: I1123 06:59:24.060165 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-b8mwm"] Nov 23 06:59:24 crc kubenswrapper[4681]: I1123 06:59:24.074662 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 23 06:59:24 crc kubenswrapper[4681]: I1123 06:59:24.140017 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7dfd8c6765-5kmzt" podUID="71e0935b-e717-4e96-ae02-fb6bcb85bae5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: i/o timeout" Nov 23 06:59:24 crc kubenswrapper[4681]: I1123 06:59:24.341805 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 06:59:24 crc kubenswrapper[4681]: W1123 06:59:24.342190 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdfa433c_2b77_4373_877f_5c92a2b39fb8.slice/crio-7767515e364cce4c198729bb82bba7bef4c60dc5491cc39596e1a280f009f237 WatchSource:0}: Error finding container 7767515e364cce4c198729bb82bba7bef4c60dc5491cc39596e1a280f009f237: Status 404 returned error can't find the container with id 7767515e364cce4c198729bb82bba7bef4c60dc5491cc39596e1a280f009f237 Nov 23 06:59:24 crc kubenswrapper[4681]: I1123 06:59:24.349146 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7c48d564b8-5tf9h"] Nov 23 06:59:24 crc kubenswrapper[4681]: W1123 06:59:24.350004 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc8fc37d_ed28_4d24_8d09_aa94e5f7eaa0.slice/crio-5158f2960a3633f8ab8ce21ee98e2e3b17f9b7a3902ab303e14f1c3a679d6b9e WatchSource:0}: Error finding container 5158f2960a3633f8ab8ce21ee98e2e3b17f9b7a3902ab303e14f1c3a679d6b9e: Status 404 returned error can't find the container with id 5158f2960a3633f8ab8ce21ee98e2e3b17f9b7a3902ab303e14f1c3a679d6b9e Nov 23 06:59:24 crc kubenswrapper[4681]: W1123 06:59:24.352879 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21819725_3a3a_448c_8bda_e78701b78360.slice/crio-9bd64d993c1959bd532de5ce79ec6ecd4e771d56ba852e7dc4478ed5ae91185a WatchSource:0}: Error finding container 9bd64d993c1959bd532de5ce79ec6ecd4e771d56ba852e7dc4478ed5ae91185a: Status 404 returned error can't find the container with id 9bd64d993c1959bd532de5ce79ec6ecd4e771d56ba852e7dc4478ed5ae91185a Nov 23 06:59:24 crc kubenswrapper[4681]: I1123 06:59:24.360672 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-fcdb4576d-g8stp"] Nov 23 06:59:24 crc kubenswrapper[4681]: I1123 06:59:24.509084 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ccdb5d4d7-892kp"] Nov 23 06:59:24 crc kubenswrapper[4681]: I1123 06:59:24.656621 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-759dcb765b-std9h"] Nov 23 06:59:24 crc kubenswrapper[4681]: I1123 06:59:24.923388 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c5444c6b5-7cd6d" event={"ID":"203e0f9e-791d-4b8e-9521-b7b334fcacf6","Type":"ContainerStarted","Data":"fd0a88d8aa81cd1911a38df63de2d67edad47e7de3b17fb3653538d94febcd1a"} Nov 23 06:59:24 crc kubenswrapper[4681]: I1123 06:59:24.923684 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c5444c6b5-7cd6d" 
event={"ID":"203e0f9e-791d-4b8e-9521-b7b334fcacf6","Type":"ContainerStarted","Data":"8b83e6b57ed80aa6780ce5641bbb95a07b50733d3dccf25e7ab868a5610bfc13"} Nov 23 06:59:24 crc kubenswrapper[4681]: I1123 06:59:24.923820 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6c5444c6b5-7cd6d" podUID="203e0f9e-791d-4b8e-9521-b7b334fcacf6" containerName="horizon-log" containerID="cri-o://8b83e6b57ed80aa6780ce5641bbb95a07b50733d3dccf25e7ab868a5610bfc13" gracePeriod=30 Nov 23 06:59:24 crc kubenswrapper[4681]: I1123 06:59:24.924052 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6c5444c6b5-7cd6d" podUID="203e0f9e-791d-4b8e-9521-b7b334fcacf6" containerName="horizon" containerID="cri-o://fd0a88d8aa81cd1911a38df63de2d67edad47e7de3b17fb3653538d94febcd1a" gracePeriod=30 Nov 23 06:59:24 crc kubenswrapper[4681]: I1123 06:59:24.934399 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fcdb4576d-g8stp" event={"ID":"bdfa433c-2b77-4373-877f-5c92a2b39fb8","Type":"ContainerStarted","Data":"70c3f1fd8ade5dd0264972c3cd32a331396650c68438e92330d0aaa56b157fa8"} Nov 23 06:59:24 crc kubenswrapper[4681]: I1123 06:59:24.934687 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fcdb4576d-g8stp" event={"ID":"bdfa433c-2b77-4373-877f-5c92a2b39fb8","Type":"ContainerStarted","Data":"7767515e364cce4c198729bb82bba7bef4c60dc5491cc39596e1a280f009f237"} Nov 23 06:59:24 crc kubenswrapper[4681]: I1123 06:59:24.951568 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6c5444c6b5-7cd6d" podStartSLOduration=3.410217388 podStartE2EDuration="35.951547119s" podCreationTimestamp="2025-11-23 06:58:49 +0000 UTC" firstStartedPulling="2025-11-23 06:58:51.022579656 +0000 UTC m=+868.092088893" lastFinishedPulling="2025-11-23 06:59:23.563909386 +0000 UTC m=+900.633418624" observedRunningTime="2025-11-23 06:59:24.944884436 +0000 UTC m=+902.014393672" watchObservedRunningTime="2025-11-23 06:59:24.951547119 +0000 UTC m=+902.021056357" Nov 23 06:59:24 crc kubenswrapper[4681]: I1123 06:59:24.959305 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-759dcb765b-std9h" event={"ID":"abe896c0-87f4-4c4c-b23a-81a10a557aed","Type":"ContainerStarted","Data":"836e9d59fe78695ab2f6efe33e5045d1483ae7f356ffc0836841859f7044a265"} Nov 23 06:59:24 crc kubenswrapper[4681]: I1123 06:59:24.962597 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2483649a-baa7-4c82-92d5-b3e2aff97ab2","Type":"ContainerStarted","Data":"7bf62d391c99d2c553a79853ac349df2afdefe4ce3af717f8c6fe444384be9ec"} Nov 23 06:59:24 crc kubenswrapper[4681]: I1123 06:59:24.974626 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1a778946-8c19-4d5e-9071-d754b449dccc","Type":"ContainerStarted","Data":"9cdf48d0b3f3f7473153d5da588de2ad1d0098905c2af0c3cb4ea9ee6114e691"} Nov 23 06:59:24 crc kubenswrapper[4681]: I1123 06:59:24.977836 4681 generic.go:334] "Generic (PLEG): container finished" podID="ce77efdd-12fa-4f7c-9268-05c1634d7da3" containerID="8541794a357af6ef37dd29f8d7f2b1469bcb33969532de1eae2fdcae8d1fe8ef" exitCode=0 Nov 23 06:59:24 crc kubenswrapper[4681]: I1123 06:59:24.977885 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ccdb5d4d7-892kp" 
event={"ID":"ce77efdd-12fa-4f7c-9268-05c1634d7da3","Type":"ContainerDied","Data":"8541794a357af6ef37dd29f8d7f2b1469bcb33969532de1eae2fdcae8d1fe8ef"} Nov 23 06:59:24 crc kubenswrapper[4681]: I1123 06:59:24.977902 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ccdb5d4d7-892kp" event={"ID":"ce77efdd-12fa-4f7c-9268-05c1634d7da3","Type":"ContainerStarted","Data":"6068fbdae34c136b157e88273284f026770f032e0ee0dff0e2204c2d34b94d54"} Nov 23 06:59:24 crc kubenswrapper[4681]: I1123 06:59:24.991785 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c48d564b8-5tf9h" event={"ID":"21819725-3a3a-448c-8bda-e78701b78360","Type":"ContainerStarted","Data":"31c36592291e4d69d502aece2f0eb1b359b46e5ebc3744ea86b0b18dcdc77903"} Nov 23 06:59:24 crc kubenswrapper[4681]: I1123 06:59:24.991814 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c48d564b8-5tf9h" event={"ID":"21819725-3a3a-448c-8bda-e78701b78360","Type":"ContainerStarted","Data":"9bd64d993c1959bd532de5ce79ec6ecd4e771d56ba852e7dc4478ed5ae91185a"} Nov 23 06:59:24 crc kubenswrapper[4681]: I1123 06:59:24.994988 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b8mwm" event={"ID":"5dd5ce32-831b-448a-943f-7e3250ca172b","Type":"ContainerStarted","Data":"fbfbecec9249e290de376cecaf8ce397d63bedb12a63815fac8bc51df3bfbd1f"} Nov 23 06:59:24 crc kubenswrapper[4681]: I1123 06:59:24.995028 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b8mwm" event={"ID":"5dd5ce32-831b-448a-943f-7e3250ca172b","Type":"ContainerStarted","Data":"865e0b92fed9126dd7c914380d3e1401bbe91a07a8233d337e5b808de2ace840"} Nov 23 06:59:25 crc kubenswrapper[4681]: I1123 06:59:25.014588 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-845ccd5479-79qz5" event={"ID":"2f95ab62-e0ad-4566-bbfd-29e2ad374edf","Type":"ContainerStarted","Data":"da1db64be7e782ae46e4c5e141005925fec41eddf0809a920f923eafae375c41"} Nov 23 06:59:25 crc kubenswrapper[4681]: I1123 06:59:25.014627 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-845ccd5479-79qz5" event={"ID":"2f95ab62-e0ad-4566-bbfd-29e2ad374edf","Type":"ContainerStarted","Data":"405fcd023d71625841d6784694bca0ce578e0b3ccf94cb59330d90d166178b17"} Nov 23 06:59:25 crc kubenswrapper[4681]: I1123 06:59:25.014737 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-845ccd5479-79qz5" podUID="2f95ab62-e0ad-4566-bbfd-29e2ad374edf" containerName="horizon-log" containerID="cri-o://405fcd023d71625841d6784694bca0ce578e0b3ccf94cb59330d90d166178b17" gracePeriod=30 Nov 23 06:59:25 crc kubenswrapper[4681]: I1123 06:59:25.014805 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-845ccd5479-79qz5" podUID="2f95ab62-e0ad-4566-bbfd-29e2ad374edf" containerName="horizon" containerID="cri-o://da1db64be7e782ae46e4c5e141005925fec41eddf0809a920f923eafae375c41" gracePeriod=30 Nov 23 06:59:25 crc kubenswrapper[4681]: I1123 06:59:25.017581 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0","Type":"ContainerStarted","Data":"5158f2960a3633f8ab8ce21ee98e2e3b17f9b7a3902ab303e14f1c3a679d6b9e"} Nov 23 06:59:25 crc kubenswrapper[4681]: I1123 06:59:25.030757 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-b8mwm" 
podStartSLOduration=28.030742135 podStartE2EDuration="28.030742135s" podCreationTimestamp="2025-11-23 06:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:59:25.025143466 +0000 UTC m=+902.094652702" watchObservedRunningTime="2025-11-23 06:59:25.030742135 +0000 UTC m=+902.100251371" Nov 23 06:59:25 crc kubenswrapper[4681]: I1123 06:59:25.050360 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-845ccd5479-79qz5" podStartSLOduration=3.367944229 podStartE2EDuration="34.050342487s" podCreationTimestamp="2025-11-23 06:58:51 +0000 UTC" firstStartedPulling="2025-11-23 06:58:52.986487963 +0000 UTC m=+870.055997200" lastFinishedPulling="2025-11-23 06:59:23.668886222 +0000 UTC m=+900.738395458" observedRunningTime="2025-11-23 06:59:25.04820993 +0000 UTC m=+902.117719166" watchObservedRunningTime="2025-11-23 06:59:25.050342487 +0000 UTC m=+902.119851724" Nov 23 06:59:25 crc kubenswrapper[4681]: I1123 06:59:25.716916 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7dd5999bb7-tlr49"] Nov 23 06:59:25 crc kubenswrapper[4681]: I1123 06:59:25.718762 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7dd5999bb7-tlr49" Nov 23 06:59:25 crc kubenswrapper[4681]: I1123 06:59:25.726010 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 23 06:59:25 crc kubenswrapper[4681]: I1123 06:59:25.728476 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 23 06:59:25 crc kubenswrapper[4681]: I1123 06:59:25.744043 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7dd5999bb7-tlr49"] Nov 23 06:59:25 crc kubenswrapper[4681]: I1123 06:59:25.869960 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32e94a1b-a08e-4fa2-ae50-f74e280addff-combined-ca-bundle\") pod \"neutron-7dd5999bb7-tlr49\" (UID: \"32e94a1b-a08e-4fa2-ae50-f74e280addff\") " pod="openstack/neutron-7dd5999bb7-tlr49" Nov 23 06:59:25 crc kubenswrapper[4681]: I1123 06:59:25.870015 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/32e94a1b-a08e-4fa2-ae50-f74e280addff-config\") pod \"neutron-7dd5999bb7-tlr49\" (UID: \"32e94a1b-a08e-4fa2-ae50-f74e280addff\") " pod="openstack/neutron-7dd5999bb7-tlr49" Nov 23 06:59:25 crc kubenswrapper[4681]: I1123 06:59:25.870047 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nnxr\" (UniqueName: \"kubernetes.io/projected/32e94a1b-a08e-4fa2-ae50-f74e280addff-kube-api-access-4nnxr\") pod \"neutron-7dd5999bb7-tlr49\" (UID: \"32e94a1b-a08e-4fa2-ae50-f74e280addff\") " pod="openstack/neutron-7dd5999bb7-tlr49" Nov 23 06:59:25 crc kubenswrapper[4681]: I1123 06:59:25.870072 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/32e94a1b-a08e-4fa2-ae50-f74e280addff-httpd-config\") pod \"neutron-7dd5999bb7-tlr49\" (UID: \"32e94a1b-a08e-4fa2-ae50-f74e280addff\") " pod="openstack/neutron-7dd5999bb7-tlr49" Nov 23 06:59:25 crc kubenswrapper[4681]: I1123 06:59:25.870091 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/32e94a1b-a08e-4fa2-ae50-f74e280addff-ovndb-tls-certs\") pod \"neutron-7dd5999bb7-tlr49\" (UID: \"32e94a1b-a08e-4fa2-ae50-f74e280addff\") " pod="openstack/neutron-7dd5999bb7-tlr49" Nov 23 06:59:25 crc kubenswrapper[4681]: I1123 06:59:25.870207 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32e94a1b-a08e-4fa2-ae50-f74e280addff-public-tls-certs\") pod \"neutron-7dd5999bb7-tlr49\" (UID: \"32e94a1b-a08e-4fa2-ae50-f74e280addff\") " pod="openstack/neutron-7dd5999bb7-tlr49" Nov 23 06:59:25 crc kubenswrapper[4681]: I1123 06:59:25.870302 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/32e94a1b-a08e-4fa2-ae50-f74e280addff-internal-tls-certs\") pod \"neutron-7dd5999bb7-tlr49\" (UID: \"32e94a1b-a08e-4fa2-ae50-f74e280addff\") " pod="openstack/neutron-7dd5999bb7-tlr49" Nov 23 06:59:25 crc kubenswrapper[4681]: I1123 06:59:25.971410 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/32e94a1b-a08e-4fa2-ae50-f74e280addff-httpd-config\") pod \"neutron-7dd5999bb7-tlr49\" (UID: \"32e94a1b-a08e-4fa2-ae50-f74e280addff\") " pod="openstack/neutron-7dd5999bb7-tlr49" Nov 23 06:59:25 crc kubenswrapper[4681]: I1123 06:59:25.971658 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/32e94a1b-a08e-4fa2-ae50-f74e280addff-ovndb-tls-certs\") pod \"neutron-7dd5999bb7-tlr49\" (UID: \"32e94a1b-a08e-4fa2-ae50-f74e280addff\") " pod="openstack/neutron-7dd5999bb7-tlr49" Nov 23 06:59:25 crc kubenswrapper[4681]: I1123 06:59:25.971744 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32e94a1b-a08e-4fa2-ae50-f74e280addff-public-tls-certs\") pod \"neutron-7dd5999bb7-tlr49\" (UID: \"32e94a1b-a08e-4fa2-ae50-f74e280addff\") " pod="openstack/neutron-7dd5999bb7-tlr49" Nov 23 06:59:25 crc kubenswrapper[4681]: I1123 06:59:25.971775 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/32e94a1b-a08e-4fa2-ae50-f74e280addff-internal-tls-certs\") pod \"neutron-7dd5999bb7-tlr49\" (UID: \"32e94a1b-a08e-4fa2-ae50-f74e280addff\") " pod="openstack/neutron-7dd5999bb7-tlr49" Nov 23 06:59:25 crc kubenswrapper[4681]: I1123 06:59:25.971842 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32e94a1b-a08e-4fa2-ae50-f74e280addff-combined-ca-bundle\") pod \"neutron-7dd5999bb7-tlr49\" (UID: \"32e94a1b-a08e-4fa2-ae50-f74e280addff\") " pod="openstack/neutron-7dd5999bb7-tlr49" Nov 23 06:59:25 crc kubenswrapper[4681]: I1123 06:59:25.971876 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/32e94a1b-a08e-4fa2-ae50-f74e280addff-config\") pod \"neutron-7dd5999bb7-tlr49\" (UID: \"32e94a1b-a08e-4fa2-ae50-f74e280addff\") " pod="openstack/neutron-7dd5999bb7-tlr49" Nov 23 06:59:25 crc kubenswrapper[4681]: I1123 06:59:25.971902 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nnxr\" 
(UniqueName: \"kubernetes.io/projected/32e94a1b-a08e-4fa2-ae50-f74e280addff-kube-api-access-4nnxr\") pod \"neutron-7dd5999bb7-tlr49\" (UID: \"32e94a1b-a08e-4fa2-ae50-f74e280addff\") " pod="openstack/neutron-7dd5999bb7-tlr49" Nov 23 06:59:25 crc kubenswrapper[4681]: I1123 06:59:25.979234 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/32e94a1b-a08e-4fa2-ae50-f74e280addff-httpd-config\") pod \"neutron-7dd5999bb7-tlr49\" (UID: \"32e94a1b-a08e-4fa2-ae50-f74e280addff\") " pod="openstack/neutron-7dd5999bb7-tlr49" Nov 23 06:59:25 crc kubenswrapper[4681]: I1123 06:59:25.979625 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32e94a1b-a08e-4fa2-ae50-f74e280addff-combined-ca-bundle\") pod \"neutron-7dd5999bb7-tlr49\" (UID: \"32e94a1b-a08e-4fa2-ae50-f74e280addff\") " pod="openstack/neutron-7dd5999bb7-tlr49" Nov 23 06:59:25 crc kubenswrapper[4681]: I1123 06:59:25.982048 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32e94a1b-a08e-4fa2-ae50-f74e280addff-public-tls-certs\") pod \"neutron-7dd5999bb7-tlr49\" (UID: \"32e94a1b-a08e-4fa2-ae50-f74e280addff\") " pod="openstack/neutron-7dd5999bb7-tlr49" Nov 23 06:59:25 crc kubenswrapper[4681]: I1123 06:59:25.982129 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/32e94a1b-a08e-4fa2-ae50-f74e280addff-ovndb-tls-certs\") pod \"neutron-7dd5999bb7-tlr49\" (UID: \"32e94a1b-a08e-4fa2-ae50-f74e280addff\") " pod="openstack/neutron-7dd5999bb7-tlr49" Nov 23 06:59:25 crc kubenswrapper[4681]: I1123 06:59:25.982569 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/32e94a1b-a08e-4fa2-ae50-f74e280addff-internal-tls-certs\") pod \"neutron-7dd5999bb7-tlr49\" (UID: \"32e94a1b-a08e-4fa2-ae50-f74e280addff\") " pod="openstack/neutron-7dd5999bb7-tlr49" Nov 23 06:59:25 crc kubenswrapper[4681]: I1123 06:59:25.984537 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/32e94a1b-a08e-4fa2-ae50-f74e280addff-config\") pod \"neutron-7dd5999bb7-tlr49\" (UID: \"32e94a1b-a08e-4fa2-ae50-f74e280addff\") " pod="openstack/neutron-7dd5999bb7-tlr49" Nov 23 06:59:25 crc kubenswrapper[4681]: I1123 06:59:25.998693 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nnxr\" (UniqueName: \"kubernetes.io/projected/32e94a1b-a08e-4fa2-ae50-f74e280addff-kube-api-access-4nnxr\") pod \"neutron-7dd5999bb7-tlr49\" (UID: \"32e94a1b-a08e-4fa2-ae50-f74e280addff\") " pod="openstack/neutron-7dd5999bb7-tlr49" Nov 23 06:59:26 crc kubenswrapper[4681]: I1123 06:59:26.037639 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fcdb4576d-g8stp" event={"ID":"bdfa433c-2b77-4373-877f-5c92a2b39fb8","Type":"ContainerStarted","Data":"f940cdcb178170ebf29c7591f70bc1b658fd92fed2c294459eb2f16f26d69ceb"} Nov 23 06:59:26 crc kubenswrapper[4681]: I1123 06:59:26.045078 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7dd5999bb7-tlr49" Nov 23 06:59:26 crc kubenswrapper[4681]: I1123 06:59:26.045544 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-759dcb765b-std9h" event={"ID":"abe896c0-87f4-4c4c-b23a-81a10a557aed","Type":"ContainerStarted","Data":"aa66d58e3366f90d416b2c24908e0e060d82706229b4fad9d8e1cd986edae3bf"} Nov 23 06:59:26 crc kubenswrapper[4681]: I1123 06:59:26.045583 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-759dcb765b-std9h" event={"ID":"abe896c0-87f4-4c4c-b23a-81a10a557aed","Type":"ContainerStarted","Data":"7cc2ab3f82b6b7f29bfde6f35b40da9fdbb3b525f8acef809a105402bf70e395"} Nov 23 06:59:26 crc kubenswrapper[4681]: I1123 06:59:26.046070 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-759dcb765b-std9h" Nov 23 06:59:26 crc kubenswrapper[4681]: I1123 06:59:26.051451 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0","Type":"ContainerStarted","Data":"1d66757cdfe145f037ebff24dc5730a997fecc45ebc3d2c40ebf4132ec0ffc7e"} Nov 23 06:59:26 crc kubenswrapper[4681]: I1123 06:59:26.064705 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-fcdb4576d-g8stp" podStartSLOduration=28.064693334 podStartE2EDuration="28.064693334s" podCreationTimestamp="2025-11-23 06:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:59:26.057270338 +0000 UTC m=+903.126779575" watchObservedRunningTime="2025-11-23 06:59:26.064693334 +0000 UTC m=+903.134202562" Nov 23 06:59:26 crc kubenswrapper[4681]: I1123 06:59:26.069563 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1a778946-8c19-4d5e-9071-d754b449dccc","Type":"ContainerStarted","Data":"43fa7c5dfebf0547eaa47367b6db917db729da36598606dbb057c46c3ec2654d"} Nov 23 06:59:26 crc kubenswrapper[4681]: I1123 06:59:26.069810 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1a778946-8c19-4d5e-9071-d754b449dccc" containerName="glance-log" containerID="cri-o://9cdf48d0b3f3f7473153d5da588de2ad1d0098905c2af0c3cb4ea9ee6114e691" gracePeriod=30 Nov 23 06:59:26 crc kubenswrapper[4681]: I1123 06:59:26.069966 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1a778946-8c19-4d5e-9071-d754b449dccc" containerName="glance-httpd" containerID="cri-o://43fa7c5dfebf0547eaa47367b6db917db729da36598606dbb057c46c3ec2654d" gracePeriod=30 Nov 23 06:59:26 crc kubenswrapper[4681]: I1123 06:59:26.082809 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ccdb5d4d7-892kp" event={"ID":"ce77efdd-12fa-4f7c-9268-05c1634d7da3","Type":"ContainerStarted","Data":"5d3b9b18c19d40b4875a0795be196763470210354d7a0ac2916665447d7ced82"} Nov 23 06:59:26 crc kubenswrapper[4681]: I1123 06:59:26.083386 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7ccdb5d4d7-892kp" Nov 23 06:59:26 crc kubenswrapper[4681]: I1123 06:59:26.099956 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-759dcb765b-std9h" podStartSLOduration=4.099941782 podStartE2EDuration="4.099941782s" podCreationTimestamp="2025-11-23 
06:59:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:59:26.079931748 +0000 UTC m=+903.149440985" watchObservedRunningTime="2025-11-23 06:59:26.099941782 +0000 UTC m=+903.169451020" Nov 23 06:59:26 crc kubenswrapper[4681]: I1123 06:59:26.108110 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c48d564b8-5tf9h" event={"ID":"21819725-3a3a-448c-8bda-e78701b78360","Type":"ContainerStarted","Data":"f3d5a2229e581dacb0c110eea06b591475ee0f36e81c8e0364256d3b3c1f60ad"} Nov 23 06:59:26 crc kubenswrapper[4681]: I1123 06:59:26.119633 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=30.119616846 podStartE2EDuration="30.119616846s" podCreationTimestamp="2025-11-23 06:58:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:59:26.118558461 +0000 UTC m=+903.188067718" watchObservedRunningTime="2025-11-23 06:59:26.119616846 +0000 UTC m=+903.189126083" Nov 23 06:59:26 crc kubenswrapper[4681]: I1123 06:59:26.137504 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7c48d564b8-5tf9h" podStartSLOduration=28.13748797 podStartE2EDuration="28.13748797s" podCreationTimestamp="2025-11-23 06:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:59:26.135683851 +0000 UTC m=+903.205193088" watchObservedRunningTime="2025-11-23 06:59:26.13748797 +0000 UTC m=+903.206997207" Nov 23 06:59:26 crc kubenswrapper[4681]: I1123 06:59:26.228595 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7ccdb5d4d7-892kp" podStartSLOduration=4.228575725 podStartE2EDuration="4.228575725s" podCreationTimestamp="2025-11-23 06:59:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:59:26.156851506 +0000 UTC m=+903.226360743" watchObservedRunningTime="2025-11-23 06:59:26.228575725 +0000 UTC m=+903.298084962" Nov 23 06:59:26 crc kubenswrapper[4681]: I1123 06:59:26.731561 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7dd5999bb7-tlr49"] Nov 23 06:59:26 crc kubenswrapper[4681]: I1123 06:59:26.797723 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 06:59:26 crc kubenswrapper[4681]: I1123 06:59:26.910978 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"1a778946-8c19-4d5e-9071-d754b449dccc\" (UID: \"1a778946-8c19-4d5e-9071-d754b449dccc\") " Nov 23 06:59:26 crc kubenswrapper[4681]: I1123 06:59:26.911118 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xq9g\" (UniqueName: \"kubernetes.io/projected/1a778946-8c19-4d5e-9071-d754b449dccc-kube-api-access-4xq9g\") pod \"1a778946-8c19-4d5e-9071-d754b449dccc\" (UID: \"1a778946-8c19-4d5e-9071-d754b449dccc\") " Nov 23 06:59:26 crc kubenswrapper[4681]: I1123 06:59:26.911207 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a778946-8c19-4d5e-9071-d754b449dccc-scripts\") pod \"1a778946-8c19-4d5e-9071-d754b449dccc\" (UID: \"1a778946-8c19-4d5e-9071-d754b449dccc\") " Nov 23 06:59:26 crc kubenswrapper[4681]: I1123 06:59:26.911339 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1a778946-8c19-4d5e-9071-d754b449dccc-httpd-run\") pod \"1a778946-8c19-4d5e-9071-d754b449dccc\" (UID: \"1a778946-8c19-4d5e-9071-d754b449dccc\") " Nov 23 06:59:26 crc kubenswrapper[4681]: I1123 06:59:26.911416 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a778946-8c19-4d5e-9071-d754b449dccc-config-data\") pod \"1a778946-8c19-4d5e-9071-d754b449dccc\" (UID: \"1a778946-8c19-4d5e-9071-d754b449dccc\") " Nov 23 06:59:26 crc kubenswrapper[4681]: I1123 06:59:26.911576 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a778946-8c19-4d5e-9071-d754b449dccc-logs\") pod \"1a778946-8c19-4d5e-9071-d754b449dccc\" (UID: \"1a778946-8c19-4d5e-9071-d754b449dccc\") " Nov 23 06:59:26 crc kubenswrapper[4681]: I1123 06:59:26.911651 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a778946-8c19-4d5e-9071-d754b449dccc-public-tls-certs\") pod \"1a778946-8c19-4d5e-9071-d754b449dccc\" (UID: \"1a778946-8c19-4d5e-9071-d754b449dccc\") " Nov 23 06:59:26 crc kubenswrapper[4681]: I1123 06:59:26.911823 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a778946-8c19-4d5e-9071-d754b449dccc-combined-ca-bundle\") pod \"1a778946-8c19-4d5e-9071-d754b449dccc\" (UID: \"1a778946-8c19-4d5e-9071-d754b449dccc\") " Nov 23 06:59:26 crc kubenswrapper[4681]: I1123 06:59:26.912907 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a778946-8c19-4d5e-9071-d754b449dccc-logs" (OuterVolumeSpecName: "logs") pod "1a778946-8c19-4d5e-9071-d754b449dccc" (UID: "1a778946-8c19-4d5e-9071-d754b449dccc"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:59:26 crc kubenswrapper[4681]: I1123 06:59:26.913326 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a778946-8c19-4d5e-9071-d754b449dccc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1a778946-8c19-4d5e-9071-d754b449dccc" (UID: "1a778946-8c19-4d5e-9071-d754b449dccc"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:59:26 crc kubenswrapper[4681]: I1123 06:59:26.930034 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a778946-8c19-4d5e-9071-d754b449dccc-scripts" (OuterVolumeSpecName: "scripts") pod "1a778946-8c19-4d5e-9071-d754b449dccc" (UID: "1a778946-8c19-4d5e-9071-d754b449dccc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:59:26 crc kubenswrapper[4681]: I1123 06:59:26.930721 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a778946-8c19-4d5e-9071-d754b449dccc-kube-api-access-4xq9g" (OuterVolumeSpecName: "kube-api-access-4xq9g") pod "1a778946-8c19-4d5e-9071-d754b449dccc" (UID: "1a778946-8c19-4d5e-9071-d754b449dccc"). InnerVolumeSpecName "kube-api-access-4xq9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:59:26 crc kubenswrapper[4681]: I1123 06:59:26.931116 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "1a778946-8c19-4d5e-9071-d754b449dccc" (UID: "1a778946-8c19-4d5e-9071-d754b449dccc"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 06:59:26 crc kubenswrapper[4681]: I1123 06:59:26.966838 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a778946-8c19-4d5e-9071-d754b449dccc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a778946-8c19-4d5e-9071-d754b449dccc" (UID: "1a778946-8c19-4d5e-9071-d754b449dccc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:59:26 crc kubenswrapper[4681]: I1123 06:59:26.986654 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a778946-8c19-4d5e-9071-d754b449dccc-config-data" (OuterVolumeSpecName: "config-data") pod "1a778946-8c19-4d5e-9071-d754b449dccc" (UID: "1a778946-8c19-4d5e-9071-d754b449dccc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.002621 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a778946-8c19-4d5e-9071-d754b449dccc-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1a778946-8c19-4d5e-9071-d754b449dccc" (UID: "1a778946-8c19-4d5e-9071-d754b449dccc"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.013824 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xq9g\" (UniqueName: \"kubernetes.io/projected/1a778946-8c19-4d5e-9071-d754b449dccc-kube-api-access-4xq9g\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.013850 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a778946-8c19-4d5e-9071-d754b449dccc-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.013862 4681 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1a778946-8c19-4d5e-9071-d754b449dccc-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.013874 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a778946-8c19-4d5e-9071-d754b449dccc-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.013882 4681 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a778946-8c19-4d5e-9071-d754b449dccc-logs\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.013889 4681 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a778946-8c19-4d5e-9071-d754b449dccc-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.013898 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a778946-8c19-4d5e-9071-d754b449dccc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.013922 4681 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.043148 4681 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.117549 4681 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.130196 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0","Type":"ContainerStarted","Data":"52175a25519e28ed0b5115bac7daf9e55d3b943110874c45aa97720963f3e66a"} Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.130378 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0" containerName="glance-log" containerID="cri-o://1d66757cdfe145f037ebff24dc5730a997fecc45ebc3d2c40ebf4132ec0ffc7e" gracePeriod=30 Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.130849 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0" containerName="glance-httpd" 
containerID="cri-o://52175a25519e28ed0b5115bac7daf9e55d3b943110874c45aa97720963f3e66a" gracePeriod=30 Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.144900 4681 generic.go:334] "Generic (PLEG): container finished" podID="1a778946-8c19-4d5e-9071-d754b449dccc" containerID="43fa7c5dfebf0547eaa47367b6db917db729da36598606dbb057c46c3ec2654d" exitCode=143 Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.144925 4681 generic.go:334] "Generic (PLEG): container finished" podID="1a778946-8c19-4d5e-9071-d754b449dccc" containerID="9cdf48d0b3f3f7473153d5da588de2ad1d0098905c2af0c3cb4ea9ee6114e691" exitCode=143 Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.144959 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1a778946-8c19-4d5e-9071-d754b449dccc","Type":"ContainerDied","Data":"43fa7c5dfebf0547eaa47367b6db917db729da36598606dbb057c46c3ec2654d"} Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.144979 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1a778946-8c19-4d5e-9071-d754b449dccc","Type":"ContainerDied","Data":"9cdf48d0b3f3f7473153d5da588de2ad1d0098905c2af0c3cb4ea9ee6114e691"} Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.144989 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1a778946-8c19-4d5e-9071-d754b449dccc","Type":"ContainerDied","Data":"fc6cea7d545b008a6d3886ab582641a1503094f7907d461c33f67ce7b028817a"} Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.145006 4681 scope.go:117] "RemoveContainer" containerID="43fa7c5dfebf0547eaa47367b6db917db729da36598606dbb057c46c3ec2654d" Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.145096 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.152893 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7dd5999bb7-tlr49" event={"ID":"32e94a1b-a08e-4fa2-ae50-f74e280addff","Type":"ContainerStarted","Data":"c074d8a970cf6ae87904f279d3935e8e4bca7627af35ee49d23f48f2289a6c5a"} Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.152945 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7dd5999bb7-tlr49" event={"ID":"32e94a1b-a08e-4fa2-ae50-f74e280addff","Type":"ContainerStarted","Data":"26592f75b68b54779c10bc3b4fc0c51752147f6c6eee5f694e9f2a7ccbf62030"} Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.162581 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qn8qf" event={"ID":"31fd09f2-734b-4427-8b5b-65711b24bbb5","Type":"ContainerStarted","Data":"8ce7b2fe24a9d0caf784ebd0d3fb31784d0f1791fd7508361e3ae865f66ef071"} Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.179053 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=30.179041776 podStartE2EDuration="30.179041776s" podCreationTimestamp="2025-11-23 06:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:59:27.172315181 +0000 UTC m=+904.241824419" watchObservedRunningTime="2025-11-23 06:59:27.179041776 +0000 UTC m=+904.248551014" Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.204676 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-qn8qf" podStartSLOduration=4.112618662 podStartE2EDuration="39.204658224s" podCreationTimestamp="2025-11-23 06:58:48 +0000 UTC" firstStartedPulling="2025-11-23 06:58:50.919713018 +0000 UTC m=+867.989222255" lastFinishedPulling="2025-11-23 06:59:26.011752591 +0000 UTC m=+903.081261817" observedRunningTime="2025-11-23 06:59:27.199854744 +0000 UTC m=+904.269363980" watchObservedRunningTime="2025-11-23 06:59:27.204658224 +0000 UTC m=+904.274167461" Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.234010 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.239741 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.271881 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a778946-8c19-4d5e-9071-d754b449dccc" path="/var/lib/kubelet/pods/1a778946-8c19-4d5e-9071-d754b449dccc/volumes" Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.272591 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 06:59:27 crc kubenswrapper[4681]: E1123 06:59:27.272945 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a778946-8c19-4d5e-9071-d754b449dccc" containerName="glance-log" Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.272961 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a778946-8c19-4d5e-9071-d754b449dccc" containerName="glance-log" Nov 23 06:59:27 crc kubenswrapper[4681]: E1123 06:59:27.272977 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a778946-8c19-4d5e-9071-d754b449dccc" containerName="glance-httpd" Nov 23 06:59:27 crc 
Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.272985 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a778946-8c19-4d5e-9071-d754b449dccc" containerName="glance-httpd"
Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.273166 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a778946-8c19-4d5e-9071-d754b449dccc" containerName="glance-httpd"
Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.273192 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a778946-8c19-4d5e-9071-d754b449dccc" containerName="glance-log"
Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.274384 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.278512 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.284578 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.284745 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.426154 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48ecd863-12ce-4eb3-ba76-eea730db3b2d-config-data\") pod \"glance-default-external-api-0\" (UID: \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\") " pod="openstack/glance-default-external-api-0"
Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.426252 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48ecd863-12ce-4eb3-ba76-eea730db3b2d-scripts\") pod \"glance-default-external-api-0\" (UID: \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\") " pod="openstack/glance-default-external-api-0"
Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.426374 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48ecd863-12ce-4eb3-ba76-eea730db3b2d-logs\") pod \"glance-default-external-api-0\" (UID: \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\") " pod="openstack/glance-default-external-api-0"
Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.426530 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgzj5\" (UniqueName: \"kubernetes.io/projected/48ecd863-12ce-4eb3-ba76-eea730db3b2d-kube-api-access-tgzj5\") pod \"glance-default-external-api-0\" (UID: \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\") " pod="openstack/glance-default-external-api-0"
Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.426755 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48ecd863-12ce-4eb3-ba76-eea730db3b2d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\") " pod="openstack/glance-default-external-api-0"
Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.426799 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName:
\"kubernetes.io/secret/48ecd863-12ce-4eb3-ba76-eea730db3b2d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\") " pod="openstack/glance-default-external-api-0" Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.426834 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\") " pod="openstack/glance-default-external-api-0" Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.427006 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/48ecd863-12ce-4eb3-ba76-eea730db3b2d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\") " pod="openstack/glance-default-external-api-0" Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.529574 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgzj5\" (UniqueName: \"kubernetes.io/projected/48ecd863-12ce-4eb3-ba76-eea730db3b2d-kube-api-access-tgzj5\") pod \"glance-default-external-api-0\" (UID: \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\") " pod="openstack/glance-default-external-api-0" Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.529717 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48ecd863-12ce-4eb3-ba76-eea730db3b2d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\") " pod="openstack/glance-default-external-api-0" Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.529744 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48ecd863-12ce-4eb3-ba76-eea730db3b2d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\") " pod="openstack/glance-default-external-api-0" Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.529791 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\") " pod="openstack/glance-default-external-api-0" Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.530089 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.530227 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/48ecd863-12ce-4eb3-ba76-eea730db3b2d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\") " pod="openstack/glance-default-external-api-0" Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.530263 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/48ecd863-12ce-4eb3-ba76-eea730db3b2d-config-data\") pod \"glance-default-external-api-0\" (UID: \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\") " pod="openstack/glance-default-external-api-0" Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.530294 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48ecd863-12ce-4eb3-ba76-eea730db3b2d-scripts\") pod \"glance-default-external-api-0\" (UID: \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\") " pod="openstack/glance-default-external-api-0" Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.530344 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48ecd863-12ce-4eb3-ba76-eea730db3b2d-logs\") pod \"glance-default-external-api-0\" (UID: \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\") " pod="openstack/glance-default-external-api-0" Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.530777 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48ecd863-12ce-4eb3-ba76-eea730db3b2d-logs\") pod \"glance-default-external-api-0\" (UID: \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\") " pod="openstack/glance-default-external-api-0" Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.531006 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/48ecd863-12ce-4eb3-ba76-eea730db3b2d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\") " pod="openstack/glance-default-external-api-0" Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.538847 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48ecd863-12ce-4eb3-ba76-eea730db3b2d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\") " pod="openstack/glance-default-external-api-0" Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.539178 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48ecd863-12ce-4eb3-ba76-eea730db3b2d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\") " pod="openstack/glance-default-external-api-0" Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.542336 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48ecd863-12ce-4eb3-ba76-eea730db3b2d-config-data\") pod \"glance-default-external-api-0\" (UID: \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\") " pod="openstack/glance-default-external-api-0" Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.551536 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48ecd863-12ce-4eb3-ba76-eea730db3b2d-scripts\") pod \"glance-default-external-api-0\" (UID: \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\") " pod="openstack/glance-default-external-api-0" Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.552785 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgzj5\" (UniqueName: \"kubernetes.io/projected/48ecd863-12ce-4eb3-ba76-eea730db3b2d-kube-api-access-tgzj5\") pod \"glance-default-external-api-0\" (UID: \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\") " 
pod="openstack/glance-default-external-api-0" Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.587744 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\") " pod="openstack/glance-default-external-api-0" Nov 23 06:59:27 crc kubenswrapper[4681]: I1123 06:59:27.622889 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 06:59:28 crc kubenswrapper[4681]: I1123 06:59:28.091214 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 23 06:59:28 crc kubenswrapper[4681]: I1123 06:59:28.091278 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 23 06:59:28 crc kubenswrapper[4681]: I1123 06:59:28.174854 4681 generic.go:334] "Generic (PLEG): container finished" podID="cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0" containerID="52175a25519e28ed0b5115bac7daf9e55d3b943110874c45aa97720963f3e66a" exitCode=143 Nov 23 06:59:28 crc kubenswrapper[4681]: I1123 06:59:28.174884 4681 generic.go:334] "Generic (PLEG): container finished" podID="cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0" containerID="1d66757cdfe145f037ebff24dc5730a997fecc45ebc3d2c40ebf4132ec0ffc7e" exitCode=143 Nov 23 06:59:28 crc kubenswrapper[4681]: I1123 06:59:28.174951 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0","Type":"ContainerDied","Data":"52175a25519e28ed0b5115bac7daf9e55d3b943110874c45aa97720963f3e66a"} Nov 23 06:59:28 crc kubenswrapper[4681]: I1123 06:59:28.175018 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0","Type":"ContainerDied","Data":"1d66757cdfe145f037ebff24dc5730a997fecc45ebc3d2c40ebf4132ec0ffc7e"} Nov 23 06:59:28 crc kubenswrapper[4681]: I1123 06:59:28.964385 4681 scope.go:117] "RemoveContainer" containerID="9cdf48d0b3f3f7473153d5da588de2ad1d0098905c2af0c3cb4ea9ee6114e691" Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.017700 4681 scope.go:117] "RemoveContainer" containerID="43fa7c5dfebf0547eaa47367b6db917db729da36598606dbb057c46c3ec2654d" Nov 23 06:59:29 crc kubenswrapper[4681]: E1123 06:59:29.038189 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43fa7c5dfebf0547eaa47367b6db917db729da36598606dbb057c46c3ec2654d\": container with ID starting with 43fa7c5dfebf0547eaa47367b6db917db729da36598606dbb057c46c3ec2654d not found: ID does not exist" containerID="43fa7c5dfebf0547eaa47367b6db917db729da36598606dbb057c46c3ec2654d" Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.038253 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43fa7c5dfebf0547eaa47367b6db917db729da36598606dbb057c46c3ec2654d"} err="failed to get container status \"43fa7c5dfebf0547eaa47367b6db917db729da36598606dbb057c46c3ec2654d\": rpc error: code = NotFound desc = could not find container \"43fa7c5dfebf0547eaa47367b6db917db729da36598606dbb057c46c3ec2654d\": container with ID starting with 43fa7c5dfebf0547eaa47367b6db917db729da36598606dbb057c46c3ec2654d not found: ID does not exist" Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 
Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.038291 4681 scope.go:117] "RemoveContainer" containerID="9cdf48d0b3f3f7473153d5da588de2ad1d0098905c2af0c3cb4ea9ee6114e691"
Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.041606 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7c48d564b8-5tf9h"
Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.043036 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7c48d564b8-5tf9h"
Nov 23 06:59:29 crc kubenswrapper[4681]: E1123 06:59:29.047296 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cdf48d0b3f3f7473153d5da588de2ad1d0098905c2af0c3cb4ea9ee6114e691\": container with ID starting with 9cdf48d0b3f3f7473153d5da588de2ad1d0098905c2af0c3cb4ea9ee6114e691 not found: ID does not exist" containerID="9cdf48d0b3f3f7473153d5da588de2ad1d0098905c2af0c3cb4ea9ee6114e691"
Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.047348 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cdf48d0b3f3f7473153d5da588de2ad1d0098905c2af0c3cb4ea9ee6114e691"} err="failed to get container status \"9cdf48d0b3f3f7473153d5da588de2ad1d0098905c2af0c3cb4ea9ee6114e691\": rpc error: code = NotFound desc = could not find container \"9cdf48d0b3f3f7473153d5da588de2ad1d0098905c2af0c3cb4ea9ee6114e691\": container with ID starting with 9cdf48d0b3f3f7473153d5da588de2ad1d0098905c2af0c3cb4ea9ee6114e691 not found: ID does not exist"
Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.047386 4681 scope.go:117] "RemoveContainer" containerID="43fa7c5dfebf0547eaa47367b6db917db729da36598606dbb057c46c3ec2654d"
Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.049106 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43fa7c5dfebf0547eaa47367b6db917db729da36598606dbb057c46c3ec2654d"} err="failed to get container status \"43fa7c5dfebf0547eaa47367b6db917db729da36598606dbb057c46c3ec2654d\": rpc error: code = NotFound desc = could not find container \"43fa7c5dfebf0547eaa47367b6db917db729da36598606dbb057c46c3ec2654d\": container with ID starting with 43fa7c5dfebf0547eaa47367b6db917db729da36598606dbb057c46c3ec2654d not found: ID does not exist"
Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.049155 4681 scope.go:117] "RemoveContainer" containerID="9cdf48d0b3f3f7473153d5da588de2ad1d0098905c2af0c3cb4ea9ee6114e691"
Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.057059 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cdf48d0b3f3f7473153d5da588de2ad1d0098905c2af0c3cb4ea9ee6114e691"} err="failed to get container status \"9cdf48d0b3f3f7473153d5da588de2ad1d0098905c2af0c3cb4ea9ee6114e691\": rpc error: code = NotFound desc = could not find container \"9cdf48d0b3f3f7473153d5da588de2ad1d0098905c2af0c3cb4ea9ee6114e691\": container with ID starting with 9cdf48d0b3f3f7473153d5da588de2ad1d0098905c2af0c3cb4ea9ee6114e691 not found: ID does not exist"
Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.104538 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-fcdb4576d-g8stp"
Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.105335 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fcdb4576d-g8stp"
Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.233391 4681 generic.go:334] "Generic (PLEG): container
finished" podID="31fd09f2-734b-4427-8b5b-65711b24bbb5" containerID="8ce7b2fe24a9d0caf784ebd0d3fb31784d0f1791fd7508361e3ae865f66ef071" exitCode=0 Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.233520 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qn8qf" event={"ID":"31fd09f2-734b-4427-8b5b-65711b24bbb5","Type":"ContainerDied","Data":"8ce7b2fe24a9d0caf784ebd0d3fb31784d0f1791fd7508361e3ae865f66ef071"} Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.283346 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.369392 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42bvd\" (UniqueName: \"kubernetes.io/projected/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-kube-api-access-42bvd\") pod \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\" (UID: \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\") " Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.369442 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-logs\") pod \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\" (UID: \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\") " Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.369630 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-internal-tls-certs\") pod \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\" (UID: \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\") " Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.369697 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-config-data\") pod \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\" (UID: \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\") " Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.369739 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-combined-ca-bundle\") pod \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\" (UID: \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\") " Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.369808 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-scripts\") pod \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\" (UID: \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\") " Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.369916 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\" (UID: \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\") " Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.369954 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-httpd-run\") pod \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\" (UID: \"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0\") " Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.371175 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-logs" (OuterVolumeSpecName: "logs") pod "cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0" (UID: "cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.373812 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0" (UID: "cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.385255 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-scripts" (OuterVolumeSpecName: "scripts") pod "cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0" (UID: "cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.386111 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-kube-api-access-42bvd" (OuterVolumeSpecName: "kube-api-access-42bvd") pod "cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0" (UID: "cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0"). InnerVolumeSpecName "kube-api-access-42bvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.388897 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0" (UID: "cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.462874 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0" (UID: "cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.469567 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-config-data" (OuterVolumeSpecName: "config-data") pod "cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0" (UID: "cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.482066 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.482161 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.482215 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.482362 4681 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.482423 4681 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.482519 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42bvd\" (UniqueName: \"kubernetes.io/projected/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-kube-api-access-42bvd\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.482607 4681 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-logs\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.508660 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0" (UID: "cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.520095 4681 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.569945 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.585178 4681 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.585203 4681 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:29 crc kubenswrapper[4681]: I1123 06:59:29.895535 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6c5444c6b5-7cd6d" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.257787 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0","Type":"ContainerDied","Data":"5158f2960a3633f8ab8ce21ee98e2e3b17f9b7a3902ab303e14f1c3a679d6b9e"} Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.257873 4681 scope.go:117] "RemoveContainer" containerID="52175a25519e28ed0b5115bac7daf9e55d3b943110874c45aa97720963f3e66a" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.258036 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.265621 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7dd5999bb7-tlr49" event={"ID":"32e94a1b-a08e-4fa2-ae50-f74e280addff","Type":"ContainerStarted","Data":"2dfc8a83952b4c2521f3dac8e0c9e2bd21f6a85472177432548831f3f1d031ec"} Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.265949 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7dd5999bb7-tlr49" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.276890 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"48ecd863-12ce-4eb3-ba76-eea730db3b2d","Type":"ContainerStarted","Data":"97cf1e2c2dc5490b7dccaeb1542e6282ce33d29ac43281d21569cfed720f97eb"} Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.276924 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"48ecd863-12ce-4eb3-ba76-eea730db3b2d","Type":"ContainerStarted","Data":"f2f9f7b07d980f839e3bfc610594a28559a73467cc05f9662ef089aa066821df"} Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.284918 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2483649a-baa7-4c82-92d5-b3e2aff97ab2","Type":"ContainerStarted","Data":"e33218e2cdaab185b40249e2d9e91fa0508971e75bd3af0e4e4904a08838eb75"} Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.286867 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-fbbdq" event={"ID":"00916d9f-8ce3-47d9-a32f-e2deb3514ede","Type":"ContainerStarted","Data":"14bbe87d6009d7ad711ca056bfca4be6c099751bca74e516b1ae4373f0158ce1"} Nov 23 
06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.314132 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7dd5999bb7-tlr49" podStartSLOduration=5.314079103 podStartE2EDuration="5.314079103s" podCreationTimestamp="2025-11-23 06:59:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:59:30.283135939 +0000 UTC m=+907.352645175" watchObservedRunningTime="2025-11-23 06:59:30.314079103 +0000 UTC m=+907.383588341" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.344580 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.350599 4681 scope.go:117] "RemoveContainer" containerID="1d66757cdfe145f037ebff24dc5730a997fecc45ebc3d2c40ebf4132ec0ffc7e" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.363545 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.377069 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 06:59:30 crc kubenswrapper[4681]: E1123 06:59:30.377578 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0" containerName="glance-httpd" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.377598 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0" containerName="glance-httpd" Nov 23 06:59:30 crc kubenswrapper[4681]: E1123 06:59:30.377619 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0" containerName="glance-log" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.377627 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0" containerName="glance-log" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.377784 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0" containerName="glance-log" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.377808 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0" containerName="glance-httpd" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.378786 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.378817 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-fbbdq" podStartSLOduration=3.8507760170000003 podStartE2EDuration="42.378338957s" podCreationTimestamp="2025-11-23 06:58:48 +0000 UTC" firstStartedPulling="2025-11-23 06:58:50.490285156 +0000 UTC m=+867.559794393" lastFinishedPulling="2025-11-23 06:59:29.017848095 +0000 UTC m=+906.087357333" observedRunningTime="2025-11-23 06:59:30.34163093 +0000 UTC m=+907.411140167" watchObservedRunningTime="2025-11-23 06:59:30.378338957 +0000 UTC m=+907.447848194" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.382420 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.386184 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.405150 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.536714 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rfr7\" (UniqueName: \"kubernetes.io/projected/1d296ac2-d00b-4a99-94e2-78004337f7e2-kube-api-access-5rfr7\") pod \"glance-default-internal-api-0\" (UID: \"1d296ac2-d00b-4a99-94e2-78004337f7e2\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.536774 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d296ac2-d00b-4a99-94e2-78004337f7e2-logs\") pod \"glance-default-internal-api-0\" (UID: \"1d296ac2-d00b-4a99-94e2-78004337f7e2\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.536858 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d296ac2-d00b-4a99-94e2-78004337f7e2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1d296ac2-d00b-4a99-94e2-78004337f7e2\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.537004 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d296ac2-d00b-4a99-94e2-78004337f7e2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1d296ac2-d00b-4a99-94e2-78004337f7e2\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.537038 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1d296ac2-d00b-4a99-94e2-78004337f7e2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1d296ac2-d00b-4a99-94e2-78004337f7e2\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.537060 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"1d296ac2-d00b-4a99-94e2-78004337f7e2\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.537086 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d296ac2-d00b-4a99-94e2-78004337f7e2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1d296ac2-d00b-4a99-94e2-78004337f7e2\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.537181 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d296ac2-d00b-4a99-94e2-78004337f7e2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1d296ac2-d00b-4a99-94e2-78004337f7e2\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.641015 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rfr7\" (UniqueName: \"kubernetes.io/projected/1d296ac2-d00b-4a99-94e2-78004337f7e2-kube-api-access-5rfr7\") pod \"glance-default-internal-api-0\" (UID: \"1d296ac2-d00b-4a99-94e2-78004337f7e2\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.641093 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d296ac2-d00b-4a99-94e2-78004337f7e2-logs\") pod \"glance-default-internal-api-0\" (UID: \"1d296ac2-d00b-4a99-94e2-78004337f7e2\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.641967 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d296ac2-d00b-4a99-94e2-78004337f7e2-logs\") pod \"glance-default-internal-api-0\" (UID: \"1d296ac2-d00b-4a99-94e2-78004337f7e2\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.642105 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d296ac2-d00b-4a99-94e2-78004337f7e2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1d296ac2-d00b-4a99-94e2-78004337f7e2\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.642982 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d296ac2-d00b-4a99-94e2-78004337f7e2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1d296ac2-d00b-4a99-94e2-78004337f7e2\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.643025 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1d296ac2-d00b-4a99-94e2-78004337f7e2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1d296ac2-d00b-4a99-94e2-78004337f7e2\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.643045 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"1d296ac2-d00b-4a99-94e2-78004337f7e2\") " 
pod="openstack/glance-default-internal-api-0" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.643070 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d296ac2-d00b-4a99-94e2-78004337f7e2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1d296ac2-d00b-4a99-94e2-78004337f7e2\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.643162 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d296ac2-d00b-4a99-94e2-78004337f7e2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1d296ac2-d00b-4a99-94e2-78004337f7e2\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.643819 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1d296ac2-d00b-4a99-94e2-78004337f7e2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1d296ac2-d00b-4a99-94e2-78004337f7e2\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.644629 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"1d296ac2-d00b-4a99-94e2-78004337f7e2\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.648984 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d296ac2-d00b-4a99-94e2-78004337f7e2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1d296ac2-d00b-4a99-94e2-78004337f7e2\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.652117 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d296ac2-d00b-4a99-94e2-78004337f7e2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1d296ac2-d00b-4a99-94e2-78004337f7e2\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.654307 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d296ac2-d00b-4a99-94e2-78004337f7e2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1d296ac2-d00b-4a99-94e2-78004337f7e2\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.657746 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d296ac2-d00b-4a99-94e2-78004337f7e2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1d296ac2-d00b-4a99-94e2-78004337f7e2\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.658496 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rfr7\" (UniqueName: \"kubernetes.io/projected/1d296ac2-d00b-4a99-94e2-78004337f7e2-kube-api-access-5rfr7\") pod \"glance-default-internal-api-0\" (UID: \"1d296ac2-d00b-4a99-94e2-78004337f7e2\") " pod="openstack/glance-default-internal-api-0" Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 
Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.703241 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"1d296ac2-d00b-4a99-94e2-78004337f7e2\") " pod="openstack/glance-default-internal-api-0"
Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.738795 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-qn8qf"
Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.743815 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31fd09f2-734b-4427-8b5b-65711b24bbb5-scripts\") pod \"31fd09f2-734b-4427-8b5b-65711b24bbb5\" (UID: \"31fd09f2-734b-4427-8b5b-65711b24bbb5\") "
Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.743880 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31fd09f2-734b-4427-8b5b-65711b24bbb5-logs\") pod \"31fd09f2-734b-4427-8b5b-65711b24bbb5\" (UID: \"31fd09f2-734b-4427-8b5b-65711b24bbb5\") "
Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.743966 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d7fj\" (UniqueName: \"kubernetes.io/projected/31fd09f2-734b-4427-8b5b-65711b24bbb5-kube-api-access-2d7fj\") pod \"31fd09f2-734b-4427-8b5b-65711b24bbb5\" (UID: \"31fd09f2-734b-4427-8b5b-65711b24bbb5\") "
Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.744020 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31fd09f2-734b-4427-8b5b-65711b24bbb5-combined-ca-bundle\") pod \"31fd09f2-734b-4427-8b5b-65711b24bbb5\" (UID: \"31fd09f2-734b-4427-8b5b-65711b24bbb5\") "
Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.744048 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31fd09f2-734b-4427-8b5b-65711b24bbb5-config-data\") pod \"31fd09f2-734b-4427-8b5b-65711b24bbb5\" (UID: \"31fd09f2-734b-4427-8b5b-65711b24bbb5\") "
Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.745449 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fd09f2-734b-4427-8b5b-65711b24bbb5-logs" (OuterVolumeSpecName: "logs") pod "31fd09f2-734b-4427-8b5b-65711b24bbb5" (UID: "31fd09f2-734b-4427-8b5b-65711b24bbb5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.750578 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31fd09f2-734b-4427-8b5b-65711b24bbb5-scripts" (OuterVolumeSpecName: "scripts") pod "31fd09f2-734b-4427-8b5b-65711b24bbb5" (UID: "31fd09f2-734b-4427-8b5b-65711b24bbb5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.751565 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fd09f2-734b-4427-8b5b-65711b24bbb5-kube-api-access-2d7fj" (OuterVolumeSpecName: "kube-api-access-2d7fj") pod "31fd09f2-734b-4427-8b5b-65711b24bbb5" (UID: "31fd09f2-734b-4427-8b5b-65711b24bbb5"). InnerVolumeSpecName "kube-api-access-2d7fj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.783420 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31fd09f2-734b-4427-8b5b-65711b24bbb5-config-data" (OuterVolumeSpecName: "config-data") pod "31fd09f2-734b-4427-8b5b-65711b24bbb5" (UID: "31fd09f2-734b-4427-8b5b-65711b24bbb5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.786249 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31fd09f2-734b-4427-8b5b-65711b24bbb5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31fd09f2-734b-4427-8b5b-65711b24bbb5" (UID: "31fd09f2-734b-4427-8b5b-65711b24bbb5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.845823 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d7fj\" (UniqueName: \"kubernetes.io/projected/31fd09f2-734b-4427-8b5b-65711b24bbb5-kube-api-access-2d7fj\") on node \"crc\" DevicePath \"\""
Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.845862 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31fd09f2-734b-4427-8b5b-65711b24bbb5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.845872 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31fd09f2-734b-4427-8b5b-65711b24bbb5-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.845881 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31fd09f2-734b-4427-8b5b-65711b24bbb5-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 06:59:30 crc kubenswrapper[4681]: I1123 06:59:30.845889 4681 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31fd09f2-734b-4427-8b5b-65711b24bbb5-logs\") on node \"crc\" DevicePath \"\""
Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.002987 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.290216 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0" path="/var/lib/kubelet/pods/cc8fc37d-ed28-4d24-8d09-aa94e5f7eaa0/volumes"
Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.332637 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"48ecd863-12ce-4eb3-ba76-eea730db3b2d","Type":"ContainerStarted","Data":"884e2a56b0230e733fad802ba25fca0312606baab59fc36a9a13c7175936d99a"}
Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.340317 4681 generic.go:334] "Generic (PLEG): container finished" podID="5dd5ce32-831b-448a-943f-7e3250ca172b" containerID="fbfbecec9249e290de376cecaf8ce397d63bedb12a63815fac8bc51df3bfbd1f" exitCode=0
Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.340393 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b8mwm" event={"ID":"5dd5ce32-831b-448a-943f-7e3250ca172b","Type":"ContainerDied","Data":"fbfbecec9249e290de376cecaf8ce397d63bedb12a63815fac8bc51df3bfbd1f"}
Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.357949 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qn8qf" event={"ID":"31fd09f2-734b-4427-8b5b-65711b24bbb5","Type":"ContainerDied","Data":"8eb2cbd5eb4dd21a23f6354b3f6c5ac0e4abb7eb7531c4a4fbdf907c6d452f31"}
Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.357982 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8eb2cbd5eb4dd21a23f6354b3f6c5ac0e4abb7eb7531c4a4fbdf907c6d452f31"
Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.357961 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-qn8qf"
Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.402789 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-66f57f4546-f9rcd"]
Nov 23 06:59:31 crc kubenswrapper[4681]: E1123 06:59:31.403291 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31fd09f2-734b-4427-8b5b-65711b24bbb5" containerName="placement-db-sync"
Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.403312 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="31fd09f2-734b-4427-8b5b-65711b24bbb5" containerName="placement-db-sync"
Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.403562 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="31fd09f2-734b-4427-8b5b-65711b24bbb5" containerName="placement-db-sync"
Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.406384 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-66f57f4546-f9rcd"
Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.411098 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.411058045 podStartE2EDuration="4.411058045s" podCreationTimestamp="2025-11-23 06:59:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:59:31.38673316 +0000 UTC m=+908.456242398" watchObservedRunningTime="2025-11-23 06:59:31.411058045 +0000 UTC m=+908.480567283"
Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.411602 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.411833 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-6dmm8"
Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.411961 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.411963 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.418994 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.423982 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-66f57f4546-f9rcd"]
Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.571448 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f60731b-ffd1-40de-8d08-2f3d17e4db9d-logs\") pod \"placement-66f57f4546-f9rcd\" (UID: \"6f60731b-ffd1-40de-8d08-2f3d17e4db9d\") " pod="openstack/placement-66f57f4546-f9rcd"
Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.571546 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f60731b-ffd1-40de-8d08-2f3d17e4db9d-config-data\") pod \"placement-66f57f4546-f9rcd\" (UID: \"6f60731b-ffd1-40de-8d08-2f3d17e4db9d\") " pod="openstack/placement-66f57f4546-f9rcd"
Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.571665 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stvpq\" (UniqueName: \"kubernetes.io/projected/6f60731b-ffd1-40de-8d08-2f3d17e4db9d-kube-api-access-stvpq\") pod \"placement-66f57f4546-f9rcd\" (UID: \"6f60731b-ffd1-40de-8d08-2f3d17e4db9d\") " pod="openstack/placement-66f57f4546-f9rcd"
Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.571712 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f60731b-ffd1-40de-8d08-2f3d17e4db9d-public-tls-certs\") pod \"placement-66f57f4546-f9rcd\" (UID: \"6f60731b-ffd1-40de-8d08-2f3d17e4db9d\") " pod="openstack/placement-66f57f4546-f9rcd"
\"placement-66f57f4546-f9rcd\" (UID: \"6f60731b-ffd1-40de-8d08-2f3d17e4db9d\") " pod="openstack/placement-66f57f4546-f9rcd" Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.571838 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f60731b-ffd1-40de-8d08-2f3d17e4db9d-internal-tls-certs\") pod \"placement-66f57f4546-f9rcd\" (UID: \"6f60731b-ffd1-40de-8d08-2f3d17e4db9d\") " pod="openstack/placement-66f57f4546-f9rcd" Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.571893 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f60731b-ffd1-40de-8d08-2f3d17e4db9d-scripts\") pod \"placement-66f57f4546-f9rcd\" (UID: \"6f60731b-ffd1-40de-8d08-2f3d17e4db9d\") " pod="openstack/placement-66f57f4546-f9rcd" Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.603046 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 06:59:31 crc kubenswrapper[4681]: W1123 06:59:31.645536 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d296ac2_d00b_4a99_94e2_78004337f7e2.slice/crio-9a29efadfd8cddf7ee7697bb02de14d9f0513c47276e035dd35c3fdaa60bb283 WatchSource:0}: Error finding container 9a29efadfd8cddf7ee7697bb02de14d9f0513c47276e035dd35c3fdaa60bb283: Status 404 returned error can't find the container with id 9a29efadfd8cddf7ee7697bb02de14d9f0513c47276e035dd35c3fdaa60bb283 Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.673584 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stvpq\" (UniqueName: \"kubernetes.io/projected/6f60731b-ffd1-40de-8d08-2f3d17e4db9d-kube-api-access-stvpq\") pod \"placement-66f57f4546-f9rcd\" (UID: \"6f60731b-ffd1-40de-8d08-2f3d17e4db9d\") " pod="openstack/placement-66f57f4546-f9rcd" Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.673906 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f60731b-ffd1-40de-8d08-2f3d17e4db9d-public-tls-certs\") pod \"placement-66f57f4546-f9rcd\" (UID: \"6f60731b-ffd1-40de-8d08-2f3d17e4db9d\") " pod="openstack/placement-66f57f4546-f9rcd" Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.673941 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f60731b-ffd1-40de-8d08-2f3d17e4db9d-combined-ca-bundle\") pod \"placement-66f57f4546-f9rcd\" (UID: \"6f60731b-ffd1-40de-8d08-2f3d17e4db9d\") " pod="openstack/placement-66f57f4546-f9rcd" Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.674647 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f60731b-ffd1-40de-8d08-2f3d17e4db9d-internal-tls-certs\") pod \"placement-66f57f4546-f9rcd\" (UID: \"6f60731b-ffd1-40de-8d08-2f3d17e4db9d\") " pod="openstack/placement-66f57f4546-f9rcd" Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.674689 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f60731b-ffd1-40de-8d08-2f3d17e4db9d-scripts\") pod \"placement-66f57f4546-f9rcd\" (UID: \"6f60731b-ffd1-40de-8d08-2f3d17e4db9d\") " 
pod="openstack/placement-66f57f4546-f9rcd" Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.674722 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f60731b-ffd1-40de-8d08-2f3d17e4db9d-logs\") pod \"placement-66f57f4546-f9rcd\" (UID: \"6f60731b-ffd1-40de-8d08-2f3d17e4db9d\") " pod="openstack/placement-66f57f4546-f9rcd" Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.674754 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f60731b-ffd1-40de-8d08-2f3d17e4db9d-config-data\") pod \"placement-66f57f4546-f9rcd\" (UID: \"6f60731b-ffd1-40de-8d08-2f3d17e4db9d\") " pod="openstack/placement-66f57f4546-f9rcd" Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.675633 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f60731b-ffd1-40de-8d08-2f3d17e4db9d-logs\") pod \"placement-66f57f4546-f9rcd\" (UID: \"6f60731b-ffd1-40de-8d08-2f3d17e4db9d\") " pod="openstack/placement-66f57f4546-f9rcd" Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.684942 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f60731b-ffd1-40de-8d08-2f3d17e4db9d-scripts\") pod \"placement-66f57f4546-f9rcd\" (UID: \"6f60731b-ffd1-40de-8d08-2f3d17e4db9d\") " pod="openstack/placement-66f57f4546-f9rcd" Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.685276 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f60731b-ffd1-40de-8d08-2f3d17e4db9d-combined-ca-bundle\") pod \"placement-66f57f4546-f9rcd\" (UID: \"6f60731b-ffd1-40de-8d08-2f3d17e4db9d\") " pod="openstack/placement-66f57f4546-f9rcd" Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.685372 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f60731b-ffd1-40de-8d08-2f3d17e4db9d-public-tls-certs\") pod \"placement-66f57f4546-f9rcd\" (UID: \"6f60731b-ffd1-40de-8d08-2f3d17e4db9d\") " pod="openstack/placement-66f57f4546-f9rcd" Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.685923 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f60731b-ffd1-40de-8d08-2f3d17e4db9d-config-data\") pod \"placement-66f57f4546-f9rcd\" (UID: \"6f60731b-ffd1-40de-8d08-2f3d17e4db9d\") " pod="openstack/placement-66f57f4546-f9rcd" Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.693506 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f60731b-ffd1-40de-8d08-2f3d17e4db9d-internal-tls-certs\") pod \"placement-66f57f4546-f9rcd\" (UID: \"6f60731b-ffd1-40de-8d08-2f3d17e4db9d\") " pod="openstack/placement-66f57f4546-f9rcd" Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.710597 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stvpq\" (UniqueName: \"kubernetes.io/projected/6f60731b-ffd1-40de-8d08-2f3d17e4db9d-kube-api-access-stvpq\") pod \"placement-66f57f4546-f9rcd\" (UID: \"6f60731b-ffd1-40de-8d08-2f3d17e4db9d\") " pod="openstack/placement-66f57f4546-f9rcd" Nov 23 06:59:31 crc kubenswrapper[4681]: I1123 06:59:31.724677 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-66f57f4546-f9rcd" Nov 23 06:59:32 crc kubenswrapper[4681]: I1123 06:59:32.272096 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-845ccd5479-79qz5" Nov 23 06:59:32 crc kubenswrapper[4681]: I1123 06:59:32.307869 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-66f57f4546-f9rcd"] Nov 23 06:59:32 crc kubenswrapper[4681]: I1123 06:59:32.391116 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1d296ac2-d00b-4a99-94e2-78004337f7e2","Type":"ContainerStarted","Data":"9a29efadfd8cddf7ee7697bb02de14d9f0513c47276e035dd35c3fdaa60bb283"} Nov 23 06:59:32 crc kubenswrapper[4681]: I1123 06:59:32.394796 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66f57f4546-f9rcd" event={"ID":"6f60731b-ffd1-40de-8d08-2f3d17e4db9d","Type":"ContainerStarted","Data":"57cbb5644d86bd5af6f3d4644dd8c509ad4424049662de747719b714b6a969c6"} Nov 23 06:59:32 crc kubenswrapper[4681]: I1123 06:59:32.967128 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-b8mwm" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.132779 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2dqc\" (UniqueName: \"kubernetes.io/projected/5dd5ce32-831b-448a-943f-7e3250ca172b-kube-api-access-g2dqc\") pod \"5dd5ce32-831b-448a-943f-7e3250ca172b\" (UID: \"5dd5ce32-831b-448a-943f-7e3250ca172b\") " Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.132823 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5dd5ce32-831b-448a-943f-7e3250ca172b-fernet-keys\") pod \"5dd5ce32-831b-448a-943f-7e3250ca172b\" (UID: \"5dd5ce32-831b-448a-943f-7e3250ca172b\") " Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.132878 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5dd5ce32-831b-448a-943f-7e3250ca172b-credential-keys\") pod \"5dd5ce32-831b-448a-943f-7e3250ca172b\" (UID: \"5dd5ce32-831b-448a-943f-7e3250ca172b\") " Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.132909 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dd5ce32-831b-448a-943f-7e3250ca172b-config-data\") pod \"5dd5ce32-831b-448a-943f-7e3250ca172b\" (UID: \"5dd5ce32-831b-448a-943f-7e3250ca172b\") " Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.133047 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5dd5ce32-831b-448a-943f-7e3250ca172b-scripts\") pod \"5dd5ce32-831b-448a-943f-7e3250ca172b\" (UID: \"5dd5ce32-831b-448a-943f-7e3250ca172b\") " Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.133122 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dd5ce32-831b-448a-943f-7e3250ca172b-combined-ca-bundle\") pod \"5dd5ce32-831b-448a-943f-7e3250ca172b\" (UID: \"5dd5ce32-831b-448a-943f-7e3250ca172b\") " Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.210791 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dd5ce32-831b-448a-943f-7e3250ca172b-credential-keys" 
(OuterVolumeSpecName: "credential-keys") pod "5dd5ce32-831b-448a-943f-7e3250ca172b" (UID: "5dd5ce32-831b-448a-943f-7e3250ca172b"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.210991 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dd5ce32-831b-448a-943f-7e3250ca172b-scripts" (OuterVolumeSpecName: "scripts") pod "5dd5ce32-831b-448a-943f-7e3250ca172b" (UID: "5dd5ce32-831b-448a-943f-7e3250ca172b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.211179 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dd5ce32-831b-448a-943f-7e3250ca172b-kube-api-access-g2dqc" (OuterVolumeSpecName: "kube-api-access-g2dqc") pod "5dd5ce32-831b-448a-943f-7e3250ca172b" (UID: "5dd5ce32-831b-448a-943f-7e3250ca172b"). InnerVolumeSpecName "kube-api-access-g2dqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.214033 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dd5ce32-831b-448a-943f-7e3250ca172b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5dd5ce32-831b-448a-943f-7e3250ca172b" (UID: "5dd5ce32-831b-448a-943f-7e3250ca172b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.216600 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dd5ce32-831b-448a-943f-7e3250ca172b-config-data" (OuterVolumeSpecName: "config-data") pod "5dd5ce32-831b-448a-943f-7e3250ca172b" (UID: "5dd5ce32-831b-448a-943f-7e3250ca172b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.238901 4681 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5dd5ce32-831b-448a-943f-7e3250ca172b-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.238934 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dd5ce32-831b-448a-943f-7e3250ca172b-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.238945 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5dd5ce32-831b-448a-943f-7e3250ca172b-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.238955 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2dqc\" (UniqueName: \"kubernetes.io/projected/5dd5ce32-831b-448a-943f-7e3250ca172b-kube-api-access-g2dqc\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.238966 4681 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5dd5ce32-831b-448a-943f-7e3250ca172b-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.246800 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dd5ce32-831b-448a-943f-7e3250ca172b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5dd5ce32-831b-448a-943f-7e3250ca172b" (UID: "5dd5ce32-831b-448a-943f-7e3250ca172b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.367862 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dd5ce32-831b-448a-943f-7e3250ca172b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.397638 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7ccdb5d4d7-892kp" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.469498 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1d296ac2-d00b-4a99-94e2-78004337f7e2","Type":"ContainerStarted","Data":"09407aacd212fb8fc7ed41a933132635b8d8d5bdb7f56c14c2072f241f1dd105"} Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.505032 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d78ff46f5-xfmdq"] Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.505438 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d78ff46f5-xfmdq" podUID="8b9d5ea3-e589-4578-b37b-59e1690b4d34" containerName="dnsmasq-dns" containerID="cri-o://8fac2f4a9e7d4a712de5b456d2b20aeee1c3ee8e0374bada957a2bb59e642819" gracePeriod=10 Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.517501 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b8mwm" event={"ID":"5dd5ce32-831b-448a-943f-7e3250ca172b","Type":"ContainerDied","Data":"865e0b92fed9126dd7c914380d3e1401bbe91a07a8233d337e5b808de2ace840"} Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.517583 4681 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="865e0b92fed9126dd7c914380d3e1401bbe91a07a8233d337e5b808de2ace840" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.517677 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-b8mwm" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.600811 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-54b559f6bf-jcd2p"] Nov 23 06:59:33 crc kubenswrapper[4681]: E1123 06:59:33.601327 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dd5ce32-831b-448a-943f-7e3250ca172b" containerName="keystone-bootstrap" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.601422 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dd5ce32-831b-448a-943f-7e3250ca172b" containerName="keystone-bootstrap" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.601692 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dd5ce32-831b-448a-943f-7e3250ca172b" containerName="keystone-bootstrap" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.602384 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-54b559f6bf-jcd2p" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.608036 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.608109 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.608342 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.608376 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.608496 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.608944 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-k72qg" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.639673 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-54b559f6bf-jcd2p"] Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.790837 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfbs5\" (UniqueName: \"kubernetes.io/projected/c70eedef-81d9-41d8-a2ee-93abe69aa20a-kube-api-access-bfbs5\") pod \"keystone-54b559f6bf-jcd2p\" (UID: \"c70eedef-81d9-41d8-a2ee-93abe69aa20a\") " pod="openstack/keystone-54b559f6bf-jcd2p" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.790898 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c70eedef-81d9-41d8-a2ee-93abe69aa20a-fernet-keys\") pod \"keystone-54b559f6bf-jcd2p\" (UID: \"c70eedef-81d9-41d8-a2ee-93abe69aa20a\") " pod="openstack/keystone-54b559f6bf-jcd2p" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.790980 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c70eedef-81d9-41d8-a2ee-93abe69aa20a-scripts\") pod \"keystone-54b559f6bf-jcd2p\" (UID: \"c70eedef-81d9-41d8-a2ee-93abe69aa20a\") " 
pod="openstack/keystone-54b559f6bf-jcd2p" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.791056 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c70eedef-81d9-41d8-a2ee-93abe69aa20a-credential-keys\") pod \"keystone-54b559f6bf-jcd2p\" (UID: \"c70eedef-81d9-41d8-a2ee-93abe69aa20a\") " pod="openstack/keystone-54b559f6bf-jcd2p" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.791115 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c70eedef-81d9-41d8-a2ee-93abe69aa20a-combined-ca-bundle\") pod \"keystone-54b559f6bf-jcd2p\" (UID: \"c70eedef-81d9-41d8-a2ee-93abe69aa20a\") " pod="openstack/keystone-54b559f6bf-jcd2p" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.791258 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c70eedef-81d9-41d8-a2ee-93abe69aa20a-internal-tls-certs\") pod \"keystone-54b559f6bf-jcd2p\" (UID: \"c70eedef-81d9-41d8-a2ee-93abe69aa20a\") " pod="openstack/keystone-54b559f6bf-jcd2p" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.791409 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c70eedef-81d9-41d8-a2ee-93abe69aa20a-config-data\") pod \"keystone-54b559f6bf-jcd2p\" (UID: \"c70eedef-81d9-41d8-a2ee-93abe69aa20a\") " pod="openstack/keystone-54b559f6bf-jcd2p" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.791514 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c70eedef-81d9-41d8-a2ee-93abe69aa20a-public-tls-certs\") pod \"keystone-54b559f6bf-jcd2p\" (UID: \"c70eedef-81d9-41d8-a2ee-93abe69aa20a\") " pod="openstack/keystone-54b559f6bf-jcd2p" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.912750 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c70eedef-81d9-41d8-a2ee-93abe69aa20a-fernet-keys\") pod \"keystone-54b559f6bf-jcd2p\" (UID: \"c70eedef-81d9-41d8-a2ee-93abe69aa20a\") " pod="openstack/keystone-54b559f6bf-jcd2p" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.913850 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfbs5\" (UniqueName: \"kubernetes.io/projected/c70eedef-81d9-41d8-a2ee-93abe69aa20a-kube-api-access-bfbs5\") pod \"keystone-54b559f6bf-jcd2p\" (UID: \"c70eedef-81d9-41d8-a2ee-93abe69aa20a\") " pod="openstack/keystone-54b559f6bf-jcd2p" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.913908 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c70eedef-81d9-41d8-a2ee-93abe69aa20a-scripts\") pod \"keystone-54b559f6bf-jcd2p\" (UID: \"c70eedef-81d9-41d8-a2ee-93abe69aa20a\") " pod="openstack/keystone-54b559f6bf-jcd2p" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.913963 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c70eedef-81d9-41d8-a2ee-93abe69aa20a-credential-keys\") pod \"keystone-54b559f6bf-jcd2p\" (UID: \"c70eedef-81d9-41d8-a2ee-93abe69aa20a\") " 
pod="openstack/keystone-54b559f6bf-jcd2p" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.914009 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c70eedef-81d9-41d8-a2ee-93abe69aa20a-combined-ca-bundle\") pod \"keystone-54b559f6bf-jcd2p\" (UID: \"c70eedef-81d9-41d8-a2ee-93abe69aa20a\") " pod="openstack/keystone-54b559f6bf-jcd2p" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.914026 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c70eedef-81d9-41d8-a2ee-93abe69aa20a-internal-tls-certs\") pod \"keystone-54b559f6bf-jcd2p\" (UID: \"c70eedef-81d9-41d8-a2ee-93abe69aa20a\") " pod="openstack/keystone-54b559f6bf-jcd2p" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.914098 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c70eedef-81d9-41d8-a2ee-93abe69aa20a-config-data\") pod \"keystone-54b559f6bf-jcd2p\" (UID: \"c70eedef-81d9-41d8-a2ee-93abe69aa20a\") " pod="openstack/keystone-54b559f6bf-jcd2p" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.914191 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c70eedef-81d9-41d8-a2ee-93abe69aa20a-public-tls-certs\") pod \"keystone-54b559f6bf-jcd2p\" (UID: \"c70eedef-81d9-41d8-a2ee-93abe69aa20a\") " pod="openstack/keystone-54b559f6bf-jcd2p" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.920132 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c70eedef-81d9-41d8-a2ee-93abe69aa20a-combined-ca-bundle\") pod \"keystone-54b559f6bf-jcd2p\" (UID: \"c70eedef-81d9-41d8-a2ee-93abe69aa20a\") " pod="openstack/keystone-54b559f6bf-jcd2p" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.922573 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c70eedef-81d9-41d8-a2ee-93abe69aa20a-fernet-keys\") pod \"keystone-54b559f6bf-jcd2p\" (UID: \"c70eedef-81d9-41d8-a2ee-93abe69aa20a\") " pod="openstack/keystone-54b559f6bf-jcd2p" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.930799 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c70eedef-81d9-41d8-a2ee-93abe69aa20a-internal-tls-certs\") pod \"keystone-54b559f6bf-jcd2p\" (UID: \"c70eedef-81d9-41d8-a2ee-93abe69aa20a\") " pod="openstack/keystone-54b559f6bf-jcd2p" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.938322 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c70eedef-81d9-41d8-a2ee-93abe69aa20a-scripts\") pod \"keystone-54b559f6bf-jcd2p\" (UID: \"c70eedef-81d9-41d8-a2ee-93abe69aa20a\") " pod="openstack/keystone-54b559f6bf-jcd2p" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.938772 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfbs5\" (UniqueName: \"kubernetes.io/projected/c70eedef-81d9-41d8-a2ee-93abe69aa20a-kube-api-access-bfbs5\") pod \"keystone-54b559f6bf-jcd2p\" (UID: \"c70eedef-81d9-41d8-a2ee-93abe69aa20a\") " pod="openstack/keystone-54b559f6bf-jcd2p" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.940209 4681 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c70eedef-81d9-41d8-a2ee-93abe69aa20a-config-data\") pod \"keystone-54b559f6bf-jcd2p\" (UID: \"c70eedef-81d9-41d8-a2ee-93abe69aa20a\") " pod="openstack/keystone-54b559f6bf-jcd2p" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.942899 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c70eedef-81d9-41d8-a2ee-93abe69aa20a-credential-keys\") pod \"keystone-54b559f6bf-jcd2p\" (UID: \"c70eedef-81d9-41d8-a2ee-93abe69aa20a\") " pod="openstack/keystone-54b559f6bf-jcd2p" Nov 23 06:59:33 crc kubenswrapper[4681]: I1123 06:59:33.948881 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c70eedef-81d9-41d8-a2ee-93abe69aa20a-public-tls-certs\") pod \"keystone-54b559f6bf-jcd2p\" (UID: \"c70eedef-81d9-41d8-a2ee-93abe69aa20a\") " pod="openstack/keystone-54b559f6bf-jcd2p" Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.237572 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-54b559f6bf-jcd2p" Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.483795 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d78ff46f5-xfmdq" Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.528244 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8b9d5ea3-e589-4578-b37b-59e1690b4d34-dns-swift-storage-0\") pod \"8b9d5ea3-e589-4578-b37b-59e1690b4d34\" (UID: \"8b9d5ea3-e589-4578-b37b-59e1690b4d34\") " Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.528297 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56pt2\" (UniqueName: \"kubernetes.io/projected/8b9d5ea3-e589-4578-b37b-59e1690b4d34-kube-api-access-56pt2\") pod \"8b9d5ea3-e589-4578-b37b-59e1690b4d34\" (UID: \"8b9d5ea3-e589-4578-b37b-59e1690b4d34\") " Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.528421 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b9d5ea3-e589-4578-b37b-59e1690b4d34-config\") pod \"8b9d5ea3-e589-4578-b37b-59e1690b4d34\" (UID: \"8b9d5ea3-e589-4578-b37b-59e1690b4d34\") " Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.539869 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b9d5ea3-e589-4578-b37b-59e1690b4d34-dns-svc\") pod \"8b9d5ea3-e589-4578-b37b-59e1690b4d34\" (UID: \"8b9d5ea3-e589-4578-b37b-59e1690b4d34\") " Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.539916 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b9d5ea3-e589-4578-b37b-59e1690b4d34-ovsdbserver-sb\") pod \"8b9d5ea3-e589-4578-b37b-59e1690b4d34\" (UID: \"8b9d5ea3-e589-4578-b37b-59e1690b4d34\") " Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.540017 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b9d5ea3-e589-4578-b37b-59e1690b4d34-ovsdbserver-nb\") pod \"8b9d5ea3-e589-4578-b37b-59e1690b4d34\" (UID: \"8b9d5ea3-e589-4578-b37b-59e1690b4d34\") " Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 
06:59:34.559712 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b9d5ea3-e589-4578-b37b-59e1690b4d34-kube-api-access-56pt2" (OuterVolumeSpecName: "kube-api-access-56pt2") pod "8b9d5ea3-e589-4578-b37b-59e1690b4d34" (UID: "8b9d5ea3-e589-4578-b37b-59e1690b4d34"). InnerVolumeSpecName "kube-api-access-56pt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.579784 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1d296ac2-d00b-4a99-94e2-78004337f7e2","Type":"ContainerStarted","Data":"282e8ed7d9fd4833336c33fac17adabd73756095e67863205afa215b5ecfebf4"} Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.584873 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66f57f4546-f9rcd" event={"ID":"6f60731b-ffd1-40de-8d08-2f3d17e4db9d","Type":"ContainerStarted","Data":"8c69f672eae3690f0ba0240bd38bc3d0e65bcad9587e2771aae893973e9310e3"} Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.584923 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66f57f4546-f9rcd" event={"ID":"6f60731b-ffd1-40de-8d08-2f3d17e4db9d","Type":"ContainerStarted","Data":"920acaa2af8162ed32d655cc9aab9124bc2796cdfae9b246499b68de16a3c382"} Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.585587 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-66f57f4546-f9rcd" Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.585698 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-66f57f4546-f9rcd" Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.607222 4681 generic.go:334] "Generic (PLEG): container finished" podID="8b9d5ea3-e589-4578-b37b-59e1690b4d34" containerID="8fac2f4a9e7d4a712de5b456d2b20aeee1c3ee8e0374bada957a2bb59e642819" exitCode=0 Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.607276 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d78ff46f5-xfmdq" event={"ID":"8b9d5ea3-e589-4578-b37b-59e1690b4d34","Type":"ContainerDied","Data":"8fac2f4a9e7d4a712de5b456d2b20aeee1c3ee8e0374bada957a2bb59e642819"} Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.607302 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d78ff46f5-xfmdq" event={"ID":"8b9d5ea3-e589-4578-b37b-59e1690b4d34","Type":"ContainerDied","Data":"833f645e7cbf56efda1024f8b536df1f35393fcddcc6916607271a6b8be465de"} Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.607320 4681 scope.go:117] "RemoveContainer" containerID="8fac2f4a9e7d4a712de5b456d2b20aeee1c3ee8e0374bada957a2bb59e642819" Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.607445 4681 util.go:48] "No ready sandbox for pod can be found. 
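The dnsmasq-dns-d78ff46f5-xfmdq shutdown can be timed directly from the records above: the API DELETE at 06:59:33.505 produced "Killing container with a grace period" with gracePeriod=10, and PLEG reported the container finished with exitCode=0 at 06:59:34.607. A worked check of that interval:

    from datetime import datetime

    # Timestamps copied from the two records above (truncated to microseconds).
    kill = datetime(2025, 11, 23, 6, 59, 33, 505438)   # "Killing container with a grace period"
    died = datetime(2025, 11, 23, 6, 59, 34, 607222)   # "Generic (PLEG): container finished", exitCode=0
    print((died - kill).total_seconds())               # ~1.10 s

dnsmasq exited cleanly well inside the 10 s grace period; had it ignored the termination signal, the kubelet would have waited out the full gracePeriod before forcing a kill.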
Need to start a new one" pod="openstack/dnsmasq-dns-d78ff46f5-xfmdq" Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.607962 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.607940848 podStartE2EDuration="4.607940848s" podCreationTimestamp="2025-11-23 06:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:59:34.607611045 +0000 UTC m=+911.677120282" watchObservedRunningTime="2025-11-23 06:59:34.607940848 +0000 UTC m=+911.677450085" Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.647279 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56pt2\" (UniqueName: \"kubernetes.io/projected/8b9d5ea3-e589-4578-b37b-59e1690b4d34-kube-api-access-56pt2\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.663579 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-66f57f4546-f9rcd" podStartSLOduration=3.663564515 podStartE2EDuration="3.663564515s" podCreationTimestamp="2025-11-23 06:59:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:59:34.641916268 +0000 UTC m=+911.711425505" watchObservedRunningTime="2025-11-23 06:59:34.663564515 +0000 UTC m=+911.733073753" Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.674513 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b9d5ea3-e589-4578-b37b-59e1690b4d34-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8b9d5ea3-e589-4578-b37b-59e1690b4d34" (UID: "8b9d5ea3-e589-4578-b37b-59e1690b4d34"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.689622 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b9d5ea3-e589-4578-b37b-59e1690b4d34-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8b9d5ea3-e589-4578-b37b-59e1690b4d34" (UID: "8b9d5ea3-e589-4578-b37b-59e1690b4d34"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.698840 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b9d5ea3-e589-4578-b37b-59e1690b4d34-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8b9d5ea3-e589-4578-b37b-59e1690b4d34" (UID: "8b9d5ea3-e589-4578-b37b-59e1690b4d34"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.725085 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b9d5ea3-e589-4578-b37b-59e1690b4d34-config" (OuterVolumeSpecName: "config") pod "8b9d5ea3-e589-4578-b37b-59e1690b4d34" (UID: "8b9d5ea3-e589-4578-b37b-59e1690b4d34"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.727111 4681 scope.go:117] "RemoveContainer" containerID="428e1ede2e12cbecdcf00c415c8f48c73758ae1595b0268141534cf1479164ab" Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.735923 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b9d5ea3-e589-4578-b37b-59e1690b4d34-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8b9d5ea3-e589-4578-b37b-59e1690b4d34" (UID: "8b9d5ea3-e589-4578-b37b-59e1690b4d34"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.749604 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b9d5ea3-e589-4578-b37b-59e1690b4d34-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.749631 4681 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b9d5ea3-e589-4578-b37b-59e1690b4d34-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.749644 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b9d5ea3-e589-4578-b37b-59e1690b4d34-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.749658 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b9d5ea3-e589-4578-b37b-59e1690b4d34-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.749668 4681 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8b9d5ea3-e589-4578-b37b-59e1690b4d34-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.784162 4681 scope.go:117] "RemoveContainer" containerID="8fac2f4a9e7d4a712de5b456d2b20aeee1c3ee8e0374bada957a2bb59e642819" Nov 23 06:59:34 crc kubenswrapper[4681]: E1123 06:59:34.785040 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fac2f4a9e7d4a712de5b456d2b20aeee1c3ee8e0374bada957a2bb59e642819\": container with ID starting with 8fac2f4a9e7d4a712de5b456d2b20aeee1c3ee8e0374bada957a2bb59e642819 not found: ID does not exist" containerID="8fac2f4a9e7d4a712de5b456d2b20aeee1c3ee8e0374bada957a2bb59e642819" Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.785077 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fac2f4a9e7d4a712de5b456d2b20aeee1c3ee8e0374bada957a2bb59e642819"} err="failed to get container status \"8fac2f4a9e7d4a712de5b456d2b20aeee1c3ee8e0374bada957a2bb59e642819\": rpc error: code = NotFound desc = could not find container \"8fac2f4a9e7d4a712de5b456d2b20aeee1c3ee8e0374bada957a2bb59e642819\": container with ID starting with 8fac2f4a9e7d4a712de5b456d2b20aeee1c3ee8e0374bada957a2bb59e642819 not found: ID does not exist" Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.785128 4681 scope.go:117] "RemoveContainer" containerID="428e1ede2e12cbecdcf00c415c8f48c73758ae1595b0268141534cf1479164ab" Nov 23 06:59:34 crc kubenswrapper[4681]: E1123 06:59:34.785675 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"428e1ede2e12cbecdcf00c415c8f48c73758ae1595b0268141534cf1479164ab\": container with ID starting with 428e1ede2e12cbecdcf00c415c8f48c73758ae1595b0268141534cf1479164ab not found: ID does not exist" containerID="428e1ede2e12cbecdcf00c415c8f48c73758ae1595b0268141534cf1479164ab" Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.785711 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"428e1ede2e12cbecdcf00c415c8f48c73758ae1595b0268141534cf1479164ab"} err="failed to get container status \"428e1ede2e12cbecdcf00c415c8f48c73758ae1595b0268141534cf1479164ab\": rpc error: code = NotFound desc = could not find container \"428e1ede2e12cbecdcf00c415c8f48c73758ae1595b0268141534cf1479164ab\": container with ID starting with 428e1ede2e12cbecdcf00c415c8f48c73758ae1595b0268141534cf1479164ab not found: ID does not exist" Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.962490 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d78ff46f5-xfmdq"] Nov 23 06:59:34 crc kubenswrapper[4681]: I1123 06:59:34.998834 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d78ff46f5-xfmdq"] Nov 23 06:59:35 crc kubenswrapper[4681]: I1123 06:59:35.036746 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-54b559f6bf-jcd2p"] Nov 23 06:59:35 crc kubenswrapper[4681]: E1123 06:59:35.190097 4681 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b9d5ea3_e589_4578_b37b_59e1690b4d34.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b9d5ea3_e589_4578_b37b_59e1690b4d34.slice/crio-833f645e7cbf56efda1024f8b536df1f35393fcddcc6916607271a6b8be465de\": RecentStats: unable to find data in memory cache]" Nov 23 06:59:35 crc kubenswrapper[4681]: I1123 06:59:35.279380 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b9d5ea3-e589-4578-b37b-59e1690b4d34" path="/var/lib/kubelet/pods/8b9d5ea3-e589-4578-b37b-59e1690b4d34/volumes" Nov 23 06:59:35 crc kubenswrapper[4681]: I1123 06:59:35.627575 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-54b559f6bf-jcd2p" event={"ID":"c70eedef-81d9-41d8-a2ee-93abe69aa20a","Type":"ContainerStarted","Data":"f43c85cd9d0ca94d1a81886c5df5587d8a5ffc0eab0fa1f0ad4c033dfff87d15"} Nov 23 06:59:35 crc kubenswrapper[4681]: I1123 06:59:35.627641 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-54b559f6bf-jcd2p" event={"ID":"c70eedef-81d9-41d8-a2ee-93abe69aa20a","Type":"ContainerStarted","Data":"28d71bf8f74fa2e47992c5a9d5332dd43c2ebd0ac2650f5db01b35f95894fbb1"} Nov 23 06:59:35 crc kubenswrapper[4681]: I1123 06:59:35.628294 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-54b559f6bf-jcd2p" Nov 23 06:59:35 crc kubenswrapper[4681]: I1123 06:59:35.680218 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-54b559f6bf-jcd2p" podStartSLOduration=2.68019836 podStartE2EDuration="2.68019836s" podCreationTimestamp="2025-11-23 06:59:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:59:35.669418707 +0000 UTC m=+912.738927943" watchObservedRunningTime="2025-11-23 
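The "RemoveContainer" / NotFound pairs above show the kubelet re-issuing deletion for container IDs CRI-O has already pruned; DeleteContainer merely fails to fetch the status of an ID that no longer exists, so the errors are harmless. The underlying pattern is an idempotent delete, sketched here with a hypothetical runtime client (this is not kubelet code):

    class NotFound(Exception):
        """Raised by the (hypothetical) runtime client for unknown container IDs."""

    def remove_container(runtime, container_id):
        """Delete a container, treating an already-gone container as success."""
        try:
            runtime.remove(container_id)
        except NotFound:
            pass  # already removed, e.g. the runtime pruned it first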
Nov 23 06:59:35 crc kubenswrapper[4681]: I1123 06:59:35.680218 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-54b559f6bf-jcd2p" podStartSLOduration=2.68019836 podStartE2EDuration="2.68019836s" podCreationTimestamp="2025-11-23 06:59:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:59:35.669418707 +0000 UTC m=+912.738927943" watchObservedRunningTime="2025-11-23 06:59:35.68019836 +0000 UTC m=+912.749707597"
Nov 23 06:59:37 crc kubenswrapper[4681]: I1123 06:59:37.626665 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Nov 23 06:59:37 crc kubenswrapper[4681]: I1123 06:59:37.626990 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Nov 23 06:59:37 crc kubenswrapper[4681]: I1123 06:59:37.704444 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Nov 23 06:59:37 crc kubenswrapper[4681]: I1123 06:59:37.720657 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Nov 23 06:59:38 crc kubenswrapper[4681]: I1123 06:59:38.675377 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Nov 23 06:59:38 crc kubenswrapper[4681]: I1123 06:59:38.675673 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Nov 23 06:59:39 crc kubenswrapper[4681]: I1123 06:59:39.043632 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7c48d564b8-5tf9h" podUID="21819725-3a3a-448c-8bda-e78701b78360" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused"
Nov 23 06:59:39 crc kubenswrapper[4681]: I1123 06:59:39.104532 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-fcdb4576d-g8stp" podUID="bdfa433c-2b77-4373-877f-5c92a2b39fb8" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.156:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.156:8443: connect: connection refused"
Nov 23 06:59:39 crc kubenswrapper[4681]: I1123 06:59:39.697384 4681 generic.go:334] "Generic (PLEG): container finished" podID="00916d9f-8ce3-47d9-a32f-e2deb3514ede" containerID="14bbe87d6009d7ad711ca056bfca4be6c099751bca74e516b1ae4373f0158ce1" exitCode=0
Nov 23 06:59:39 crc kubenswrapper[4681]: I1123 06:59:39.698697 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-fbbdq" event={"ID":"00916d9f-8ce3-47d9-a32f-e2deb3514ede","Type":"ContainerDied","Data":"14bbe87d6009d7ad711ca056bfca4be6c099751bca74e516b1ae4373f0158ce1"}
Nov 23 06:59:40 crc kubenswrapper[4681]: I1123 06:59:40.706221 4681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 23 06:59:40 crc kubenswrapper[4681]: I1123 06:59:40.706261 4681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 23 06:59:41 crc kubenswrapper[4681]: I1123 06:59:41.003316 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Nov 23 06:59:41 crc kubenswrapper[4681]: I1123 06:59:41.004046 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Nov 23 06:59:41 crc kubenswrapper[4681]: I1123 06:59:41.044311 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Nov 23 06:59:41 crc kubenswrapper[4681]: I1123 06:59:41.047930 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
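The horizon Startup probe failures above are plain HTTPS GETs refused at the pod IPs (10.217.0.155 and 10.217.0.156, port 8443), i.e. nothing was listening yet behind /dashboard/. The same check can be reproduced by hand; a sketch assuming the pod network is reachable from where it runs, and skipping certificate verification as kubelet HTTPS probes do:

    import ssl, urllib.request

    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    url = "https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/"
    try:
        with urllib.request.urlopen(url, context=ctx, timeout=1) as resp:
            print("probe ok:", resp.status)    # a 2xx/3xx response counts as success
    except OSError as exc:
        print("probe failed:", exc)            # here: connection refused, as logged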
Nov 23 06:59:41 crc kubenswrapper[4681]: I1123 06:59:41.719590 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Nov 23 06:59:42 crc kubenswrapper[4681]: I1123 06:59:42.092137 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Nov 23 06:59:42 crc kubenswrapper[4681]: I1123 06:59:42.092498 4681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 23 06:59:42 crc kubenswrapper[4681]: I1123 06:59:42.299024 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 06:59:42 crc kubenswrapper[4681]: I1123 06:59:42.299098 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 06:59:42 crc kubenswrapper[4681]: I1123 06:59:42.311853 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Nov 23 06:59:42 crc kubenswrapper[4681]: I1123 06:59:42.730526 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-fbbdq" event={"ID":"00916d9f-8ce3-47d9-a32f-e2deb3514ede","Type":"ContainerDied","Data":"a5206e169df0b5439eb3755e864da8c406e02bbc49affcc1a40636aeb5c6d317"}
Nov 23 06:59:42 crc kubenswrapper[4681]: I1123 06:59:42.730582 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5206e169df0b5439eb3755e864da8c406e02bbc49affcc1a40636aeb5c6d317"
Nov 23 06:59:42 crc kubenswrapper[4681]: I1123 06:59:42.759100 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-fbbdq"
Nov 23 06:59:42 crc kubenswrapper[4681]: I1123 06:59:42.876572 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00916d9f-8ce3-47d9-a32f-e2deb3514ede-config-data\") pod \"00916d9f-8ce3-47d9-a32f-e2deb3514ede\" (UID: \"00916d9f-8ce3-47d9-a32f-e2deb3514ede\") "
Nov 23 06:59:42 crc kubenswrapper[4681]: I1123 06:59:42.876835 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00916d9f-8ce3-47d9-a32f-e2deb3514ede-combined-ca-bundle\") pod \"00916d9f-8ce3-47d9-a32f-e2deb3514ede\" (UID: \"00916d9f-8ce3-47d9-a32f-e2deb3514ede\") "
Nov 23 06:59:42 crc kubenswrapper[4681]: I1123 06:59:42.876950 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-426hm\" (UniqueName: \"kubernetes.io/projected/00916d9f-8ce3-47d9-a32f-e2deb3514ede-kube-api-access-426hm\") pod \"00916d9f-8ce3-47d9-a32f-e2deb3514ede\" (UID: \"00916d9f-8ce3-47d9-a32f-e2deb3514ede\") "
Nov 23 06:59:42 crc kubenswrapper[4681]: I1123 06:59:42.902075 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00916d9f-8ce3-47d9-a32f-e2deb3514ede-kube-api-access-426hm" (OuterVolumeSpecName: "kube-api-access-426hm") pod "00916d9f-8ce3-47d9-a32f-e2deb3514ede" (UID: "00916d9f-8ce3-47d9-a32f-e2deb3514ede"). InnerVolumeSpecName "kube-api-access-426hm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:59:42 crc kubenswrapper[4681]: I1123 06:59:42.950859 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00916d9f-8ce3-47d9-a32f-e2deb3514ede-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00916d9f-8ce3-47d9-a32f-e2deb3514ede" (UID: "00916d9f-8ce3-47d9-a32f-e2deb3514ede"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:59:42 crc kubenswrapper[4681]: I1123 06:59:42.980547 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00916d9f-8ce3-47d9-a32f-e2deb3514ede-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 06:59:42 crc kubenswrapper[4681]: I1123 06:59:42.980591 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-426hm\" (UniqueName: \"kubernetes.io/projected/00916d9f-8ce3-47d9-a32f-e2deb3514ede-kube-api-access-426hm\") on node \"crc\" DevicePath \"\""
Nov 23 06:59:43 crc kubenswrapper[4681]: I1123 06:59:43.018611 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00916d9f-8ce3-47d9-a32f-e2deb3514ede-config-data" (OuterVolumeSpecName: "config-data") pod "00916d9f-8ce3-47d9-a32f-e2deb3514ede" (UID: "00916d9f-8ce3-47d9-a32f-e2deb3514ede"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:59:43 crc kubenswrapper[4681]: I1123 06:59:43.084433 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00916d9f-8ce3-47d9-a32f-e2deb3514ede-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 06:59:43 crc kubenswrapper[4681]: I1123 06:59:43.755669 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-frn6w" event={"ID":"95e9b025-0fa7-4a41-a18c-e4f078b82c43","Type":"ContainerStarted","Data":"d9b70e48c34aa62c0c87f47450bc2d43c1752010c23fc9f615afbd1eaf7f6873"}
Nov 23 06:59:43 crc kubenswrapper[4681]: I1123 06:59:43.774969 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4gs5w" event={"ID":"d426ed81-18f9-441e-9865-b9a6d683931f","Type":"ContainerStarted","Data":"69c8d2488fbc645452db6c62e7d3c880fa1d9652016d1c23f5b202c3444a34bd"}
Nov 23 06:59:43 crc kubenswrapper[4681]: I1123 06:59:43.780526 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-frn6w" podStartSLOduration=3.060145228 podStartE2EDuration="54.780512382s" podCreationTimestamp="2025-11-23 06:58:49 +0000 UTC" firstStartedPulling="2025-11-23 06:58:51.148319816 +0000 UTC m=+868.217829053" lastFinishedPulling="2025-11-23 06:59:42.86868697 +0000 UTC m=+919.938196207" observedRunningTime="2025-11-23 06:59:43.774857681 +0000 UTC m=+920.844366907" watchObservedRunningTime="2025-11-23 06:59:43.780512382 +0000 UTC m=+920.850021620"
Nov 23 06:59:43 crc kubenswrapper[4681]: I1123 06:59:43.793620 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-fbbdq"
Nov 23 06:59:43 crc kubenswrapper[4681]: I1123 06:59:43.794292 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2483649a-baa7-4c82-92d5-b3e2aff97ab2","Type":"ContainerStarted","Data":"0ff583c9c29e694a7b38e9392f3f10523c52cac8769bcde58a4b7e50ffde47c2"}
Nov 23 06:59:43 crc kubenswrapper[4681]: I1123 06:59:43.811379 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-4gs5w" podStartSLOduration=3.853375608 podStartE2EDuration="55.811360371s" podCreationTimestamp="2025-11-23 06:58:48 +0000 UTC" firstStartedPulling="2025-11-23 06:58:50.914623547 +0000 UTC m=+867.984132775" lastFinishedPulling="2025-11-23 06:59:42.872608301 +0000 UTC m=+919.942117538" observedRunningTime="2025-11-23 06:59:43.791438232 +0000 UTC m=+920.860947479" watchObservedRunningTime="2025-11-23 06:59:43.811360371 +0000 UTC m=+920.880869608"
Nov 23 06:59:44 crc kubenswrapper[4681]: I1123 06:59:44.108548 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Nov 23 06:59:44 crc kubenswrapper[4681]: I1123 06:59:44.108961 4681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 23 06:59:44 crc kubenswrapper[4681]: I1123 06:59:44.109644 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Nov 23 06:59:46 crc kubenswrapper[4681]: I1123 06:59:46.828520 4681 generic.go:334] "Generic (PLEG): container finished" podID="95e9b025-0fa7-4a41-a18c-e4f078b82c43" containerID="d9b70e48c34aa62c0c87f47450bc2d43c1752010c23fc9f615afbd1eaf7f6873" exitCode=0
Nov 23 06:59:46 crc kubenswrapper[4681]: I1123 06:59:46.828612 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-frn6w" event={"ID":"95e9b025-0fa7-4a41-a18c-e4f078b82c43","Type":"ContainerDied","Data":"d9b70e48c34aa62c0c87f47450bc2d43c1752010c23fc9f615afbd1eaf7f6873"}
pod="openstack/barbican-db-sync-frn6w" event={"ID":"95e9b025-0fa7-4a41-a18c-e4f078b82c43","Type":"ContainerDied","Data":"d9b70e48c34aa62c0c87f47450bc2d43c1752010c23fc9f615afbd1eaf7f6873"} Nov 23 06:59:48 crc kubenswrapper[4681]: I1123 06:59:48.264765 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-frn6w" Nov 23 06:59:48 crc kubenswrapper[4681]: I1123 06:59:48.317908 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/95e9b025-0fa7-4a41-a18c-e4f078b82c43-db-sync-config-data\") pod \"95e9b025-0fa7-4a41-a18c-e4f078b82c43\" (UID: \"95e9b025-0fa7-4a41-a18c-e4f078b82c43\") " Nov 23 06:59:48 crc kubenswrapper[4681]: I1123 06:59:48.318080 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95e9b025-0fa7-4a41-a18c-e4f078b82c43-combined-ca-bundle\") pod \"95e9b025-0fa7-4a41-a18c-e4f078b82c43\" (UID: \"95e9b025-0fa7-4a41-a18c-e4f078b82c43\") " Nov 23 06:59:48 crc kubenswrapper[4681]: I1123 06:59:48.318204 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjdl9\" (UniqueName: \"kubernetes.io/projected/95e9b025-0fa7-4a41-a18c-e4f078b82c43-kube-api-access-jjdl9\") pod \"95e9b025-0fa7-4a41-a18c-e4f078b82c43\" (UID: \"95e9b025-0fa7-4a41-a18c-e4f078b82c43\") " Nov 23 06:59:48 crc kubenswrapper[4681]: I1123 06:59:48.327756 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95e9b025-0fa7-4a41-a18c-e4f078b82c43-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "95e9b025-0fa7-4a41-a18c-e4f078b82c43" (UID: "95e9b025-0fa7-4a41-a18c-e4f078b82c43"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:59:48 crc kubenswrapper[4681]: I1123 06:59:48.342033 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95e9b025-0fa7-4a41-a18c-e4f078b82c43-kube-api-access-jjdl9" (OuterVolumeSpecName: "kube-api-access-jjdl9") pod "95e9b025-0fa7-4a41-a18c-e4f078b82c43" (UID: "95e9b025-0fa7-4a41-a18c-e4f078b82c43"). InnerVolumeSpecName "kube-api-access-jjdl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:59:48 crc kubenswrapper[4681]: I1123 06:59:48.386445 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95e9b025-0fa7-4a41-a18c-e4f078b82c43-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95e9b025-0fa7-4a41-a18c-e4f078b82c43" (UID: "95e9b025-0fa7-4a41-a18c-e4f078b82c43"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:59:48 crc kubenswrapper[4681]: I1123 06:59:48.423039 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95e9b025-0fa7-4a41-a18c-e4f078b82c43-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:48 crc kubenswrapper[4681]: I1123 06:59:48.423167 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjdl9\" (UniqueName: \"kubernetes.io/projected/95e9b025-0fa7-4a41-a18c-e4f078b82c43-kube-api-access-jjdl9\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:48 crc kubenswrapper[4681]: I1123 06:59:48.423237 4681 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/95e9b025-0fa7-4a41-a18c-e4f078b82c43-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:48 crc kubenswrapper[4681]: I1123 06:59:48.855135 4681 generic.go:334] "Generic (PLEG): container finished" podID="d426ed81-18f9-441e-9865-b9a6d683931f" containerID="69c8d2488fbc645452db6c62e7d3c880fa1d9652016d1c23f5b202c3444a34bd" exitCode=0 Nov 23 06:59:48 crc kubenswrapper[4681]: I1123 06:59:48.855192 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4gs5w" event={"ID":"d426ed81-18f9-441e-9865-b9a6d683931f","Type":"ContainerDied","Data":"69c8d2488fbc645452db6c62e7d3c880fa1d9652016d1c23f5b202c3444a34bd"} Nov 23 06:59:48 crc kubenswrapper[4681]: I1123 06:59:48.858674 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-frn6w" event={"ID":"95e9b025-0fa7-4a41-a18c-e4f078b82c43","Type":"ContainerDied","Data":"f3950d753520416d0adbfcd1dc0bbf7e068d5ffea5ef72d1e591d0ea0e41476e"} Nov 23 06:59:48 crc kubenswrapper[4681]: I1123 06:59:48.858723 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3950d753520416d0adbfcd1dc0bbf7e068d5ffea5ef72d1e591d0ea0e41476e" Nov 23 06:59:48 crc kubenswrapper[4681]: I1123 06:59:48.858699 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-frn6w" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.042800 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7c48d564b8-5tf9h" podUID="21819725-3a3a-448c-8bda-e78701b78360" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.102897 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-fcdb4576d-g8stp" podUID="bdfa433c-2b77-4373-877f-5c92a2b39fb8" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.156:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.156:8443: connect: connection refused" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.190892 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7fdd9d555f-qrq8m"] Nov 23 06:59:49 crc kubenswrapper[4681]: E1123 06:59:49.191395 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95e9b025-0fa7-4a41-a18c-e4f078b82c43" containerName="barbican-db-sync" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.191415 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="95e9b025-0fa7-4a41-a18c-e4f078b82c43" containerName="barbican-db-sync" Nov 23 06:59:49 crc kubenswrapper[4681]: E1123 06:59:49.191442 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b9d5ea3-e589-4578-b37b-59e1690b4d34" containerName="dnsmasq-dns" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.191449 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b9d5ea3-e589-4578-b37b-59e1690b4d34" containerName="dnsmasq-dns" Nov 23 06:59:49 crc kubenswrapper[4681]: E1123 06:59:49.191488 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00916d9f-8ce3-47d9-a32f-e2deb3514ede" containerName="heat-db-sync" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.191495 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="00916d9f-8ce3-47d9-a32f-e2deb3514ede" containerName="heat-db-sync" Nov 23 06:59:49 crc kubenswrapper[4681]: E1123 06:59:49.191509 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b9d5ea3-e589-4578-b37b-59e1690b4d34" containerName="init" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.191514 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b9d5ea3-e589-4578-b37b-59e1690b4d34" containerName="init" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.191734 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="95e9b025-0fa7-4a41-a18c-e4f078b82c43" containerName="barbican-db-sync" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.191755 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="00916d9f-8ce3-47d9-a32f-e2deb3514ede" containerName="heat-db-sync" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.191776 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b9d5ea3-e589-4578-b37b-59e1690b4d34" containerName="dnsmasq-dns" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.192842 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7fdd9d555f-qrq8m" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.197053 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.197213 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-l56nd" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.203635 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.245535 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7849bb5f4-b6pjl"] Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.247228 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7849bb5f4-b6pjl" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.249470 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a71cfb-fb6a-458b-875d-7beebe8dc444-combined-ca-bundle\") pod \"barbican-worker-7fdd9d555f-qrq8m\" (UID: \"c1a71cfb-fb6a-458b-875d-7beebe8dc444\") " pod="openstack/barbican-worker-7fdd9d555f-qrq8m" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.249522 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcxpr\" (UniqueName: \"kubernetes.io/projected/c1a71cfb-fb6a-458b-875d-7beebe8dc444-kube-api-access-zcxpr\") pod \"barbican-worker-7fdd9d555f-qrq8m\" (UID: \"c1a71cfb-fb6a-458b-875d-7beebe8dc444\") " pod="openstack/barbican-worker-7fdd9d555f-qrq8m" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.249633 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1a71cfb-fb6a-458b-875d-7beebe8dc444-logs\") pod \"barbican-worker-7fdd9d555f-qrq8m\" (UID: \"c1a71cfb-fb6a-458b-875d-7beebe8dc444\") " pod="openstack/barbican-worker-7fdd9d555f-qrq8m" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.249730 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1a71cfb-fb6a-458b-875d-7beebe8dc444-config-data-custom\") pod \"barbican-worker-7fdd9d555f-qrq8m\" (UID: \"c1a71cfb-fb6a-458b-875d-7beebe8dc444\") " pod="openstack/barbican-worker-7fdd9d555f-qrq8m" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.249754 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a71cfb-fb6a-458b-875d-7beebe8dc444-config-data\") pod \"barbican-worker-7fdd9d555f-qrq8m\" (UID: \"c1a71cfb-fb6a-458b-875d-7beebe8dc444\") " pod="openstack/barbican-worker-7fdd9d555f-qrq8m" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.255941 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.278490 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7fdd9d555f-qrq8m"] Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.311583 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-597c64895-s6nch"] Nov 23 
06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.313236 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-597c64895-s6nch" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.346551 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7849bb5f4-b6pjl"] Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.356263 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcxpr\" (UniqueName: \"kubernetes.io/projected/c1a71cfb-fb6a-458b-875d-7beebe8dc444-kube-api-access-zcxpr\") pod \"barbican-worker-7fdd9d555f-qrq8m\" (UID: \"c1a71cfb-fb6a-458b-875d-7beebe8dc444\") " pod="openstack/barbican-worker-7fdd9d555f-qrq8m" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.356375 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f1bbb85-4938-4e69-b236-9c7b17a4636f-config-data-custom\") pod \"barbican-keystone-listener-7849bb5f4-b6pjl\" (UID: \"1f1bbb85-4938-4e69-b236-9c7b17a4636f\") " pod="openstack/barbican-keystone-listener-7849bb5f4-b6pjl" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.356408 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f1bbb85-4938-4e69-b236-9c7b17a4636f-config-data\") pod \"barbican-keystone-listener-7849bb5f4-b6pjl\" (UID: \"1f1bbb85-4938-4e69-b236-9c7b17a4636f\") " pod="openstack/barbican-keystone-listener-7849bb5f4-b6pjl" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.356439 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1a71cfb-fb6a-458b-875d-7beebe8dc444-logs\") pod \"barbican-worker-7fdd9d555f-qrq8m\" (UID: \"c1a71cfb-fb6a-458b-875d-7beebe8dc444\") " pod="openstack/barbican-worker-7fdd9d555f-qrq8m" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.356506 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kctrr\" (UniqueName: \"kubernetes.io/projected/1f1bbb85-4938-4e69-b236-9c7b17a4636f-kube-api-access-kctrr\") pod \"barbican-keystone-listener-7849bb5f4-b6pjl\" (UID: \"1f1bbb85-4938-4e69-b236-9c7b17a4636f\") " pod="openstack/barbican-keystone-listener-7849bb5f4-b6pjl" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.356528 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23e6ab25-e753-4758-a79d-f89855309d8d-dns-svc\") pod \"dnsmasq-dns-597c64895-s6nch\" (UID: \"23e6ab25-e753-4758-a79d-f89855309d8d\") " pod="openstack/dnsmasq-dns-597c64895-s6nch" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.356921 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1a71cfb-fb6a-458b-875d-7beebe8dc444-logs\") pod \"barbican-worker-7fdd9d555f-qrq8m\" (UID: \"c1a71cfb-fb6a-458b-875d-7beebe8dc444\") " pod="openstack/barbican-worker-7fdd9d555f-qrq8m" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.356975 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqp98\" (UniqueName: \"kubernetes.io/projected/23e6ab25-e753-4758-a79d-f89855309d8d-kube-api-access-fqp98\") pod \"dnsmasq-dns-597c64895-s6nch\" (UID: 
\"23e6ab25-e753-4758-a79d-f89855309d8d\") " pod="openstack/dnsmasq-dns-597c64895-s6nch" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.357068 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1a71cfb-fb6a-458b-875d-7beebe8dc444-config-data-custom\") pod \"barbican-worker-7fdd9d555f-qrq8m\" (UID: \"c1a71cfb-fb6a-458b-875d-7beebe8dc444\") " pod="openstack/barbican-worker-7fdd9d555f-qrq8m" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.357087 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23e6ab25-e753-4758-a79d-f89855309d8d-ovsdbserver-nb\") pod \"dnsmasq-dns-597c64895-s6nch\" (UID: \"23e6ab25-e753-4758-a79d-f89855309d8d\") " pod="openstack/dnsmasq-dns-597c64895-s6nch" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.357106 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a71cfb-fb6a-458b-875d-7beebe8dc444-config-data\") pod \"barbican-worker-7fdd9d555f-qrq8m\" (UID: \"c1a71cfb-fb6a-458b-875d-7beebe8dc444\") " pod="openstack/barbican-worker-7fdd9d555f-qrq8m" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.357244 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f1bbb85-4938-4e69-b236-9c7b17a4636f-logs\") pod \"barbican-keystone-listener-7849bb5f4-b6pjl\" (UID: \"1f1bbb85-4938-4e69-b236-9c7b17a4636f\") " pod="openstack/barbican-keystone-listener-7849bb5f4-b6pjl" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.357263 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f1bbb85-4938-4e69-b236-9c7b17a4636f-combined-ca-bundle\") pod \"barbican-keystone-listener-7849bb5f4-b6pjl\" (UID: \"1f1bbb85-4938-4e69-b236-9c7b17a4636f\") " pod="openstack/barbican-keystone-listener-7849bb5f4-b6pjl" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.357301 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23e6ab25-e753-4758-a79d-f89855309d8d-ovsdbserver-sb\") pod \"dnsmasq-dns-597c64895-s6nch\" (UID: \"23e6ab25-e753-4758-a79d-f89855309d8d\") " pod="openstack/dnsmasq-dns-597c64895-s6nch" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.357342 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23e6ab25-e753-4758-a79d-f89855309d8d-config\") pod \"dnsmasq-dns-597c64895-s6nch\" (UID: \"23e6ab25-e753-4758-a79d-f89855309d8d\") " pod="openstack/dnsmasq-dns-597c64895-s6nch" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.357382 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23e6ab25-e753-4758-a79d-f89855309d8d-dns-swift-storage-0\") pod \"dnsmasq-dns-597c64895-s6nch\" (UID: \"23e6ab25-e753-4758-a79d-f89855309d8d\") " pod="openstack/dnsmasq-dns-597c64895-s6nch" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.357399 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c1a71cfb-fb6a-458b-875d-7beebe8dc444-combined-ca-bundle\") pod \"barbican-worker-7fdd9d555f-qrq8m\" (UID: \"c1a71cfb-fb6a-458b-875d-7beebe8dc444\") " pod="openstack/barbican-worker-7fdd9d555f-qrq8m" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.365010 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1a71cfb-fb6a-458b-875d-7beebe8dc444-config-data-custom\") pod \"barbican-worker-7fdd9d555f-qrq8m\" (UID: \"c1a71cfb-fb6a-458b-875d-7beebe8dc444\") " pod="openstack/barbican-worker-7fdd9d555f-qrq8m" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.368229 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a71cfb-fb6a-458b-875d-7beebe8dc444-combined-ca-bundle\") pod \"barbican-worker-7fdd9d555f-qrq8m\" (UID: \"c1a71cfb-fb6a-458b-875d-7beebe8dc444\") " pod="openstack/barbican-worker-7fdd9d555f-qrq8m" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.376141 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a71cfb-fb6a-458b-875d-7beebe8dc444-config-data\") pod \"barbican-worker-7fdd9d555f-qrq8m\" (UID: \"c1a71cfb-fb6a-458b-875d-7beebe8dc444\") " pod="openstack/barbican-worker-7fdd9d555f-qrq8m" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.395579 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcxpr\" (UniqueName: \"kubernetes.io/projected/c1a71cfb-fb6a-458b-875d-7beebe8dc444-kube-api-access-zcxpr\") pod \"barbican-worker-7fdd9d555f-qrq8m\" (UID: \"c1a71cfb-fb6a-458b-875d-7beebe8dc444\") " pod="openstack/barbican-worker-7fdd9d555f-qrq8m" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.441520 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-597c64895-s6nch"] Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.459934 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f1bbb85-4938-4e69-b236-9c7b17a4636f-config-data-custom\") pod \"barbican-keystone-listener-7849bb5f4-b6pjl\" (UID: \"1f1bbb85-4938-4e69-b236-9c7b17a4636f\") " pod="openstack/barbican-keystone-listener-7849bb5f4-b6pjl" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.460069 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f1bbb85-4938-4e69-b236-9c7b17a4636f-config-data\") pod \"barbican-keystone-listener-7849bb5f4-b6pjl\" (UID: \"1f1bbb85-4938-4e69-b236-9c7b17a4636f\") " pod="openstack/barbican-keystone-listener-7849bb5f4-b6pjl" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.460189 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kctrr\" (UniqueName: \"kubernetes.io/projected/1f1bbb85-4938-4e69-b236-9c7b17a4636f-kube-api-access-kctrr\") pod \"barbican-keystone-listener-7849bb5f4-b6pjl\" (UID: \"1f1bbb85-4938-4e69-b236-9c7b17a4636f\") " pod="openstack/barbican-keystone-listener-7849bb5f4-b6pjl" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.460269 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23e6ab25-e753-4758-a79d-f89855309d8d-dns-svc\") pod \"dnsmasq-dns-597c64895-s6nch\" (UID: \"23e6ab25-e753-4758-a79d-f89855309d8d\") " 
pod="openstack/dnsmasq-dns-597c64895-s6nch" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.460346 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqp98\" (UniqueName: \"kubernetes.io/projected/23e6ab25-e753-4758-a79d-f89855309d8d-kube-api-access-fqp98\") pod \"dnsmasq-dns-597c64895-s6nch\" (UID: \"23e6ab25-e753-4758-a79d-f89855309d8d\") " pod="openstack/dnsmasq-dns-597c64895-s6nch" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.460445 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23e6ab25-e753-4758-a79d-f89855309d8d-ovsdbserver-nb\") pod \"dnsmasq-dns-597c64895-s6nch\" (UID: \"23e6ab25-e753-4758-a79d-f89855309d8d\") " pod="openstack/dnsmasq-dns-597c64895-s6nch" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.460639 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f1bbb85-4938-4e69-b236-9c7b17a4636f-logs\") pod \"barbican-keystone-listener-7849bb5f4-b6pjl\" (UID: \"1f1bbb85-4938-4e69-b236-9c7b17a4636f\") " pod="openstack/barbican-keystone-listener-7849bb5f4-b6pjl" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.460697 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f1bbb85-4938-4e69-b236-9c7b17a4636f-combined-ca-bundle\") pod \"barbican-keystone-listener-7849bb5f4-b6pjl\" (UID: \"1f1bbb85-4938-4e69-b236-9c7b17a4636f\") " pod="openstack/barbican-keystone-listener-7849bb5f4-b6pjl" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.460772 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23e6ab25-e753-4758-a79d-f89855309d8d-ovsdbserver-sb\") pod \"dnsmasq-dns-597c64895-s6nch\" (UID: \"23e6ab25-e753-4758-a79d-f89855309d8d\") " pod="openstack/dnsmasq-dns-597c64895-s6nch" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.460848 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23e6ab25-e753-4758-a79d-f89855309d8d-config\") pod \"dnsmasq-dns-597c64895-s6nch\" (UID: \"23e6ab25-e753-4758-a79d-f89855309d8d\") " pod="openstack/dnsmasq-dns-597c64895-s6nch" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.460933 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23e6ab25-e753-4758-a79d-f89855309d8d-dns-swift-storage-0\") pod \"dnsmasq-dns-597c64895-s6nch\" (UID: \"23e6ab25-e753-4758-a79d-f89855309d8d\") " pod="openstack/dnsmasq-dns-597c64895-s6nch" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.461805 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23e6ab25-e753-4758-a79d-f89855309d8d-dns-swift-storage-0\") pod \"dnsmasq-dns-597c64895-s6nch\" (UID: \"23e6ab25-e753-4758-a79d-f89855309d8d\") " pod="openstack/dnsmasq-dns-597c64895-s6nch" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.463122 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23e6ab25-e753-4758-a79d-f89855309d8d-ovsdbserver-nb\") pod \"dnsmasq-dns-597c64895-s6nch\" (UID: \"23e6ab25-e753-4758-a79d-f89855309d8d\") " 
pod="openstack/dnsmasq-dns-597c64895-s6nch" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.464042 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23e6ab25-e753-4758-a79d-f89855309d8d-dns-svc\") pod \"dnsmasq-dns-597c64895-s6nch\" (UID: \"23e6ab25-e753-4758-a79d-f89855309d8d\") " pod="openstack/dnsmasq-dns-597c64895-s6nch" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.464775 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f1bbb85-4938-4e69-b236-9c7b17a4636f-logs\") pod \"barbican-keystone-listener-7849bb5f4-b6pjl\" (UID: \"1f1bbb85-4938-4e69-b236-9c7b17a4636f\") " pod="openstack/barbican-keystone-listener-7849bb5f4-b6pjl" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.465367 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23e6ab25-e753-4758-a79d-f89855309d8d-ovsdbserver-sb\") pod \"dnsmasq-dns-597c64895-s6nch\" (UID: \"23e6ab25-e753-4758-a79d-f89855309d8d\") " pod="openstack/dnsmasq-dns-597c64895-s6nch" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.465867 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23e6ab25-e753-4758-a79d-f89855309d8d-config\") pod \"dnsmasq-dns-597c64895-s6nch\" (UID: \"23e6ab25-e753-4758-a79d-f89855309d8d\") " pod="openstack/dnsmasq-dns-597c64895-s6nch" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.471068 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f1bbb85-4938-4e69-b236-9c7b17a4636f-combined-ca-bundle\") pod \"barbican-keystone-listener-7849bb5f4-b6pjl\" (UID: \"1f1bbb85-4938-4e69-b236-9c7b17a4636f\") " pod="openstack/barbican-keystone-listener-7849bb5f4-b6pjl" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.495093 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f1bbb85-4938-4e69-b236-9c7b17a4636f-config-data\") pod \"barbican-keystone-listener-7849bb5f4-b6pjl\" (UID: \"1f1bbb85-4938-4e69-b236-9c7b17a4636f\") " pod="openstack/barbican-keystone-listener-7849bb5f4-b6pjl" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.495470 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f1bbb85-4938-4e69-b236-9c7b17a4636f-config-data-custom\") pod \"barbican-keystone-listener-7849bb5f4-b6pjl\" (UID: \"1f1bbb85-4938-4e69-b236-9c7b17a4636f\") " pod="openstack/barbican-keystone-listener-7849bb5f4-b6pjl" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.497951 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqp98\" (UniqueName: \"kubernetes.io/projected/23e6ab25-e753-4758-a79d-f89855309d8d-kube-api-access-fqp98\") pod \"dnsmasq-dns-597c64895-s6nch\" (UID: \"23e6ab25-e753-4758-a79d-f89855309d8d\") " pod="openstack/dnsmasq-dns-597c64895-s6nch" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.516836 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7fdd9d555f-qrq8m" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.545639 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kctrr\" (UniqueName: \"kubernetes.io/projected/1f1bbb85-4938-4e69-b236-9c7b17a4636f-kube-api-access-kctrr\") pod \"barbican-keystone-listener-7849bb5f4-b6pjl\" (UID: \"1f1bbb85-4938-4e69-b236-9c7b17a4636f\") " pod="openstack/barbican-keystone-listener-7849bb5f4-b6pjl" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.549042 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6bb6dddd54-bttkq"] Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.554272 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6bb6dddd54-bttkq" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.556800 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6bb6dddd54-bttkq"] Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.557221 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.574115 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7849bb5f4-b6pjl" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.637280 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-597c64895-s6nch" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.676615 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97338e7f-0f80-4f47-905f-59df8aef837b-combined-ca-bundle\") pod \"barbican-api-6bb6dddd54-bttkq\" (UID: \"97338e7f-0f80-4f47-905f-59df8aef837b\") " pod="openstack/barbican-api-6bb6dddd54-bttkq" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.676715 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97338e7f-0f80-4f47-905f-59df8aef837b-config-data\") pod \"barbican-api-6bb6dddd54-bttkq\" (UID: \"97338e7f-0f80-4f47-905f-59df8aef837b\") " pod="openstack/barbican-api-6bb6dddd54-bttkq" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.676791 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97338e7f-0f80-4f47-905f-59df8aef837b-logs\") pod \"barbican-api-6bb6dddd54-bttkq\" (UID: \"97338e7f-0f80-4f47-905f-59df8aef837b\") " pod="openstack/barbican-api-6bb6dddd54-bttkq" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.676820 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97338e7f-0f80-4f47-905f-59df8aef837b-config-data-custom\") pod \"barbican-api-6bb6dddd54-bttkq\" (UID: \"97338e7f-0f80-4f47-905f-59df8aef837b\") " pod="openstack/barbican-api-6bb6dddd54-bttkq" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.676857 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pt5w\" (UniqueName: \"kubernetes.io/projected/97338e7f-0f80-4f47-905f-59df8aef837b-kube-api-access-7pt5w\") pod \"barbican-api-6bb6dddd54-bttkq\" (UID: 
\"97338e7f-0f80-4f47-905f-59df8aef837b\") " pod="openstack/barbican-api-6bb6dddd54-bttkq" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.778897 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97338e7f-0f80-4f47-905f-59df8aef837b-combined-ca-bundle\") pod \"barbican-api-6bb6dddd54-bttkq\" (UID: \"97338e7f-0f80-4f47-905f-59df8aef837b\") " pod="openstack/barbican-api-6bb6dddd54-bttkq" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.779246 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97338e7f-0f80-4f47-905f-59df8aef837b-config-data\") pod \"barbican-api-6bb6dddd54-bttkq\" (UID: \"97338e7f-0f80-4f47-905f-59df8aef837b\") " pod="openstack/barbican-api-6bb6dddd54-bttkq" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.779491 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97338e7f-0f80-4f47-905f-59df8aef837b-logs\") pod \"barbican-api-6bb6dddd54-bttkq\" (UID: \"97338e7f-0f80-4f47-905f-59df8aef837b\") " pod="openstack/barbican-api-6bb6dddd54-bttkq" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.779624 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97338e7f-0f80-4f47-905f-59df8aef837b-config-data-custom\") pod \"barbican-api-6bb6dddd54-bttkq\" (UID: \"97338e7f-0f80-4f47-905f-59df8aef837b\") " pod="openstack/barbican-api-6bb6dddd54-bttkq" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.779754 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pt5w\" (UniqueName: \"kubernetes.io/projected/97338e7f-0f80-4f47-905f-59df8aef837b-kube-api-access-7pt5w\") pod \"barbican-api-6bb6dddd54-bttkq\" (UID: \"97338e7f-0f80-4f47-905f-59df8aef837b\") " pod="openstack/barbican-api-6bb6dddd54-bttkq" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.779939 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97338e7f-0f80-4f47-905f-59df8aef837b-logs\") pod \"barbican-api-6bb6dddd54-bttkq\" (UID: \"97338e7f-0f80-4f47-905f-59df8aef837b\") " pod="openstack/barbican-api-6bb6dddd54-bttkq" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.787385 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97338e7f-0f80-4f47-905f-59df8aef837b-combined-ca-bundle\") pod \"barbican-api-6bb6dddd54-bttkq\" (UID: \"97338e7f-0f80-4f47-905f-59df8aef837b\") " pod="openstack/barbican-api-6bb6dddd54-bttkq" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.787885 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97338e7f-0f80-4f47-905f-59df8aef837b-config-data\") pod \"barbican-api-6bb6dddd54-bttkq\" (UID: \"97338e7f-0f80-4f47-905f-59df8aef837b\") " pod="openstack/barbican-api-6bb6dddd54-bttkq" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.788426 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97338e7f-0f80-4f47-905f-59df8aef837b-config-data-custom\") pod \"barbican-api-6bb6dddd54-bttkq\" (UID: \"97338e7f-0f80-4f47-905f-59df8aef837b\") " pod="openstack/barbican-api-6bb6dddd54-bttkq" 
Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.795978 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pt5w\" (UniqueName: \"kubernetes.io/projected/97338e7f-0f80-4f47-905f-59df8aef837b-kube-api-access-7pt5w\") pod \"barbican-api-6bb6dddd54-bttkq\" (UID: \"97338e7f-0f80-4f47-905f-59df8aef837b\") " pod="openstack/barbican-api-6bb6dddd54-bttkq" Nov 23 06:59:49 crc kubenswrapper[4681]: I1123 06:59:49.901350 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6bb6dddd54-bttkq" Nov 23 06:59:52 crc kubenswrapper[4681]: I1123 06:59:52.926218 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6b7dd84c8b-57zgx"] Nov 23 06:59:52 crc kubenswrapper[4681]: I1123 06:59:52.928134 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6b7dd84c8b-57zgx" Nov 23 06:59:52 crc kubenswrapper[4681]: I1123 06:59:52.934742 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 23 06:59:52 crc kubenswrapper[4681]: I1123 06:59:52.943438 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 23 06:59:52 crc kubenswrapper[4681]: I1123 06:59:52.957555 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6b7dd84c8b-57zgx"] Nov 23 06:59:53 crc kubenswrapper[4681]: I1123 06:59:53.079972 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69085e5b-69b6-421a-aaa2-066bb27620d1-public-tls-certs\") pod \"barbican-api-6b7dd84c8b-57zgx\" (UID: \"69085e5b-69b6-421a-aaa2-066bb27620d1\") " pod="openstack/barbican-api-6b7dd84c8b-57zgx" Nov 23 06:59:53 crc kubenswrapper[4681]: I1123 06:59:53.080101 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69085e5b-69b6-421a-aaa2-066bb27620d1-config-data-custom\") pod \"barbican-api-6b7dd84c8b-57zgx\" (UID: \"69085e5b-69b6-421a-aaa2-066bb27620d1\") " pod="openstack/barbican-api-6b7dd84c8b-57zgx" Nov 23 06:59:53 crc kubenswrapper[4681]: I1123 06:59:53.080167 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69085e5b-69b6-421a-aaa2-066bb27620d1-internal-tls-certs\") pod \"barbican-api-6b7dd84c8b-57zgx\" (UID: \"69085e5b-69b6-421a-aaa2-066bb27620d1\") " pod="openstack/barbican-api-6b7dd84c8b-57zgx" Nov 23 06:59:53 crc kubenswrapper[4681]: I1123 06:59:53.080202 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh5nv\" (UniqueName: \"kubernetes.io/projected/69085e5b-69b6-421a-aaa2-066bb27620d1-kube-api-access-zh5nv\") pod \"barbican-api-6b7dd84c8b-57zgx\" (UID: \"69085e5b-69b6-421a-aaa2-066bb27620d1\") " pod="openstack/barbican-api-6b7dd84c8b-57zgx" Nov 23 06:59:53 crc kubenswrapper[4681]: I1123 06:59:53.080352 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69085e5b-69b6-421a-aaa2-066bb27620d1-combined-ca-bundle\") pod \"barbican-api-6b7dd84c8b-57zgx\" (UID: \"69085e5b-69b6-421a-aaa2-066bb27620d1\") " pod="openstack/barbican-api-6b7dd84c8b-57zgx" Nov 23 06:59:53 crc kubenswrapper[4681]: 
Nov 23 06:59:53 crc kubenswrapper[4681]: I1123 06:59:53.080387 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69085e5b-69b6-421a-aaa2-066bb27620d1-config-data\") pod \"barbican-api-6b7dd84c8b-57zgx\" (UID: \"69085e5b-69b6-421a-aaa2-066bb27620d1\") " pod="openstack/barbican-api-6b7dd84c8b-57zgx"
Nov 23 06:59:53 crc kubenswrapper[4681]: I1123 06:59:53.080469 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69085e5b-69b6-421a-aaa2-066bb27620d1-logs\") pod \"barbican-api-6b7dd84c8b-57zgx\" (UID: \"69085e5b-69b6-421a-aaa2-066bb27620d1\") " pod="openstack/barbican-api-6b7dd84c8b-57zgx"
Nov 23 06:59:53 crc kubenswrapper[4681]: I1123 06:59:53.157846 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-759dcb765b-std9h"
Nov 23 06:59:53 crc kubenswrapper[4681]: I1123 06:59:53.182227 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69085e5b-69b6-421a-aaa2-066bb27620d1-combined-ca-bundle\") pod \"barbican-api-6b7dd84c8b-57zgx\" (UID: \"69085e5b-69b6-421a-aaa2-066bb27620d1\") " pod="openstack/barbican-api-6b7dd84c8b-57zgx"
Nov 23 06:59:53 crc kubenswrapper[4681]: I1123 06:59:53.183321 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69085e5b-69b6-421a-aaa2-066bb27620d1-config-data\") pod \"barbican-api-6b7dd84c8b-57zgx\" (UID: \"69085e5b-69b6-421a-aaa2-066bb27620d1\") " pod="openstack/barbican-api-6b7dd84c8b-57zgx"
Nov 23 06:59:53 crc kubenswrapper[4681]: I1123 06:59:53.183773 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69085e5b-69b6-421a-aaa2-066bb27620d1-logs\") pod \"barbican-api-6b7dd84c8b-57zgx\" (UID: \"69085e5b-69b6-421a-aaa2-066bb27620d1\") " pod="openstack/barbican-api-6b7dd84c8b-57zgx"
Nov 23 06:59:53 crc kubenswrapper[4681]: I1123 06:59:53.183904 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69085e5b-69b6-421a-aaa2-066bb27620d1-public-tls-certs\") pod \"barbican-api-6b7dd84c8b-57zgx\" (UID: \"69085e5b-69b6-421a-aaa2-066bb27620d1\") " pod="openstack/barbican-api-6b7dd84c8b-57zgx"
Nov 23 06:59:53 crc kubenswrapper[4681]: I1123 06:59:53.183993 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69085e5b-69b6-421a-aaa2-066bb27620d1-config-data-custom\") pod \"barbican-api-6b7dd84c8b-57zgx\" (UID: \"69085e5b-69b6-421a-aaa2-066bb27620d1\") " pod="openstack/barbican-api-6b7dd84c8b-57zgx"
Nov 23 06:59:53 crc kubenswrapper[4681]: I1123 06:59:53.184067 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69085e5b-69b6-421a-aaa2-066bb27620d1-internal-tls-certs\") pod \"barbican-api-6b7dd84c8b-57zgx\" (UID: \"69085e5b-69b6-421a-aaa2-066bb27620d1\") " pod="openstack/barbican-api-6b7dd84c8b-57zgx"
Nov 23 06:59:53 crc kubenswrapper[4681]: I1123 06:59:53.184193 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh5nv\" (UniqueName: \"kubernetes.io/projected/69085e5b-69b6-421a-aaa2-066bb27620d1-kube-api-access-zh5nv\") pod \"barbican-api-6b7dd84c8b-57zgx\" (UID: \"69085e5b-69b6-421a-aaa2-066bb27620d1\") " pod="openstack/barbican-api-6b7dd84c8b-57zgx"
Nov 23 06:59:53 crc kubenswrapper[4681]: I1123 06:59:53.184203 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69085e5b-69b6-421a-aaa2-066bb27620d1-logs\") pod \"barbican-api-6b7dd84c8b-57zgx\" (UID: \"69085e5b-69b6-421a-aaa2-066bb27620d1\") " pod="openstack/barbican-api-6b7dd84c8b-57zgx"
Nov 23 06:59:53 crc kubenswrapper[4681]: I1123 06:59:53.194152 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69085e5b-69b6-421a-aaa2-066bb27620d1-config-data-custom\") pod \"barbican-api-6b7dd84c8b-57zgx\" (UID: \"69085e5b-69b6-421a-aaa2-066bb27620d1\") " pod="openstack/barbican-api-6b7dd84c8b-57zgx"
Nov 23 06:59:53 crc kubenswrapper[4681]: I1123 06:59:53.202570 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69085e5b-69b6-421a-aaa2-066bb27620d1-public-tls-certs\") pod \"barbican-api-6b7dd84c8b-57zgx\" (UID: \"69085e5b-69b6-421a-aaa2-066bb27620d1\") " pod="openstack/barbican-api-6b7dd84c8b-57zgx"
Nov 23 06:59:53 crc kubenswrapper[4681]: I1123 06:59:53.203925 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69085e5b-69b6-421a-aaa2-066bb27620d1-config-data\") pod \"barbican-api-6b7dd84c8b-57zgx\" (UID: \"69085e5b-69b6-421a-aaa2-066bb27620d1\") " pod="openstack/barbican-api-6b7dd84c8b-57zgx"
Nov 23 06:59:53 crc kubenswrapper[4681]: I1123 06:59:53.224103 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69085e5b-69b6-421a-aaa2-066bb27620d1-combined-ca-bundle\") pod \"barbican-api-6b7dd84c8b-57zgx\" (UID: \"69085e5b-69b6-421a-aaa2-066bb27620d1\") " pod="openstack/barbican-api-6b7dd84c8b-57zgx"
Nov 23 06:59:53 crc kubenswrapper[4681]: I1123 06:59:53.242613 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69085e5b-69b6-421a-aaa2-066bb27620d1-internal-tls-certs\") pod \"barbican-api-6b7dd84c8b-57zgx\" (UID: \"69085e5b-69b6-421a-aaa2-066bb27620d1\") " pod="openstack/barbican-api-6b7dd84c8b-57zgx"
Nov 23 06:59:53 crc kubenswrapper[4681]: I1123 06:59:53.252866 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh5nv\" (UniqueName: \"kubernetes.io/projected/69085e5b-69b6-421a-aaa2-066bb27620d1-kube-api-access-zh5nv\") pod \"barbican-api-6b7dd84c8b-57zgx\" (UID: \"69085e5b-69b6-421a-aaa2-066bb27620d1\") " pod="openstack/barbican-api-6b7dd84c8b-57zgx"
Nov 23 06:59:53 crc kubenswrapper[4681]: I1123 06:59:53.551370 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6b7dd84c8b-57zgx"
Need to start a new one" pod="openstack/cinder-db-sync-4gs5w" Nov 23 06:59:55 crc kubenswrapper[4681]: I1123 06:59:55.955990 4681 generic.go:334] "Generic (PLEG): container finished" podID="2f95ab62-e0ad-4566-bbfd-29e2ad374edf" containerID="da1db64be7e782ae46e4c5e141005925fec41eddf0809a920f923eafae375c41" exitCode=137 Nov 23 06:59:55 crc kubenswrapper[4681]: I1123 06:59:55.956020 4681 generic.go:334] "Generic (PLEG): container finished" podID="2f95ab62-e0ad-4566-bbfd-29e2ad374edf" containerID="405fcd023d71625841d6784694bca0ce578e0b3ccf94cb59330d90d166178b17" exitCode=137 Nov 23 06:59:55 crc kubenswrapper[4681]: I1123 06:59:55.956143 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-845ccd5479-79qz5" event={"ID":"2f95ab62-e0ad-4566-bbfd-29e2ad374edf","Type":"ContainerDied","Data":"da1db64be7e782ae46e4c5e141005925fec41eddf0809a920f923eafae375c41"} Nov 23 06:59:55 crc kubenswrapper[4681]: I1123 06:59:55.956182 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-845ccd5479-79qz5" event={"ID":"2f95ab62-e0ad-4566-bbfd-29e2ad374edf","Type":"ContainerDied","Data":"405fcd023d71625841d6784694bca0ce578e0b3ccf94cb59330d90d166178b17"} Nov 23 06:59:55 crc kubenswrapper[4681]: I1123 06:59:55.966486 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d426ed81-18f9-441e-9865-b9a6d683931f-combined-ca-bundle\") pod \"d426ed81-18f9-441e-9865-b9a6d683931f\" (UID: \"d426ed81-18f9-441e-9865-b9a6d683931f\") " Nov 23 06:59:55 crc kubenswrapper[4681]: I1123 06:59:55.966531 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwbq9\" (UniqueName: \"kubernetes.io/projected/d426ed81-18f9-441e-9865-b9a6d683931f-kube-api-access-fwbq9\") pod \"d426ed81-18f9-441e-9865-b9a6d683931f\" (UID: \"d426ed81-18f9-441e-9865-b9a6d683931f\") " Nov 23 06:59:55 crc kubenswrapper[4681]: I1123 06:59:55.966661 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d426ed81-18f9-441e-9865-b9a6d683931f-scripts\") pod \"d426ed81-18f9-441e-9865-b9a6d683931f\" (UID: \"d426ed81-18f9-441e-9865-b9a6d683931f\") " Nov 23 06:59:55 crc kubenswrapper[4681]: I1123 06:59:55.966727 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d426ed81-18f9-441e-9865-b9a6d683931f-etc-machine-id\") pod \"d426ed81-18f9-441e-9865-b9a6d683931f\" (UID: \"d426ed81-18f9-441e-9865-b9a6d683931f\") " Nov 23 06:59:55 crc kubenswrapper[4681]: I1123 06:59:55.966753 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d426ed81-18f9-441e-9865-b9a6d683931f-config-data\") pod \"d426ed81-18f9-441e-9865-b9a6d683931f\" (UID: \"d426ed81-18f9-441e-9865-b9a6d683931f\") " Nov 23 06:59:55 crc kubenswrapper[4681]: I1123 06:59:55.966910 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d426ed81-18f9-441e-9865-b9a6d683931f-db-sync-config-data\") pod \"d426ed81-18f9-441e-9865-b9a6d683931f\" (UID: \"d426ed81-18f9-441e-9865-b9a6d683931f\") " Nov 23 06:59:55 crc kubenswrapper[4681]: I1123 06:59:55.968012 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d426ed81-18f9-441e-9865-b9a6d683931f-etc-machine-id" 
(OuterVolumeSpecName: "etc-machine-id") pod "d426ed81-18f9-441e-9865-b9a6d683931f" (UID: "d426ed81-18f9-441e-9865-b9a6d683931f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:59:55 crc kubenswrapper[4681]: I1123 06:59:55.973926 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d426ed81-18f9-441e-9865-b9a6d683931f-scripts" (OuterVolumeSpecName: "scripts") pod "d426ed81-18f9-441e-9865-b9a6d683931f" (UID: "d426ed81-18f9-441e-9865-b9a6d683931f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:59:55 crc kubenswrapper[4681]: I1123 06:59:55.975627 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d426ed81-18f9-441e-9865-b9a6d683931f-kube-api-access-fwbq9" (OuterVolumeSpecName: "kube-api-access-fwbq9") pod "d426ed81-18f9-441e-9865-b9a6d683931f" (UID: "d426ed81-18f9-441e-9865-b9a6d683931f"). InnerVolumeSpecName "kube-api-access-fwbq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:59:55 crc kubenswrapper[4681]: I1123 06:59:55.976435 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4gs5w" event={"ID":"d426ed81-18f9-441e-9865-b9a6d683931f","Type":"ContainerDied","Data":"16ccf63dba1b72e27708d2c7e53ebe8a0d06980eb2c85ea18a2cce90e526d58f"} Nov 23 06:59:55 crc kubenswrapper[4681]: I1123 06:59:55.976473 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16ccf63dba1b72e27708d2c7e53ebe8a0d06980eb2c85ea18a2cce90e526d58f" Nov 23 06:59:55 crc kubenswrapper[4681]: I1123 06:59:55.976542 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-4gs5w" Nov 23 06:59:55 crc kubenswrapper[4681]: I1123 06:59:55.983985 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d426ed81-18f9-441e-9865-b9a6d683931f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d426ed81-18f9-441e-9865-b9a6d683931f" (UID: "d426ed81-18f9-441e-9865-b9a6d683931f"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:59:55 crc kubenswrapper[4681]: I1123 06:59:55.986396 4681 generic.go:334] "Generic (PLEG): container finished" podID="203e0f9e-791d-4b8e-9521-b7b334fcacf6" containerID="fd0a88d8aa81cd1911a38df63de2d67edad47e7de3b17fb3653538d94febcd1a" exitCode=137 Nov 23 06:59:55 crc kubenswrapper[4681]: I1123 06:59:55.986421 4681 generic.go:334] "Generic (PLEG): container finished" podID="203e0f9e-791d-4b8e-9521-b7b334fcacf6" containerID="8b83e6b57ed80aa6780ce5641bbb95a07b50733d3dccf25e7ab868a5610bfc13" exitCode=137 Nov 23 06:59:55 crc kubenswrapper[4681]: I1123 06:59:55.986436 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c5444c6b5-7cd6d" event={"ID":"203e0f9e-791d-4b8e-9521-b7b334fcacf6","Type":"ContainerDied","Data":"fd0a88d8aa81cd1911a38df63de2d67edad47e7de3b17fb3653538d94febcd1a"} Nov 23 06:59:55 crc kubenswrapper[4681]: I1123 06:59:55.986450 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c5444c6b5-7cd6d" event={"ID":"203e0f9e-791d-4b8e-9521-b7b334fcacf6","Type":"ContainerDied","Data":"8b83e6b57ed80aa6780ce5641bbb95a07b50733d3dccf25e7ab868a5610bfc13"} Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.002092 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d426ed81-18f9-441e-9865-b9a6d683931f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d426ed81-18f9-441e-9865-b9a6d683931f" (UID: "d426ed81-18f9-441e-9865-b9a6d683931f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.025473 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d426ed81-18f9-441e-9865-b9a6d683931f-config-data" (OuterVolumeSpecName: "config-data") pod "d426ed81-18f9-441e-9865-b9a6d683931f" (UID: "d426ed81-18f9-441e-9865-b9a6d683931f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.070417 4681 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d426ed81-18f9-441e-9865-b9a6d683931f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.070449 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d426ed81-18f9-441e-9865-b9a6d683931f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.070473 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwbq9\" (UniqueName: \"kubernetes.io/projected/d426ed81-18f9-441e-9865-b9a6d683931f-kube-api-access-fwbq9\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.070484 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d426ed81-18f9-441e-9865-b9a6d683931f-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.070494 4681 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d426ed81-18f9-441e-9865-b9a6d683931f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.070502 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d426ed81-18f9-441e-9865-b9a6d683931f-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.112812 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7dd5999bb7-tlr49" Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.208640 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-759dcb765b-std9h"] Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.209443 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-759dcb765b-std9h" podUID="abe896c0-87f4-4c4c-b23a-81a10a557aed" containerName="neutron-api" containerID="cri-o://7cc2ab3f82b6b7f29bfde6f35b40da9fdbb3b525f8acef809a105402bf70e395" gracePeriod=30 Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.209870 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-759dcb765b-std9h" podUID="abe896c0-87f4-4c4c-b23a-81a10a557aed" containerName="neutron-httpd" containerID="cri-o://aa66d58e3366f90d416b2c24908e0e060d82706229b4fad9d8e1cd986edae3bf" gracePeriod=30 Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.327689 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6c5444c6b5-7cd6d" Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.459400 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-845ccd5479-79qz5" Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.479083 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/203e0f9e-791d-4b8e-9521-b7b334fcacf6-logs\") pod \"203e0f9e-791d-4b8e-9521-b7b334fcacf6\" (UID: \"203e0f9e-791d-4b8e-9521-b7b334fcacf6\") " Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.479233 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/203e0f9e-791d-4b8e-9521-b7b334fcacf6-horizon-secret-key\") pod \"203e0f9e-791d-4b8e-9521-b7b334fcacf6\" (UID: \"203e0f9e-791d-4b8e-9521-b7b334fcacf6\") " Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.479364 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/203e0f9e-791d-4b8e-9521-b7b334fcacf6-config-data\") pod \"203e0f9e-791d-4b8e-9521-b7b334fcacf6\" (UID: \"203e0f9e-791d-4b8e-9521-b7b334fcacf6\") " Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.479583 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/203e0f9e-791d-4b8e-9521-b7b334fcacf6-scripts\") pod \"203e0f9e-791d-4b8e-9521-b7b334fcacf6\" (UID: \"203e0f9e-791d-4b8e-9521-b7b334fcacf6\") " Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.479659 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbv6b\" (UniqueName: \"kubernetes.io/projected/203e0f9e-791d-4b8e-9521-b7b334fcacf6-kube-api-access-dbv6b\") pod \"203e0f9e-791d-4b8e-9521-b7b334fcacf6\" (UID: \"203e0f9e-791d-4b8e-9521-b7b334fcacf6\") " Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.487048 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/203e0f9e-791d-4b8e-9521-b7b334fcacf6-logs" (OuterVolumeSpecName: "logs") pod "203e0f9e-791d-4b8e-9521-b7b334fcacf6" (UID: "203e0f9e-791d-4b8e-9521-b7b334fcacf6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.512374 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/203e0f9e-791d-4b8e-9521-b7b334fcacf6-kube-api-access-dbv6b" (OuterVolumeSpecName: "kube-api-access-dbv6b") pod "203e0f9e-791d-4b8e-9521-b7b334fcacf6" (UID: "203e0f9e-791d-4b8e-9521-b7b334fcacf6"). InnerVolumeSpecName "kube-api-access-dbv6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.533652 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/203e0f9e-791d-4b8e-9521-b7b334fcacf6-scripts" (OuterVolumeSpecName: "scripts") pod "203e0f9e-791d-4b8e-9521-b7b334fcacf6" (UID: "203e0f9e-791d-4b8e-9521-b7b334fcacf6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.538101 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/203e0f9e-791d-4b8e-9521-b7b334fcacf6-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "203e0f9e-791d-4b8e-9521-b7b334fcacf6" (UID: "203e0f9e-791d-4b8e-9521-b7b334fcacf6"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.541161 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/203e0f9e-791d-4b8e-9521-b7b334fcacf6-config-data" (OuterVolumeSpecName: "config-data") pod "203e0f9e-791d-4b8e-9521-b7b334fcacf6" (UID: "203e0f9e-791d-4b8e-9521-b7b334fcacf6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.556292 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7fdd9d555f-qrq8m"] Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.581107 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2f95ab62-e0ad-4566-bbfd-29e2ad374edf-horizon-secret-key\") pod \"2f95ab62-e0ad-4566-bbfd-29e2ad374edf\" (UID: \"2f95ab62-e0ad-4566-bbfd-29e2ad374edf\") " Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.581180 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f95ab62-e0ad-4566-bbfd-29e2ad374edf-logs\") pod \"2f95ab62-e0ad-4566-bbfd-29e2ad374edf\" (UID: \"2f95ab62-e0ad-4566-bbfd-29e2ad374edf\") " Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.581301 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f95ab62-e0ad-4566-bbfd-29e2ad374edf-config-data\") pod \"2f95ab62-e0ad-4566-bbfd-29e2ad374edf\" (UID: \"2f95ab62-e0ad-4566-bbfd-29e2ad374edf\") " Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.581377 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4gdl\" (UniqueName: \"kubernetes.io/projected/2f95ab62-e0ad-4566-bbfd-29e2ad374edf-kube-api-access-d4gdl\") pod \"2f95ab62-e0ad-4566-bbfd-29e2ad374edf\" (UID: \"2f95ab62-e0ad-4566-bbfd-29e2ad374edf\") " Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.581456 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f95ab62-e0ad-4566-bbfd-29e2ad374edf-scripts\") pod \"2f95ab62-e0ad-4566-bbfd-29e2ad374edf\" (UID: \"2f95ab62-e0ad-4566-bbfd-29e2ad374edf\") " Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.582035 4681 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/203e0f9e-791d-4b8e-9521-b7b334fcacf6-logs\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.582047 4681 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/203e0f9e-791d-4b8e-9521-b7b334fcacf6-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.582055 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/203e0f9e-791d-4b8e-9521-b7b334fcacf6-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.582063 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/203e0f9e-791d-4b8e-9521-b7b334fcacf6-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.582070 4681 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-dbv6b\" (UniqueName: \"kubernetes.io/projected/203e0f9e-791d-4b8e-9521-b7b334fcacf6-kube-api-access-dbv6b\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.582732 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f95ab62-e0ad-4566-bbfd-29e2ad374edf-logs" (OuterVolumeSpecName: "logs") pod "2f95ab62-e0ad-4566-bbfd-29e2ad374edf" (UID: "2f95ab62-e0ad-4566-bbfd-29e2ad374edf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.588558 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f95ab62-e0ad-4566-bbfd-29e2ad374edf-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "2f95ab62-e0ad-4566-bbfd-29e2ad374edf" (UID: "2f95ab62-e0ad-4566-bbfd-29e2ad374edf"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.607942 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f95ab62-e0ad-4566-bbfd-29e2ad374edf-kube-api-access-d4gdl" (OuterVolumeSpecName: "kube-api-access-d4gdl") pod "2f95ab62-e0ad-4566-bbfd-29e2ad374edf" (UID: "2f95ab62-e0ad-4566-bbfd-29e2ad374edf"). InnerVolumeSpecName "kube-api-access-d4gdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.638006 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f95ab62-e0ad-4566-bbfd-29e2ad374edf-scripts" (OuterVolumeSpecName: "scripts") pod "2f95ab62-e0ad-4566-bbfd-29e2ad374edf" (UID: "2f95ab62-e0ad-4566-bbfd-29e2ad374edf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.659021 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f95ab62-e0ad-4566-bbfd-29e2ad374edf-config-data" (OuterVolumeSpecName: "config-data") pod "2f95ab62-e0ad-4566-bbfd-29e2ad374edf" (UID: "2f95ab62-e0ad-4566-bbfd-29e2ad374edf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.687067 4681 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2f95ab62-e0ad-4566-bbfd-29e2ad374edf-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.687103 4681 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f95ab62-e0ad-4566-bbfd-29e2ad374edf-logs\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.687113 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f95ab62-e0ad-4566-bbfd-29e2ad374edf-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.687122 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4gdl\" (UniqueName: \"kubernetes.io/projected/2f95ab62-e0ad-4566-bbfd-29e2ad374edf-kube-api-access-d4gdl\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.687131 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f95ab62-e0ad-4566-bbfd-29e2ad374edf-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.848736 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7849bb5f4-b6pjl"] Nov 23 06:59:56 crc kubenswrapper[4681]: W1123 06:59:56.849891 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f1bbb85_4938_4e69_b236_9c7b17a4636f.slice/crio-83264785555ad042c5a8b4b79690e230cb603281e6da004e30516a6f4a8b1971 WatchSource:0}: Error finding container 83264785555ad042c5a8b4b79690e230cb603281e6da004e30516a6f4a8b1971: Status 404 returned error can't find the container with id 83264785555ad042c5a8b4b79690e230cb603281e6da004e30516a6f4a8b1971 Nov 23 06:59:56 crc kubenswrapper[4681]: I1123 06:59:56.929133 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-597c64895-s6nch"] Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.012361 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7849bb5f4-b6pjl" event={"ID":"1f1bbb85-4938-4e69-b236-9c7b17a4636f","Type":"ContainerStarted","Data":"83264785555ad042c5a8b4b79690e230cb603281e6da004e30516a6f4a8b1971"} Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.056683 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2483649a-baa7-4c82-92d5-b3e2aff97ab2","Type":"ContainerStarted","Data":"a63634c93c0ad1f2f16c98338b64bb42db6d8a79f4cc2ea7ad7f27f4eecebb8a"} Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.056857 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2483649a-baa7-4c82-92d5-b3e2aff97ab2" containerName="ceilometer-central-agent" containerID="cri-o://7bf62d391c99d2c553a79853ac349df2afdefe4ce3af717f8c6fe444384be9ec" gracePeriod=30 Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.057101 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.057363 4681 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="2483649a-baa7-4c82-92d5-b3e2aff97ab2" containerName="proxy-httpd" containerID="cri-o://a63634c93c0ad1f2f16c98338b64bb42db6d8a79f4cc2ea7ad7f27f4eecebb8a" gracePeriod=30 Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.057417 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2483649a-baa7-4c82-92d5-b3e2aff97ab2" containerName="sg-core" containerID="cri-o://0ff583c9c29e694a7b38e9392f3f10523c52cac8769bcde58a4b7e50ffde47c2" gracePeriod=30 Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.057472 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2483649a-baa7-4c82-92d5-b3e2aff97ab2" containerName="ceilometer-notification-agent" containerID="cri-o://e33218e2cdaab185b40249e2d9e91fa0508971e75bd3af0e4e4904a08838eb75" gracePeriod=30 Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.075617 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6b7dd84c8b-57zgx"] Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.086128 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-845ccd5479-79qz5" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.086653 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-845ccd5479-79qz5" event={"ID":"2f95ab62-e0ad-4566-bbfd-29e2ad374edf","Type":"ContainerDied","Data":"5cb4a2c6f0570027854057f2b06d2920ca8e624ffa1718599aff5111048ed630"} Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.086805 4681 scope.go:117] "RemoveContainer" containerID="da1db64be7e782ae46e4c5e141005925fec41eddf0809a920f923eafae375c41" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.112678 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c5444c6b5-7cd6d" event={"ID":"203e0f9e-791d-4b8e-9521-b7b334fcacf6","Type":"ContainerDied","Data":"fcc8fe4e140585eedac6743672fc9be32f8092fbbeb64d793e13d4db5da135e8"} Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.112800 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6c5444c6b5-7cd6d" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.128149 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6bb6dddd54-bttkq"] Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.130166 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-597c64895-s6nch" event={"ID":"23e6ab25-e753-4758-a79d-f89855309d8d","Type":"ContainerStarted","Data":"fe3426299f914021876f2608e6e71034126b96bff0ea9f200ea76ceac722f940"} Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.135123 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.200061556 podStartE2EDuration="1m8.135109201s" podCreationTimestamp="2025-11-23 06:58:49 +0000 UTC" firstStartedPulling="2025-11-23 06:58:50.989991392 +0000 UTC m=+868.059500629" lastFinishedPulling="2025-11-23 06:59:55.925039037 +0000 UTC m=+932.994548274" observedRunningTime="2025-11-23 06:59:57.096863531 +0000 UTC m=+934.166372768" watchObservedRunningTime="2025-11-23 06:59:57.135109201 +0000 UTC m=+934.204618438" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.154759 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7fdd9d555f-qrq8m" event={"ID":"c1a71cfb-fb6a-458b-875d-7beebe8dc444","Type":"ContainerStarted","Data":"305c4ba1e6959fb3889f42bcdbcdca61314bda915e74d11f735f02360b191114"} Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.164937 4681 generic.go:334] "Generic (PLEG): container finished" podID="abe896c0-87f4-4c4c-b23a-81a10a557aed" containerID="aa66d58e3366f90d416b2c24908e0e060d82706229b4fad9d8e1cd986edae3bf" exitCode=0 Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.164968 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-759dcb765b-std9h" event={"ID":"abe896c0-87f4-4c4c-b23a-81a10a557aed","Type":"ContainerDied","Data":"aa66d58e3366f90d416b2c24908e0e060d82706229b4fad9d8e1cd986edae3bf"} Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.239084 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 06:59:57 crc kubenswrapper[4681]: E1123 06:59:57.239560 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f95ab62-e0ad-4566-bbfd-29e2ad374edf" containerName="horizon" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.239577 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f95ab62-e0ad-4566-bbfd-29e2ad374edf" containerName="horizon" Nov 23 06:59:57 crc kubenswrapper[4681]: E1123 06:59:57.239594 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f95ab62-e0ad-4566-bbfd-29e2ad374edf" containerName="horizon-log" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.239606 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f95ab62-e0ad-4566-bbfd-29e2ad374edf" containerName="horizon-log" Nov 23 06:59:57 crc kubenswrapper[4681]: E1123 06:59:57.239642 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="203e0f9e-791d-4b8e-9521-b7b334fcacf6" containerName="horizon" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.239648 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="203e0f9e-791d-4b8e-9521-b7b334fcacf6" containerName="horizon" Nov 23 06:59:57 crc kubenswrapper[4681]: E1123 06:59:57.239663 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d426ed81-18f9-441e-9865-b9a6d683931f" containerName="cinder-db-sync" Nov 23 
06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.239668 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d426ed81-18f9-441e-9865-b9a6d683931f" containerName="cinder-db-sync" Nov 23 06:59:57 crc kubenswrapper[4681]: E1123 06:59:57.239676 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="203e0f9e-791d-4b8e-9521-b7b334fcacf6" containerName="horizon-log" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.239681 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="203e0f9e-791d-4b8e-9521-b7b334fcacf6" containerName="horizon-log" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.239870 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f95ab62-e0ad-4566-bbfd-29e2ad374edf" containerName="horizon-log" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.239884 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="203e0f9e-791d-4b8e-9521-b7b334fcacf6" containerName="horizon" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.239900 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d426ed81-18f9-441e-9865-b9a6d683931f" containerName="cinder-db-sync" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.239912 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="203e0f9e-791d-4b8e-9521-b7b334fcacf6" containerName="horizon-log" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.239928 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f95ab62-e0ad-4566-bbfd-29e2ad374edf" containerName="horizon" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.240947 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.244296 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.265086 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.265382 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-4sp47" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.265522 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.303448 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.303502 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-845ccd5479-79qz5"] Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.303514 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-845ccd5479-79qz5"] Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.303663 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8405966-0c4a-42eb-bed4-6f6ae19bff63-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d8405966-0c4a-42eb-bed4-6f6ae19bff63\") " pod="openstack/cinder-scheduler-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.303718 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8405966-0c4a-42eb-bed4-6f6ae19bff63-combined-ca-bundle\") 
pod \"cinder-scheduler-0\" (UID: \"d8405966-0c4a-42eb-bed4-6f6ae19bff63\") " pod="openstack/cinder-scheduler-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.303761 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqtd8\" (UniqueName: \"kubernetes.io/projected/d8405966-0c4a-42eb-bed4-6f6ae19bff63-kube-api-access-nqtd8\") pod \"cinder-scheduler-0\" (UID: \"d8405966-0c4a-42eb-bed4-6f6ae19bff63\") " pod="openstack/cinder-scheduler-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.303798 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8405966-0c4a-42eb-bed4-6f6ae19bff63-config-data\") pod \"cinder-scheduler-0\" (UID: \"d8405966-0c4a-42eb-bed4-6f6ae19bff63\") " pod="openstack/cinder-scheduler-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.303824 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8405966-0c4a-42eb-bed4-6f6ae19bff63-scripts\") pod \"cinder-scheduler-0\" (UID: \"d8405966-0c4a-42eb-bed4-6f6ae19bff63\") " pod="openstack/cinder-scheduler-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.303859 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d8405966-0c4a-42eb-bed4-6f6ae19bff63-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d8405966-0c4a-42eb-bed4-6f6ae19bff63\") " pod="openstack/cinder-scheduler-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.317502 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6c5444c6b5-7cd6d"] Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.327286 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6c5444c6b5-7cd6d"] Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.438035 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqtd8\" (UniqueName: \"kubernetes.io/projected/d8405966-0c4a-42eb-bed4-6f6ae19bff63-kube-api-access-nqtd8\") pod \"cinder-scheduler-0\" (UID: \"d8405966-0c4a-42eb-bed4-6f6ae19bff63\") " pod="openstack/cinder-scheduler-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.438221 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8405966-0c4a-42eb-bed4-6f6ae19bff63-config-data\") pod \"cinder-scheduler-0\" (UID: \"d8405966-0c4a-42eb-bed4-6f6ae19bff63\") " pod="openstack/cinder-scheduler-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.438292 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8405966-0c4a-42eb-bed4-6f6ae19bff63-scripts\") pod \"cinder-scheduler-0\" (UID: \"d8405966-0c4a-42eb-bed4-6f6ae19bff63\") " pod="openstack/cinder-scheduler-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.438546 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d8405966-0c4a-42eb-bed4-6f6ae19bff63-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d8405966-0c4a-42eb-bed4-6f6ae19bff63\") " pod="openstack/cinder-scheduler-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.438838 4681 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8405966-0c4a-42eb-bed4-6f6ae19bff63-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d8405966-0c4a-42eb-bed4-6f6ae19bff63\") " pod="openstack/cinder-scheduler-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.438922 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8405966-0c4a-42eb-bed4-6f6ae19bff63-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d8405966-0c4a-42eb-bed4-6f6ae19bff63\") " pod="openstack/cinder-scheduler-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.451057 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d8405966-0c4a-42eb-bed4-6f6ae19bff63-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d8405966-0c4a-42eb-bed4-6f6ae19bff63\") " pod="openstack/cinder-scheduler-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.468266 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-597c64895-s6nch"] Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.469618 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8405966-0c4a-42eb-bed4-6f6ae19bff63-config-data\") pod \"cinder-scheduler-0\" (UID: \"d8405966-0c4a-42eb-bed4-6f6ae19bff63\") " pod="openstack/cinder-scheduler-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.498162 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8405966-0c4a-42eb-bed4-6f6ae19bff63-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d8405966-0c4a-42eb-bed4-6f6ae19bff63\") " pod="openstack/cinder-scheduler-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.498374 4681 scope.go:117] "RemoveContainer" containerID="405fcd023d71625841d6784694bca0ce578e0b3ccf94cb59330d90d166178b17" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.500816 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8405966-0c4a-42eb-bed4-6f6ae19bff63-scripts\") pod \"cinder-scheduler-0\" (UID: \"d8405966-0c4a-42eb-bed4-6f6ae19bff63\") " pod="openstack/cinder-scheduler-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.501200 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8405966-0c4a-42eb-bed4-6f6ae19bff63-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d8405966-0c4a-42eb-bed4-6f6ae19bff63\") " pod="openstack/cinder-scheduler-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.516427 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqtd8\" (UniqueName: \"kubernetes.io/projected/d8405966-0c4a-42eb-bed4-6f6ae19bff63-kube-api-access-nqtd8\") pod \"cinder-scheduler-0\" (UID: \"d8405966-0c4a-42eb-bed4-6f6ae19bff63\") " pod="openstack/cinder-scheduler-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.526633 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-64cc7f6975-jn6mr"] Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.531769 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64cc7f6975-jn6mr" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.581716 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64cc7f6975-jn6mr"] Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.595153 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.613453 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.615232 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.626946 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.648350 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/529f52d4-35e7-4121-899e-0e94d628f72c-dns-svc\") pod \"dnsmasq-dns-64cc7f6975-jn6mr\" (UID: \"529f52d4-35e7-4121-899e-0e94d628f72c\") " pod="openstack/dnsmasq-dns-64cc7f6975-jn6mr" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.648408 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/529f52d4-35e7-4121-899e-0e94d628f72c-config\") pod \"dnsmasq-dns-64cc7f6975-jn6mr\" (UID: \"529f52d4-35e7-4121-899e-0e94d628f72c\") " pod="openstack/dnsmasq-dns-64cc7f6975-jn6mr" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.648577 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/529f52d4-35e7-4121-899e-0e94d628f72c-ovsdbserver-sb\") pod \"dnsmasq-dns-64cc7f6975-jn6mr\" (UID: \"529f52d4-35e7-4121-899e-0e94d628f72c\") " pod="openstack/dnsmasq-dns-64cc7f6975-jn6mr" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.648634 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfdsf\" (UniqueName: \"kubernetes.io/projected/529f52d4-35e7-4121-899e-0e94d628f72c-kube-api-access-rfdsf\") pod \"dnsmasq-dns-64cc7f6975-jn6mr\" (UID: \"529f52d4-35e7-4121-899e-0e94d628f72c\") " pod="openstack/dnsmasq-dns-64cc7f6975-jn6mr" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.648782 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/529f52d4-35e7-4121-899e-0e94d628f72c-ovsdbserver-nb\") pod \"dnsmasq-dns-64cc7f6975-jn6mr\" (UID: \"529f52d4-35e7-4121-899e-0e94d628f72c\") " pod="openstack/dnsmasq-dns-64cc7f6975-jn6mr" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.648847 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/529f52d4-35e7-4121-899e-0e94d628f72c-dns-swift-storage-0\") pod \"dnsmasq-dns-64cc7f6975-jn6mr\" (UID: \"529f52d4-35e7-4121-899e-0e94d628f72c\") " pod="openstack/dnsmasq-dns-64cc7f6975-jn6mr" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.670042 4681 scope.go:117] "RemoveContainer" containerID="fd0a88d8aa81cd1911a38df63de2d67edad47e7de3b17fb3653538d94febcd1a" Nov 23 06:59:57 crc kubenswrapper[4681]: 
I1123 06:59:57.726875 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.750835 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-logs\") pod \"cinder-api-0\" (UID: \"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8\") " pod="openstack/cinder-api-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.750890 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8\") " pod="openstack/cinder-api-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.750936 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-config-data-custom\") pod \"cinder-api-0\" (UID: \"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8\") " pod="openstack/cinder-api-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.750964 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/529f52d4-35e7-4121-899e-0e94d628f72c-dns-svc\") pod \"dnsmasq-dns-64cc7f6975-jn6mr\" (UID: \"529f52d4-35e7-4121-899e-0e94d628f72c\") " pod="openstack/dnsmasq-dns-64cc7f6975-jn6mr" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.750990 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/529f52d4-35e7-4121-899e-0e94d628f72c-config\") pod \"dnsmasq-dns-64cc7f6975-jn6mr\" (UID: \"529f52d4-35e7-4121-899e-0e94d628f72c\") " pod="openstack/dnsmasq-dns-64cc7f6975-jn6mr" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.751023 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/529f52d4-35e7-4121-899e-0e94d628f72c-ovsdbserver-sb\") pod \"dnsmasq-dns-64cc7f6975-jn6mr\" (UID: \"529f52d4-35e7-4121-899e-0e94d628f72c\") " pod="openstack/dnsmasq-dns-64cc7f6975-jn6mr" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.751044 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8\") " pod="openstack/cinder-api-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.751127 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfdsf\" (UniqueName: \"kubernetes.io/projected/529f52d4-35e7-4121-899e-0e94d628f72c-kube-api-access-rfdsf\") pod \"dnsmasq-dns-64cc7f6975-jn6mr\" (UID: \"529f52d4-35e7-4121-899e-0e94d628f72c\") " pod="openstack/dnsmasq-dns-64cc7f6975-jn6mr" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.751270 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/529f52d4-35e7-4121-899e-0e94d628f72c-ovsdbserver-nb\") pod \"dnsmasq-dns-64cc7f6975-jn6mr\" (UID: \"529f52d4-35e7-4121-899e-0e94d628f72c\") " pod="openstack/dnsmasq-dns-64cc7f6975-jn6mr" Nov 23 06:59:57 crc 
kubenswrapper[4681]: I1123 06:59:57.751313 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-config-data\") pod \"cinder-api-0\" (UID: \"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8\") " pod="openstack/cinder-api-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.751355 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/529f52d4-35e7-4121-899e-0e94d628f72c-dns-swift-storage-0\") pod \"dnsmasq-dns-64cc7f6975-jn6mr\" (UID: \"529f52d4-35e7-4121-899e-0e94d628f72c\") " pod="openstack/dnsmasq-dns-64cc7f6975-jn6mr" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.751374 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kc8r\" (UniqueName: \"kubernetes.io/projected/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-kube-api-access-2kc8r\") pod \"cinder-api-0\" (UID: \"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8\") " pod="openstack/cinder-api-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.751497 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-scripts\") pod \"cinder-api-0\" (UID: \"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8\") " pod="openstack/cinder-api-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.751924 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/529f52d4-35e7-4121-899e-0e94d628f72c-dns-svc\") pod \"dnsmasq-dns-64cc7f6975-jn6mr\" (UID: \"529f52d4-35e7-4121-899e-0e94d628f72c\") " pod="openstack/dnsmasq-dns-64cc7f6975-jn6mr" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.752420 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/529f52d4-35e7-4121-899e-0e94d628f72c-ovsdbserver-nb\") pod \"dnsmasq-dns-64cc7f6975-jn6mr\" (UID: \"529f52d4-35e7-4121-899e-0e94d628f72c\") " pod="openstack/dnsmasq-dns-64cc7f6975-jn6mr" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.753932 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/529f52d4-35e7-4121-899e-0e94d628f72c-config\") pod \"dnsmasq-dns-64cc7f6975-jn6mr\" (UID: \"529f52d4-35e7-4121-899e-0e94d628f72c\") " pod="openstack/dnsmasq-dns-64cc7f6975-jn6mr" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.754068 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/529f52d4-35e7-4121-899e-0e94d628f72c-dns-swift-storage-0\") pod \"dnsmasq-dns-64cc7f6975-jn6mr\" (UID: \"529f52d4-35e7-4121-899e-0e94d628f72c\") " pod="openstack/dnsmasq-dns-64cc7f6975-jn6mr" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.762042 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/529f52d4-35e7-4121-899e-0e94d628f72c-ovsdbserver-sb\") pod \"dnsmasq-dns-64cc7f6975-jn6mr\" (UID: \"529f52d4-35e7-4121-899e-0e94d628f72c\") " pod="openstack/dnsmasq-dns-64cc7f6975-jn6mr" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.773515 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfdsf\" (UniqueName: 
\"kubernetes.io/projected/529f52d4-35e7-4121-899e-0e94d628f72c-kube-api-access-rfdsf\") pod \"dnsmasq-dns-64cc7f6975-jn6mr\" (UID: \"529f52d4-35e7-4121-899e-0e94d628f72c\") " pod="openstack/dnsmasq-dns-64cc7f6975-jn6mr" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.853838 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8\") " pod="openstack/cinder-api-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.853918 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-config-data\") pod \"cinder-api-0\" (UID: \"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8\") " pod="openstack/cinder-api-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.853959 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kc8r\" (UniqueName: \"kubernetes.io/projected/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-kube-api-access-2kc8r\") pod \"cinder-api-0\" (UID: \"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8\") " pod="openstack/cinder-api-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.854003 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-scripts\") pod \"cinder-api-0\" (UID: \"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8\") " pod="openstack/cinder-api-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.854043 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-logs\") pod \"cinder-api-0\" (UID: \"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8\") " pod="openstack/cinder-api-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.854090 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8\") " pod="openstack/cinder-api-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.854136 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-config-data-custom\") pod \"cinder-api-0\" (UID: \"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8\") " pod="openstack/cinder-api-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.855682 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8\") " pod="openstack/cinder-api-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.856414 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-logs\") pod \"cinder-api-0\" (UID: \"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8\") " pod="openstack/cinder-api-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.865953 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-scripts\") pod \"cinder-api-0\" (UID: \"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8\") " pod="openstack/cinder-api-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.865963 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8\") " pod="openstack/cinder-api-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.873599 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-config-data-custom\") pod \"cinder-api-0\" (UID: \"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8\") " pod="openstack/cinder-api-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.876122 4681 scope.go:117] "RemoveContainer" containerID="8b83e6b57ed80aa6780ce5641bbb95a07b50733d3dccf25e7ab868a5610bfc13" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.876587 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-config-data\") pod \"cinder-api-0\" (UID: \"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8\") " pod="openstack/cinder-api-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.922002 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kc8r\" (UniqueName: \"kubernetes.io/projected/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-kube-api-access-2kc8r\") pod \"cinder-api-0\" (UID: \"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8\") " pod="openstack/cinder-api-0" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.929221 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64cc7f6975-jn6mr" Nov 23 06:59:57 crc kubenswrapper[4681]: I1123 06:59:57.963223 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 23 06:59:58 crc kubenswrapper[4681]: I1123 06:59:58.242414 4681 generic.go:334] "Generic (PLEG): container finished" podID="2483649a-baa7-4c82-92d5-b3e2aff97ab2" containerID="a63634c93c0ad1f2f16c98338b64bb42db6d8a79f4cc2ea7ad7f27f4eecebb8a" exitCode=0 Nov 23 06:59:58 crc kubenswrapper[4681]: I1123 06:59:58.242507 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2483649a-baa7-4c82-92d5-b3e2aff97ab2","Type":"ContainerDied","Data":"a63634c93c0ad1f2f16c98338b64bb42db6d8a79f4cc2ea7ad7f27f4eecebb8a"} Nov 23 06:59:58 crc kubenswrapper[4681]: I1123 06:59:58.246987 4681 generic.go:334] "Generic (PLEG): container finished" podID="2483649a-baa7-4c82-92d5-b3e2aff97ab2" containerID="0ff583c9c29e694a7b38e9392f3f10523c52cac8769bcde58a4b7e50ffde47c2" exitCode=2 Nov 23 06:59:58 crc kubenswrapper[4681]: I1123 06:59:58.247038 4681 generic.go:334] "Generic (PLEG): container finished" podID="2483649a-baa7-4c82-92d5-b3e2aff97ab2" containerID="7bf62d391c99d2c553a79853ac349df2afdefe4ce3af717f8c6fe444384be9ec" exitCode=0 Nov 23 06:59:58 crc kubenswrapper[4681]: I1123 06:59:58.247054 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2483649a-baa7-4c82-92d5-b3e2aff97ab2","Type":"ContainerDied","Data":"0ff583c9c29e694a7b38e9392f3f10523c52cac8769bcde58a4b7e50ffde47c2"} Nov 23 06:59:58 crc kubenswrapper[4681]: I1123 06:59:58.247143 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2483649a-baa7-4c82-92d5-b3e2aff97ab2","Type":"ContainerDied","Data":"7bf62d391c99d2c553a79853ac349df2afdefe4ce3af717f8c6fe444384be9ec"} Nov 23 06:59:58 crc kubenswrapper[4681]: I1123 06:59:58.275162 4681 generic.go:334] "Generic (PLEG): container finished" podID="23e6ab25-e753-4758-a79d-f89855309d8d" containerID="92a1c873578f60bf08fa328e4d14ef952a3e03bcfcf2ae675833bc7a0ba03797" exitCode=0 Nov 23 06:59:58 crc kubenswrapper[4681]: I1123 06:59:58.275211 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-597c64895-s6nch" event={"ID":"23e6ab25-e753-4758-a79d-f89855309d8d","Type":"ContainerDied","Data":"92a1c873578f60bf08fa328e4d14ef952a3e03bcfcf2ae675833bc7a0ba03797"} Nov 23 06:59:58 crc kubenswrapper[4681]: I1123 06:59:58.284569 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b7dd84c8b-57zgx" event={"ID":"69085e5b-69b6-421a-aaa2-066bb27620d1","Type":"ContainerStarted","Data":"cd9f4003745c4660526017ba2b0bbccbdfea15e35635e9d26d3040c33be614f7"} Nov 23 06:59:58 crc kubenswrapper[4681]: I1123 06:59:58.284606 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b7dd84c8b-57zgx" event={"ID":"69085e5b-69b6-421a-aaa2-066bb27620d1","Type":"ContainerStarted","Data":"4a5fd6c13b30d7ae71c09804b99c608e7d99ba6e1bb6297b100101dd5055af50"} Nov 23 06:59:58 crc kubenswrapper[4681]: I1123 06:59:58.287906 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bb6dddd54-bttkq" event={"ID":"97338e7f-0f80-4f47-905f-59df8aef837b","Type":"ContainerStarted","Data":"3633fb223f3e68fe2602637082e2017e270a3ef6347d5aee46024a89ba1c39db"} Nov 23 06:59:58 crc kubenswrapper[4681]: I1123 06:59:58.287959 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bb6dddd54-bttkq" event={"ID":"97338e7f-0f80-4f47-905f-59df8aef837b","Type":"ContainerStarted","Data":"3ff02afb8eae34792434ff80f493a342af21818e36b1fa8b0a85c80d7936bfe9"} Nov 23 
06:59:58 crc kubenswrapper[4681]: I1123 06:59:58.331046 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 06:59:58 crc kubenswrapper[4681]: I1123 06:59:58.515439 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64cc7f6975-jn6mr"] Nov 23 06:59:58 crc kubenswrapper[4681]: I1123 06:59:58.745037 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 23 06:59:58 crc kubenswrapper[4681]: E1123 06:59:58.750294 4681 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Nov 23 06:59:58 crc kubenswrapper[4681]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/23e6ab25-e753-4758-a79d-f89855309d8d/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Nov 23 06:59:58 crc kubenswrapper[4681]: > podSandboxID="fe3426299f914021876f2608e6e71034126b96bff0ea9f200ea76ceac722f940" Nov 23 06:59:58 crc kubenswrapper[4681]: E1123 06:59:58.750489 4681 kuberuntime_manager.go:1274] "Unhandled Error" err=< Nov 23 06:59:58 crc kubenswrapper[4681]: container &Container{Name:dnsmasq-dns,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:8e43c662a6abf8c9a07ada252f8dc6af,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7ch57ch5c5hcch589hf7h577h659h96h5c8h5b4h55fhbbh667h565h5bchcbh58dh7dh5bch586h56ch574h598h67dh5c8h56dh8bh574h564hbch7q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-swift-storage-0,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-swift-storage-0,SubPath:dns-swift-storage-0,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fqp98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-597c64895-s6nch_openstack(23e6ab25-e753-4758-a79d-f89855309d8d): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/23e6ab25-e753-4758-a79d-f89855309d8d/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Nov 23 06:59:58 crc kubenswrapper[4681]: > logger="UnhandledError" Nov 23 06:59:58 crc kubenswrapper[4681]: E1123 06:59:58.752196 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/23e6ab25-e753-4758-a79d-f89855309d8d/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-597c64895-s6nch" podUID="23e6ab25-e753-4758-a79d-f89855309d8d" Nov 23 06:59:59 crc kubenswrapper[4681]: I1123 06:59:59.103607 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-fcdb4576d-g8stp" podUID="bdfa433c-2b77-4373-877f-5c92a2b39fb8" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.156:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.156:8443: connect: connection refused" Nov 23 06:59:59 crc kubenswrapper[4681]: I1123 06:59:59.103685 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fcdb4576d-g8stp" Nov 23 06:59:59 crc kubenswrapper[4681]: I1123 06:59:59.104170 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"f940cdcb178170ebf29c7591f70bc1b658fd92fed2c294459eb2f16f26d69ceb"} pod="openstack/horizon-fcdb4576d-g8stp" containerMessage="Container horizon failed startup probe, will be restarted" Nov 23 06:59:59 crc kubenswrapper[4681]: I1123 06:59:59.104203 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-fcdb4576d-g8stp" podUID="bdfa433c-2b77-4373-877f-5c92a2b39fb8" containerName="horizon" containerID="cri-o://f940cdcb178170ebf29c7591f70bc1b658fd92fed2c294459eb2f16f26d69ceb" gracePeriod=30 Nov 23 06:59:59 crc kubenswrapper[4681]: I1123 06:59:59.297548 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="203e0f9e-791d-4b8e-9521-b7b334fcacf6" path="/var/lib/kubelet/pods/203e0f9e-791d-4b8e-9521-b7b334fcacf6/volumes" Nov 23 06:59:59 crc kubenswrapper[4681]: I1123 06:59:59.298199 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f95ab62-e0ad-4566-bbfd-29e2ad374edf" path="/var/lib/kubelet/pods/2f95ab62-e0ad-4566-bbfd-29e2ad374edf/volumes" Nov 23 06:59:59 crc kubenswrapper[4681]: I1123 06:59:59.314814 4681 generic.go:334] "Generic (PLEG): 
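[editor's note] The three E-level blocks above record one failure: CRI-O could not (re)create the dnsmasq-dns container because the subPath bind-mount source under /var/lib/kubelet/pods/23e6ab25-.../volume-subpaths/dns-svc/dnsmasq-dns/1 no longer existed (the target path in the error is reported relative to the container rootfs, hence the missing leading slash). This is consistent with the pod being replaced: the new dnsmasq-dns-64cc7f6975-jn6mr pod is already syncing, and the old pod is DELETEd/REMOVEd shortly after. As a minimal sketch, assuming the standard k8s.io/api/core/v1 types, the mount in question (field values taken from the Container dump above, everything else assumed) is:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	m := corev1.VolumeMount{
		Name:      "dns-svc",                      // ConfigMap-backed volume from the dumped spec
		ReadOnly:  true,
		MountPath: "/etc/dnsmasq.d/hosts/dns-svc", // path inside the container
		SubPath:   "dns-svc",                      // single key projected out of the volume
	}
	// For subPath mounts the kubelet prepares a bind-mount source under
	// /var/lib/kubelet/pods/<podUID>/volume-subpaths/<volume>/<container>/<n>.
	// If the volume contents are regenerated while the old pod is being torn
	// down, that source can vanish and container creation fails with ENOENT,
	// which matches the CreateContainerError above.
	fmt.Printf("%+v\n", m)
}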
container finished" podID="529f52d4-35e7-4121-899e-0e94d628f72c" containerID="b93055c7a819e65f485bc23a27ee6221af485e31686dca1307fc79f19046ed48" exitCode=0 Nov 23 06:59:59 crc kubenswrapper[4681]: I1123 06:59:59.314876 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64cc7f6975-jn6mr" event={"ID":"529f52d4-35e7-4121-899e-0e94d628f72c","Type":"ContainerDied","Data":"b93055c7a819e65f485bc23a27ee6221af485e31686dca1307fc79f19046ed48"} Nov 23 06:59:59 crc kubenswrapper[4681]: I1123 06:59:59.314901 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64cc7f6975-jn6mr" event={"ID":"529f52d4-35e7-4121-899e-0e94d628f72c","Type":"ContainerStarted","Data":"8dd6d6bdad85e78075df5cf7f32b878bc7c52aa93c714e6f50a549567c918a99"} Nov 23 06:59:59 crc kubenswrapper[4681]: I1123 06:59:59.320954 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d8405966-0c4a-42eb-bed4-6f6ae19bff63","Type":"ContainerStarted","Data":"0150ea11abf5cebed2f2443922f278ad975507f1038835613a985212f6c257b5"} Nov 23 06:59:59 crc kubenswrapper[4681]: I1123 06:59:59.333591 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8","Type":"ContainerStarted","Data":"7f274ff9df459f3e11a66824db9b7ef955cfd3d5e6944531d432b95f4726c090"} Nov 23 06:59:59 crc kubenswrapper[4681]: I1123 06:59:59.341246 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b7dd84c8b-57zgx" event={"ID":"69085e5b-69b6-421a-aaa2-066bb27620d1","Type":"ContainerStarted","Data":"2730fa0e24f8c16c99a0ee89215fb0fe9ea622bb56094f23487939103a76e498"} Nov 23 06:59:59 crc kubenswrapper[4681]: I1123 06:59:59.341411 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6b7dd84c8b-57zgx" Nov 23 06:59:59 crc kubenswrapper[4681]: I1123 06:59:59.341470 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6b7dd84c8b-57zgx" Nov 23 06:59:59 crc kubenswrapper[4681]: I1123 06:59:59.347992 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bb6dddd54-bttkq" event={"ID":"97338e7f-0f80-4f47-905f-59df8aef837b","Type":"ContainerStarted","Data":"686d1e2adbe17aaea30b12fb22e427bff70017300747d7a15e4b5854e1e362bd"} Nov 23 06:59:59 crc kubenswrapper[4681]: I1123 06:59:59.386327 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6b7dd84c8b-57zgx" podStartSLOduration=7.386310435 podStartE2EDuration="7.386310435s" podCreationTimestamp="2025-11-23 06:59:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:59:59.377156569 +0000 UTC m=+936.446665796" watchObservedRunningTime="2025-11-23 06:59:59.386310435 +0000 UTC m=+936.455819672" Nov 23 06:59:59 crc kubenswrapper[4681]: I1123 06:59:59.443718 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6bb6dddd54-bttkq" podStartSLOduration=10.443694825 podStartE2EDuration="10.443694825s" podCreationTimestamp="2025-11-23 06:59:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:59:59.410895373 +0000 UTC m=+936.480404611" watchObservedRunningTime="2025-11-23 06:59:59.443694825 +0000 UTC m=+936.513204062" Nov 23 06:59:59 crc kubenswrapper[4681]: 
Nov 23 06:59:59 crc kubenswrapper[4681]: I1123 06:59:59.297548 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="203e0f9e-791d-4b8e-9521-b7b334fcacf6" path="/var/lib/kubelet/pods/203e0f9e-791d-4b8e-9521-b7b334fcacf6/volumes"
Nov 23 06:59:59 crc kubenswrapper[4681]: I1123 06:59:59.298199 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f95ab62-e0ad-4566-bbfd-29e2ad374edf" path="/var/lib/kubelet/pods/2f95ab62-e0ad-4566-bbfd-29e2ad374edf/volumes"
Nov 23 06:59:59 crc kubenswrapper[4681]: I1123 06:59:59.314814 4681 generic.go:334] "Generic (PLEG): container finished" podID="529f52d4-35e7-4121-899e-0e94d628f72c" containerID="b93055c7a819e65f485bc23a27ee6221af485e31686dca1307fc79f19046ed48" exitCode=0
Nov 23 06:59:59 crc kubenswrapper[4681]: I1123 06:59:59.314876 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64cc7f6975-jn6mr" event={"ID":"529f52d4-35e7-4121-899e-0e94d628f72c","Type":"ContainerDied","Data":"b93055c7a819e65f485bc23a27ee6221af485e31686dca1307fc79f19046ed48"}
Nov 23 06:59:59 crc kubenswrapper[4681]: I1123 06:59:59.314901 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64cc7f6975-jn6mr" event={"ID":"529f52d4-35e7-4121-899e-0e94d628f72c","Type":"ContainerStarted","Data":"8dd6d6bdad85e78075df5cf7f32b878bc7c52aa93c714e6f50a549567c918a99"}
Nov 23 06:59:59 crc kubenswrapper[4681]: I1123 06:59:59.320954 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d8405966-0c4a-42eb-bed4-6f6ae19bff63","Type":"ContainerStarted","Data":"0150ea11abf5cebed2f2443922f278ad975507f1038835613a985212f6c257b5"}
Nov 23 06:59:59 crc kubenswrapper[4681]: I1123 06:59:59.333591 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8","Type":"ContainerStarted","Data":"7f274ff9df459f3e11a66824db9b7ef955cfd3d5e6944531d432b95f4726c090"}
Nov 23 06:59:59 crc kubenswrapper[4681]: I1123 06:59:59.341246 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b7dd84c8b-57zgx" event={"ID":"69085e5b-69b6-421a-aaa2-066bb27620d1","Type":"ContainerStarted","Data":"2730fa0e24f8c16c99a0ee89215fb0fe9ea622bb56094f23487939103a76e498"}
Nov 23 06:59:59 crc kubenswrapper[4681]: I1123 06:59:59.341411 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6b7dd84c8b-57zgx"
Nov 23 06:59:59 crc kubenswrapper[4681]: I1123 06:59:59.341470 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6b7dd84c8b-57zgx"
Nov 23 06:59:59 crc kubenswrapper[4681]: I1123 06:59:59.347992 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bb6dddd54-bttkq" event={"ID":"97338e7f-0f80-4f47-905f-59df8aef837b","Type":"ContainerStarted","Data":"686d1e2adbe17aaea30b12fb22e427bff70017300747d7a15e4b5854e1e362bd"}
Nov 23 06:59:59 crc kubenswrapper[4681]: I1123 06:59:59.386327 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6b7dd84c8b-57zgx" podStartSLOduration=7.386310435 podStartE2EDuration="7.386310435s" podCreationTimestamp="2025-11-23 06:59:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:59:59.377156569 +0000 UTC m=+936.446665796" watchObservedRunningTime="2025-11-23 06:59:59.386310435 +0000 UTC m=+936.455819672"
Nov 23 06:59:59 crc kubenswrapper[4681]: I1123 06:59:59.443718 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6bb6dddd54-bttkq" podStartSLOduration=10.443694825 podStartE2EDuration="10.443694825s" podCreationTimestamp="2025-11-23 06:59:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:59:59.410895373 +0000 UTC m=+936.480404611" watchObservedRunningTime="2025-11-23 06:59:59.443694825 +0000 UTC m=+936.513204062"
Nov 23 06:59:59 crc kubenswrapper[4681]: I1123 06:59:59.901768 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6bb6dddd54-bttkq"
Nov 23 06:59:59 crc kubenswrapper[4681]: I1123 06:59:59.902029 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6bb6dddd54-bttkq"
Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.132530 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398020-n9kl4"]
Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.133616 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-n9kl4"
Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.136446 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.136709 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.152138 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398020-n9kl4"]
Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.224943 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5kxl\" (UniqueName: \"kubernetes.io/projected/c443c21f-e6ff-4f01-a598-554f97be2872-kube-api-access-g5kxl\") pod \"collect-profiles-29398020-n9kl4\" (UID: \"c443c21f-e6ff-4f01-a598-554f97be2872\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-n9kl4"
Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.225244 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c443c21f-e6ff-4f01-a598-554f97be2872-secret-volume\") pod \"collect-profiles-29398020-n9kl4\" (UID: \"c443c21f-e6ff-4f01-a598-554f97be2872\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-n9kl4"
Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.225329 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c443c21f-e6ff-4f01-a598-554f97be2872-config-volume\") pod \"collect-profiles-29398020-n9kl4\" (UID: \"c443c21f-e6ff-4f01-a598-554f97be2872\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-n9kl4"
Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.296320 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-597c64895-s6nch"
Need to start a new one" pod="openstack/dnsmasq-dns-597c64895-s6nch" Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.340174 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c443c21f-e6ff-4f01-a598-554f97be2872-config-volume\") pod \"collect-profiles-29398020-n9kl4\" (UID: \"c443c21f-e6ff-4f01-a598-554f97be2872\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-n9kl4" Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.341122 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5kxl\" (UniqueName: \"kubernetes.io/projected/c443c21f-e6ff-4f01-a598-554f97be2872-kube-api-access-g5kxl\") pod \"collect-profiles-29398020-n9kl4\" (UID: \"c443c21f-e6ff-4f01-a598-554f97be2872\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-n9kl4" Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.341476 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c443c21f-e6ff-4f01-a598-554f97be2872-config-volume\") pod \"collect-profiles-29398020-n9kl4\" (UID: \"c443c21f-e6ff-4f01-a598-554f97be2872\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-n9kl4" Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.342731 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c443c21f-e6ff-4f01-a598-554f97be2872-secret-volume\") pod \"collect-profiles-29398020-n9kl4\" (UID: \"c443c21f-e6ff-4f01-a598-554f97be2872\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-n9kl4" Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.356137 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c443c21f-e6ff-4f01-a598-554f97be2872-secret-volume\") pod \"collect-profiles-29398020-n9kl4\" (UID: \"c443c21f-e6ff-4f01-a598-554f97be2872\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-n9kl4" Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.359133 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5kxl\" (UniqueName: \"kubernetes.io/projected/c443c21f-e6ff-4f01-a598-554f97be2872-kube-api-access-g5kxl\") pod \"collect-profiles-29398020-n9kl4\" (UID: \"c443c21f-e6ff-4f01-a598-554f97be2872\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-n9kl4" Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.406678 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-597c64895-s6nch" event={"ID":"23e6ab25-e753-4758-a79d-f89855309d8d","Type":"ContainerDied","Data":"fe3426299f914021876f2608e6e71034126b96bff0ea9f200ea76ceac722f940"} Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.406831 4681 scope.go:117] "RemoveContainer" containerID="92a1c873578f60bf08fa328e4d14ef952a3e03bcfcf2ae675833bc7a0ba03797" Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.406996 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-597c64895-s6nch" Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.427694 4681 generic.go:334] "Generic (PLEG): container finished" podID="2483649a-baa7-4c82-92d5-b3e2aff97ab2" containerID="e33218e2cdaab185b40249e2d9e91fa0508971e75bd3af0e4e4904a08838eb75" exitCode=0 Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.427790 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2483649a-baa7-4c82-92d5-b3e2aff97ab2","Type":"ContainerDied","Data":"e33218e2cdaab185b40249e2d9e91fa0508971e75bd3af0e4e4904a08838eb75"} Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.430429 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8","Type":"ContainerStarted","Data":"78b6577554c52da03eeedb546e474d0478caa8ebec7ae60b7649f901ac7d541d"} Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.444952 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23e6ab25-e753-4758-a79d-f89855309d8d-dns-svc\") pod \"23e6ab25-e753-4758-a79d-f89855309d8d\" (UID: \"23e6ab25-e753-4758-a79d-f89855309d8d\") " Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.445027 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23e6ab25-e753-4758-a79d-f89855309d8d-config\") pod \"23e6ab25-e753-4758-a79d-f89855309d8d\" (UID: \"23e6ab25-e753-4758-a79d-f89855309d8d\") " Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.445117 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23e6ab25-e753-4758-a79d-f89855309d8d-ovsdbserver-nb\") pod \"23e6ab25-e753-4758-a79d-f89855309d8d\" (UID: \"23e6ab25-e753-4758-a79d-f89855309d8d\") " Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.445182 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23e6ab25-e753-4758-a79d-f89855309d8d-dns-swift-storage-0\") pod \"23e6ab25-e753-4758-a79d-f89855309d8d\" (UID: \"23e6ab25-e753-4758-a79d-f89855309d8d\") " Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.445219 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqp98\" (UniqueName: \"kubernetes.io/projected/23e6ab25-e753-4758-a79d-f89855309d8d-kube-api-access-fqp98\") pod \"23e6ab25-e753-4758-a79d-f89855309d8d\" (UID: \"23e6ab25-e753-4758-a79d-f89855309d8d\") " Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.445248 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23e6ab25-e753-4758-a79d-f89855309d8d-ovsdbserver-sb\") pod \"23e6ab25-e753-4758-a79d-f89855309d8d\" (UID: \"23e6ab25-e753-4758-a79d-f89855309d8d\") " Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.453980 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23e6ab25-e753-4758-a79d-f89855309d8d-kube-api-access-fqp98" (OuterVolumeSpecName: "kube-api-access-fqp98") pod "23e6ab25-e753-4758-a79d-f89855309d8d" (UID: "23e6ab25-e753-4758-a79d-f89855309d8d"). InnerVolumeSpecName "kube-api-access-fqp98". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.494032 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23e6ab25-e753-4758-a79d-f89855309d8d-config" (OuterVolumeSpecName: "config") pod "23e6ab25-e753-4758-a79d-f89855309d8d" (UID: "23e6ab25-e753-4758-a79d-f89855309d8d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.500677 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23e6ab25-e753-4758-a79d-f89855309d8d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "23e6ab25-e753-4758-a79d-f89855309d8d" (UID: "23e6ab25-e753-4758-a79d-f89855309d8d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.506157 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23e6ab25-e753-4758-a79d-f89855309d8d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "23e6ab25-e753-4758-a79d-f89855309d8d" (UID: "23e6ab25-e753-4758-a79d-f89855309d8d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.508551 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23e6ab25-e753-4758-a79d-f89855309d8d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "23e6ab25-e753-4758-a79d-f89855309d8d" (UID: "23e6ab25-e753-4758-a79d-f89855309d8d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.526719 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23e6ab25-e753-4758-a79d-f89855309d8d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "23e6ab25-e753-4758-a79d-f89855309d8d" (UID: "23e6ab25-e753-4758-a79d-f89855309d8d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.530256 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.548514 4681 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23e6ab25-e753-4758-a79d-f89855309d8d-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.548545 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23e6ab25-e753-4758-a79d-f89855309d8d-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.548555 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23e6ab25-e753-4758-a79d-f89855309d8d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.548567 4681 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23e6ab25-e753-4758-a79d-f89855309d8d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.548577 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqp98\" (UniqueName: \"kubernetes.io/projected/23e6ab25-e753-4758-a79d-f89855309d8d-kube-api-access-fqp98\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.548587 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23e6ab25-e753-4758-a79d-f89855309d8d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.591063 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-n9kl4" Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.777105 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-597c64895-s6nch"] Nov 23 07:00:00 crc kubenswrapper[4681]: I1123 07:00:00.782237 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-597c64895-s6nch"] Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.100718 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.165122 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2483649a-baa7-4c82-92d5-b3e2aff97ab2-config-data\") pod \"2483649a-baa7-4c82-92d5-b3e2aff97ab2\" (UID: \"2483649a-baa7-4c82-92d5-b3e2aff97ab2\") " Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.165434 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2483649a-baa7-4c82-92d5-b3e2aff97ab2-log-httpd\") pod \"2483649a-baa7-4c82-92d5-b3e2aff97ab2\" (UID: \"2483649a-baa7-4c82-92d5-b3e2aff97ab2\") " Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.165595 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2483649a-baa7-4c82-92d5-b3e2aff97ab2-scripts\") pod \"2483649a-baa7-4c82-92d5-b3e2aff97ab2\" (UID: \"2483649a-baa7-4c82-92d5-b3e2aff97ab2\") " Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.165667 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2483649a-baa7-4c82-92d5-b3e2aff97ab2-run-httpd\") pod \"2483649a-baa7-4c82-92d5-b3e2aff97ab2\" (UID: \"2483649a-baa7-4c82-92d5-b3e2aff97ab2\") " Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.165686 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2483649a-baa7-4c82-92d5-b3e2aff97ab2-combined-ca-bundle\") pod \"2483649a-baa7-4c82-92d5-b3e2aff97ab2\" (UID: \"2483649a-baa7-4c82-92d5-b3e2aff97ab2\") " Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.165716 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2483649a-baa7-4c82-92d5-b3e2aff97ab2-sg-core-conf-yaml\") pod \"2483649a-baa7-4c82-92d5-b3e2aff97ab2\" (UID: \"2483649a-baa7-4c82-92d5-b3e2aff97ab2\") " Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.165766 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2gxn\" (UniqueName: \"kubernetes.io/projected/2483649a-baa7-4c82-92d5-b3e2aff97ab2-kube-api-access-m2gxn\") pod \"2483649a-baa7-4c82-92d5-b3e2aff97ab2\" (UID: \"2483649a-baa7-4c82-92d5-b3e2aff97ab2\") " Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.174392 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2483649a-baa7-4c82-92d5-b3e2aff97ab2-scripts" (OuterVolumeSpecName: "scripts") pod "2483649a-baa7-4c82-92d5-b3e2aff97ab2" (UID: "2483649a-baa7-4c82-92d5-b3e2aff97ab2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.183998 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2483649a-baa7-4c82-92d5-b3e2aff97ab2-kube-api-access-m2gxn" (OuterVolumeSpecName: "kube-api-access-m2gxn") pod "2483649a-baa7-4c82-92d5-b3e2aff97ab2" (UID: "2483649a-baa7-4c82-92d5-b3e2aff97ab2"). InnerVolumeSpecName "kube-api-access-m2gxn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.188694 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2483649a-baa7-4c82-92d5-b3e2aff97ab2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2483649a-baa7-4c82-92d5-b3e2aff97ab2" (UID: "2483649a-baa7-4c82-92d5-b3e2aff97ab2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.198187 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2483649a-baa7-4c82-92d5-b3e2aff97ab2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2483649a-baa7-4c82-92d5-b3e2aff97ab2" (UID: "2483649a-baa7-4c82-92d5-b3e2aff97ab2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.241115 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2483649a-baa7-4c82-92d5-b3e2aff97ab2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2483649a-baa7-4c82-92d5-b3e2aff97ab2" (UID: "2483649a-baa7-4c82-92d5-b3e2aff97ab2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.280033 4681 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2483649a-baa7-4c82-92d5-b3e2aff97ab2-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.280059 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2483649a-baa7-4c82-92d5-b3e2aff97ab2-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.280069 4681 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2483649a-baa7-4c82-92d5-b3e2aff97ab2-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.280081 4681 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2483649a-baa7-4c82-92d5-b3e2aff97ab2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.280091 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2gxn\" (UniqueName: \"kubernetes.io/projected/2483649a-baa7-4c82-92d5-b3e2aff97ab2-kube-api-access-m2gxn\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.343734 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23e6ab25-e753-4758-a79d-f89855309d8d" path="/var/lib/kubelet/pods/23e6ab25-e753-4758-a79d-f89855309d8d/volumes" Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.376024 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2483649a-baa7-4c82-92d5-b3e2aff97ab2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2483649a-baa7-4c82-92d5-b3e2aff97ab2" (UID: "2483649a-baa7-4c82-92d5-b3e2aff97ab2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.385266 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398020-n9kl4"] Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.385943 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2483649a-baa7-4c82-92d5-b3e2aff97ab2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.396783 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2483649a-baa7-4c82-92d5-b3e2aff97ab2-config-data" (OuterVolumeSpecName: "config-data") pod "2483649a-baa7-4c82-92d5-b3e2aff97ab2" (UID: "2483649a-baa7-4c82-92d5-b3e2aff97ab2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.487685 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2483649a-baa7-4c82-92d5-b3e2aff97ab2-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.491624 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2483649a-baa7-4c82-92d5-b3e2aff97ab2","Type":"ContainerDied","Data":"32f56c1417d6210127e2cf39c10f743cc7ab6427cd933fde74c872b1db3e1ae0"} Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.491675 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.491696 4681 scope.go:117] "RemoveContainer" containerID="a63634c93c0ad1f2f16c98338b64bb42db6d8a79f4cc2ea7ad7f27f4eecebb8a" Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.499697 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-n9kl4" event={"ID":"c443c21f-e6ff-4f01-a598-554f97be2872","Type":"ContainerStarted","Data":"5c7d2c898f67cd22313ca3ea16702a7ab2fe3ad5eac75647b2290d6df66a71d2"} Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.510640 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64cc7f6975-jn6mr" event={"ID":"529f52d4-35e7-4121-899e-0e94d628f72c","Type":"ContainerStarted","Data":"71e11d33c28ac955113b26e7578c58e25494627cc10819703a1770c3464f7273"} Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.510690 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-64cc7f6975-jn6mr" Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.529006 4681 scope.go:117] "RemoveContainer" containerID="0ff583c9c29e694a7b38e9392f3f10523c52cac8769bcde58a4b7e50ffde47c2" Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.536063 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-64cc7f6975-jn6mr" podStartSLOduration=4.53604438 podStartE2EDuration="4.53604438s" podCreationTimestamp="2025-11-23 06:59:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:00:01.529174084 +0000 UTC m=+938.598683312" watchObservedRunningTime="2025-11-23 07:00:01.53604438 +0000 UTC m=+938.605553616" Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.565721 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ceilometer-0"] Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.620695 4681 scope.go:117] "RemoveContainer" containerID="e33218e2cdaab185b40249e2d9e91fa0508971e75bd3af0e4e4904a08838eb75" Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.623045 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.655285 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:00:01 crc kubenswrapper[4681]: E1123 07:00:01.655936 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2483649a-baa7-4c82-92d5-b3e2aff97ab2" containerName="ceilometer-notification-agent" Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.655960 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="2483649a-baa7-4c82-92d5-b3e2aff97ab2" containerName="ceilometer-notification-agent" Nov 23 07:00:01 crc kubenswrapper[4681]: E1123 07:00:01.655980 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2483649a-baa7-4c82-92d5-b3e2aff97ab2" containerName="sg-core" Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.655989 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="2483649a-baa7-4c82-92d5-b3e2aff97ab2" containerName="sg-core" Nov 23 07:00:01 crc kubenswrapper[4681]: E1123 07:00:01.656011 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2483649a-baa7-4c82-92d5-b3e2aff97ab2" containerName="proxy-httpd" Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.656020 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="2483649a-baa7-4c82-92d5-b3e2aff97ab2" containerName="proxy-httpd" Nov 23 07:00:01 crc kubenswrapper[4681]: E1123 07:00:01.656036 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2483649a-baa7-4c82-92d5-b3e2aff97ab2" containerName="ceilometer-central-agent" Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.656044 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="2483649a-baa7-4c82-92d5-b3e2aff97ab2" containerName="ceilometer-central-agent" Nov 23 07:00:01 crc kubenswrapper[4681]: E1123 07:00:01.656054 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23e6ab25-e753-4758-a79d-f89855309d8d" containerName="init" Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.656208 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="23e6ab25-e753-4758-a79d-f89855309d8d" containerName="init" Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.656452 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="23e6ab25-e753-4758-a79d-f89855309d8d" containerName="init" Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.656499 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="2483649a-baa7-4c82-92d5-b3e2aff97ab2" containerName="ceilometer-notification-agent" Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.656528 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="2483649a-baa7-4c82-92d5-b3e2aff97ab2" containerName="sg-core" Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.656535 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="2483649a-baa7-4c82-92d5-b3e2aff97ab2" containerName="proxy-httpd" Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.656543 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="2483649a-baa7-4c82-92d5-b3e2aff97ab2" containerName="ceilometer-central-agent" Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 
Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.663332 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.666736 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.667192 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.699444 4681 scope.go:117] "RemoveContainer" containerID="7bf62d391c99d2c553a79853ac349df2afdefe4ce3af717f8c6fe444384be9ec"
Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.707282 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.797580 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxstl\" (UniqueName: \"kubernetes.io/projected/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-kube-api-access-xxstl\") pod \"ceilometer-0\" (UID: \"3267d91e-9fae-46d3-ba7e-4d06fbe83e00\") " pod="openstack/ceilometer-0"
Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.797677 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-config-data\") pod \"ceilometer-0\" (UID: \"3267d91e-9fae-46d3-ba7e-4d06fbe83e00\") " pod="openstack/ceilometer-0"
Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.797724 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-scripts\") pod \"ceilometer-0\" (UID: \"3267d91e-9fae-46d3-ba7e-4d06fbe83e00\") " pod="openstack/ceilometer-0"
Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.797766 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3267d91e-9fae-46d3-ba7e-4d06fbe83e00\") " pod="openstack/ceilometer-0"
Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.797791 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-run-httpd\") pod \"ceilometer-0\" (UID: \"3267d91e-9fae-46d3-ba7e-4d06fbe83e00\") " pod="openstack/ceilometer-0"
Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.798179 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3267d91e-9fae-46d3-ba7e-4d06fbe83e00\") " pod="openstack/ceilometer-0"
Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.798233 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-log-httpd\") pod \"ceilometer-0\" (UID: \"3267d91e-9fae-46d3-ba7e-4d06fbe83e00\") " pod="openstack/ceilometer-0"
Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.899706 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxstl\" (UniqueName: \"kubernetes.io/projected/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-kube-api-access-xxstl\") pod \"ceilometer-0\" (UID: \"3267d91e-9fae-46d3-ba7e-4d06fbe83e00\") " pod="openstack/ceilometer-0"
Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.899969 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-config-data\") pod \"ceilometer-0\" (UID: \"3267d91e-9fae-46d3-ba7e-4d06fbe83e00\") " pod="openstack/ceilometer-0"
Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.900002 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-scripts\") pod \"ceilometer-0\" (UID: \"3267d91e-9fae-46d3-ba7e-4d06fbe83e00\") " pod="openstack/ceilometer-0"
Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.900050 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3267d91e-9fae-46d3-ba7e-4d06fbe83e00\") " pod="openstack/ceilometer-0"
Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.900072 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-run-httpd\") pod \"ceilometer-0\" (UID: \"3267d91e-9fae-46d3-ba7e-4d06fbe83e00\") " pod="openstack/ceilometer-0"
Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.900089 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3267d91e-9fae-46d3-ba7e-4d06fbe83e00\") " pod="openstack/ceilometer-0"
Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.900122 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-log-httpd\") pod \"ceilometer-0\" (UID: \"3267d91e-9fae-46d3-ba7e-4d06fbe83e00\") " pod="openstack/ceilometer-0"
Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.900590 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-log-httpd\") pod \"ceilometer-0\" (UID: \"3267d91e-9fae-46d3-ba7e-4d06fbe83e00\") " pod="openstack/ceilometer-0"
Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.901030 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-run-httpd\") pod \"ceilometer-0\" (UID: \"3267d91e-9fae-46d3-ba7e-4d06fbe83e00\") " pod="openstack/ceilometer-0"
Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.914719 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3267d91e-9fae-46d3-ba7e-4d06fbe83e00\") " pod="openstack/ceilometer-0"
Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.919270 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-config-data\") pod \"ceilometer-0\" (UID: \"3267d91e-9fae-46d3-ba7e-4d06fbe83e00\") " pod="openstack/ceilometer-0"
Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.926978 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxstl\" (UniqueName: \"kubernetes.io/projected/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-kube-api-access-xxstl\") pod \"ceilometer-0\" (UID: \"3267d91e-9fae-46d3-ba7e-4d06fbe83e00\") " pod="openstack/ceilometer-0"
Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.928972 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-scripts\") pod \"ceilometer-0\" (UID: \"3267d91e-9fae-46d3-ba7e-4d06fbe83e00\") " pod="openstack/ceilometer-0"
Nov 23 07:00:01 crc kubenswrapper[4681]: I1123 07:00:01.929429 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3267d91e-9fae-46d3-ba7e-4d06fbe83e00\") " pod="openstack/ceilometer-0"
Nov 23 07:00:02 crc kubenswrapper[4681]: I1123 07:00:02.005389 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 23 07:00:02 crc kubenswrapper[4681]: I1123 07:00:02.670003 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 07:00:02 crc kubenswrapper[4681]: W1123 07:00:02.710864 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3267d91e_9fae_46d3_ba7e_4d06fbe83e00.slice/crio-c4ffe6862f7bf4a72d0c51e38c9e9167cf62ae7abeaa3a2a5d528b128bbebb1b WatchSource:0}: Error finding container c4ffe6862f7bf4a72d0c51e38c9e9167cf62ae7abeaa3a2a5d528b128bbebb1b: Status 404 returned error can't find the container with id c4ffe6862f7bf4a72d0c51e38c9e9167cf62ae7abeaa3a2a5d528b128bbebb1b
Nov 23 07:00:02 crc kubenswrapper[4681]: I1123 07:00:02.710995 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7849bb5f4-b6pjl" event={"ID":"1f1bbb85-4938-4e69-b236-9c7b17a4636f","Type":"ContainerStarted","Data":"744f9f91a9a3d6d37f3270910f39d6ae2c65e60e9b5df1d69fbba183906cd018"}
Nov 23 07:00:02 crc kubenswrapper[4681]: I1123 07:00:02.711033 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7849bb5f4-b6pjl" event={"ID":"1f1bbb85-4938-4e69-b236-9c7b17a4636f","Type":"ContainerStarted","Data":"fee288326174688acfc704d0a134fa3c92cf62b8447412c0dbcb1207ba50783e"}
Nov 23 07:00:02 crc kubenswrapper[4681]: I1123 07:00:02.732881 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d8405966-0c4a-42eb-bed4-6f6ae19bff63","Type":"ContainerStarted","Data":"a66d39ba6a1af1c7783dca3a4f717993233ecf7a83ce80886ddb0c86398a2ab0"}
Nov 23 07:00:02 crc kubenswrapper[4681]: I1123 07:00:02.736926 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7849bb5f4-b6pjl" podStartSLOduration=9.575865074 podStartE2EDuration="13.736909532s" podCreationTimestamp="2025-11-23 06:59:49 +0000 UTC" firstStartedPulling="2025-11-23 06:59:56.862060251 +0000 UTC m=+933.931569487" lastFinishedPulling="2025-11-23 07:00:01.023104719 +0000 UTC m=+938.092613945" observedRunningTime="2025-11-23 07:00:02.733282376 +0000 UTC m=+939.802791613" watchObservedRunningTime="2025-11-23 07:00:02.736909532 +0000 UTC m=+939.806418769"
Nov 23 07:00:02 crc kubenswrapper[4681]: I1123 07:00:02.759682 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8","Type":"ContainerStarted","Data":"b0782da24ddf077b01e0b01d5272b837490f27415bd0e9adb6c73373f31b7a95"}
Nov 23 07:00:02 crc kubenswrapper[4681]: I1123 07:00:02.759834 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8" containerName="cinder-api-log" containerID="cri-o://78b6577554c52da03eeedb546e474d0478caa8ebec7ae60b7649f901ac7d541d" gracePeriod=30
Nov 23 07:00:02 crc kubenswrapper[4681]: I1123 07:00:02.760077 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Nov 23 07:00:02 crc kubenswrapper[4681]: I1123 07:00:02.760319 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8" containerName="cinder-api" containerID="cri-o://b0782da24ddf077b01e0b01d5272b837490f27415bd0e9adb6c73373f31b7a95" gracePeriod=30
Nov 23 07:00:02 crc kubenswrapper[4681]: I1123 07:00:02.788750 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.788737366 podStartE2EDuration="5.788737366s" podCreationTimestamp="2025-11-23 06:59:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:00:02.783523055 +0000 UTC m=+939.853032292" watchObservedRunningTime="2025-11-23 07:00:02.788737366 +0000 UTC m=+939.858246603"
Nov 23 07:00:02 crc kubenswrapper[4681]: I1123 07:00:02.798369 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7fdd9d555f-qrq8m" event={"ID":"c1a71cfb-fb6a-458b-875d-7beebe8dc444","Type":"ContainerStarted","Data":"cc4ef46c91892f4ee2cc3bea63d183f606d16885090937b806422c57bb9e9b80"}
Nov 23 07:00:02 crc kubenswrapper[4681]: I1123 07:00:02.798426 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7fdd9d555f-qrq8m" event={"ID":"c1a71cfb-fb6a-458b-875d-7beebe8dc444","Type":"ContainerStarted","Data":"841f9f05fd2aa65f04aca54d6850c1e65fd8b9d108d1e19a9a35c5bf46eb42af"}
Nov 23 07:00:02 crc kubenswrapper[4681]: I1123 07:00:02.814156 4681 generic.go:334] "Generic (PLEG): container finished" podID="c443c21f-e6ff-4f01-a598-554f97be2872" containerID="ea41bb3d88498c282ec3c619aaa6fbef303da1dcf6b84c6d6580fd27ce9b132d" exitCode=0
Nov 23 07:00:02 crc kubenswrapper[4681]: I1123 07:00:02.814839 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-n9kl4" event={"ID":"c443c21f-e6ff-4f01-a598-554f97be2872","Type":"ContainerDied","Data":"ea41bb3d88498c282ec3c619aaa6fbef303da1dcf6b84c6d6580fd27ce9b132d"}
Nov 23 07:00:02 crc kubenswrapper[4681]: I1123 07:00:02.837567 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7fdd9d555f-qrq8m" podStartSLOduration=9.633854638 podStartE2EDuration="13.837550309s" podCreationTimestamp="2025-11-23 06:59:49 +0000 UTC" firstStartedPulling="2025-11-23 06:59:56.564703114 +0000 UTC m=+933.634212352" lastFinishedPulling="2025-11-23 07:00:00.768398796 +0000 UTC m=+937.837908023" observedRunningTime="2025-11-23 07:00:02.813578428 +0000 UTC m=+939.883087655" watchObservedRunningTime="2025-11-23 07:00:02.837550309 +0000 UTC m=+939.907059546"
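[editor's note] The startup-latency lines can be sanity-checked from their own fields: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be the same interval minus image-pull time (lastFinishedPulling minus firstStartedPulling); the exact bookkeeping inside the kubelet is an assumption here. For barbican-keystone-listener-7849bb5f4-b6pjl: 13.736909532 s minus (07:00:01.023104719 - 06:59:56.862060251 = 4.161044468 s) gives about 9.575865064 s, matching the logged 9.575865074 within rounding. A back-of-the-envelope check:

package main

import "fmt"

func main() {
	e2e := 13.736909532                    // podStartE2EDuration (s), from the log
	pull := 4.161044468                    // lastFinishedPulling - firstStartedPulling (s)
	fmt.Printf("SLO ~= %.9f s\n", e2e-pull) // ~= 9.575865064; the log says 9.575865074
}

Pods whose pull timestamps are the zero time ("0001-01-01 00:00:00 +0000 UTC", i.e. images already present) report identical SLO and E2E durations, as seen for dnsmasq-dns-64cc7f6975-jn6mr and cinder-api-0 above.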
observedRunningTime="2025-11-23 07:00:02.813578428 +0000 UTC m=+939.883087655" watchObservedRunningTime="2025-11-23 07:00:02.837550309 +0000 UTC m=+939.907059546" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.036258 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7c48d564b8-5tf9h" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.291724 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2483649a-baa7-4c82-92d5-b3e2aff97ab2" path="/var/lib/kubelet/pods/2483649a-baa7-4c82-92d5-b3e2aff97ab2/volumes" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.459362 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.586774 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-scripts\") pod \"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8\" (UID: \"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8\") " Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.587239 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-config-data-custom\") pod \"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8\" (UID: \"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8\") " Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.587284 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-combined-ca-bundle\") pod \"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8\" (UID: \"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8\") " Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.587496 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-logs\") pod \"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8\" (UID: \"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8\") " Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.587544 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kc8r\" (UniqueName: \"kubernetes.io/projected/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-kube-api-access-2kc8r\") pod \"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8\" (UID: \"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8\") " Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.587586 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-etc-machine-id\") pod \"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8\" (UID: \"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8\") " Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.587639 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-config-data\") pod \"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8\" (UID: \"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8\") " Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.593622 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8" (UID: 
"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.596851 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-logs" (OuterVolumeSpecName: "logs") pod "7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8" (UID: "7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.611739 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-kube-api-access-2kc8r" (OuterVolumeSpecName: "kube-api-access-2kc8r") pod "7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8" (UID: "7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8"). InnerVolumeSpecName "kube-api-access-2kc8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.615995 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-scripts" (OuterVolumeSpecName: "scripts") pod "7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8" (UID: "7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.619946 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8" (UID: "7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.638963 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8" (UID: "7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.673638 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-config-data" (OuterVolumeSpecName: "config-data") pod "7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8" (UID: "7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.690089 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.690118 4681 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.690128 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.690136 4681 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.690144 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kc8r\" (UniqueName: \"kubernetes.io/projected/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-kube-api-access-2kc8r\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.690154 4681 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.690162 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.826108 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3267d91e-9fae-46d3-ba7e-4d06fbe83e00","Type":"ContainerStarted","Data":"c4ffe6862f7bf4a72d0c51e38c9e9167cf62ae7abeaa3a2a5d528b128bbebb1b"} Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.827798 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d8405966-0c4a-42eb-bed4-6f6ae19bff63","Type":"ContainerStarted","Data":"80c7e6b21d06301279a5de5f898c444b34afc1fb55df414c1149abb3b481fc45"} Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.833175 4681 generic.go:334] "Generic (PLEG): container finished" podID="7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8" containerID="b0782da24ddf077b01e0b01d5272b837490f27415bd0e9adb6c73373f31b7a95" exitCode=0 Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.833301 4681 generic.go:334] "Generic (PLEG): container finished" podID="7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8" containerID="78b6577554c52da03eeedb546e474d0478caa8ebec7ae60b7649f901ac7d541d" exitCode=143 Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.833660 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.833813 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8","Type":"ContainerDied","Data":"b0782da24ddf077b01e0b01d5272b837490f27415bd0e9adb6c73373f31b7a95"} Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.833888 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8","Type":"ContainerDied","Data":"78b6577554c52da03eeedb546e474d0478caa8ebec7ae60b7649f901ac7d541d"} Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.833904 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8","Type":"ContainerDied","Data":"7f274ff9df459f3e11a66824db9b7ef955cfd3d5e6944531d432b95f4726c090"} Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.833924 4681 scope.go:117] "RemoveContainer" containerID="b0782da24ddf077b01e0b01d5272b837490f27415bd0e9adb6c73373f31b7a95" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.846153 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.862875189 podStartE2EDuration="6.846142899s" podCreationTimestamp="2025-11-23 06:59:57 +0000 UTC" firstStartedPulling="2025-11-23 06:59:58.376560992 +0000 UTC m=+935.446070229" lastFinishedPulling="2025-11-23 07:00:01.359828702 +0000 UTC m=+938.429337939" observedRunningTime="2025-11-23 07:00:03.845167287 +0000 UTC m=+940.914676515" watchObservedRunningTime="2025-11-23 07:00:03.846142899 +0000 UTC m=+940.915652136" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.882100 4681 scope.go:117] "RemoveContainer" containerID="78b6577554c52da03eeedb546e474d0478caa8ebec7ae60b7649f901ac7d541d" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.898697 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.911939 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.917516 4681 scope.go:117] "RemoveContainer" containerID="b0782da24ddf077b01e0b01d5272b837490f27415bd0e9adb6c73373f31b7a95" Nov 23 07:00:03 crc kubenswrapper[4681]: E1123 07:00:03.929132 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0782da24ddf077b01e0b01d5272b837490f27415bd0e9adb6c73373f31b7a95\": container with ID starting with b0782da24ddf077b01e0b01d5272b837490f27415bd0e9adb6c73373f31b7a95 not found: ID does not exist" containerID="b0782da24ddf077b01e0b01d5272b837490f27415bd0e9adb6c73373f31b7a95" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.929170 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0782da24ddf077b01e0b01d5272b837490f27415bd0e9adb6c73373f31b7a95"} err="failed to get container status \"b0782da24ddf077b01e0b01d5272b837490f27415bd0e9adb6c73373f31b7a95\": rpc error: code = NotFound desc = could not find container \"b0782da24ddf077b01e0b01d5272b837490f27415bd0e9adb6c73373f31b7a95\": container with ID starting with b0782da24ddf077b01e0b01d5272b837490f27415bd0e9adb6c73373f31b7a95 not found: ID does not exist" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.929196 4681 scope.go:117] "RemoveContainer" 
containerID="78b6577554c52da03eeedb546e474d0478caa8ebec7ae60b7649f901ac7d541d" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.931680 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 23 07:00:03 crc kubenswrapper[4681]: E1123 07:00:03.932141 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8" containerName="cinder-api-log" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.932159 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8" containerName="cinder-api-log" Nov 23 07:00:03 crc kubenswrapper[4681]: E1123 07:00:03.932173 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8" containerName="cinder-api" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.932180 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8" containerName="cinder-api" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.932363 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8" containerName="cinder-api" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.932385 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8" containerName="cinder-api-log" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.933302 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 23 07:00:03 crc kubenswrapper[4681]: E1123 07:00:03.933691 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78b6577554c52da03eeedb546e474d0478caa8ebec7ae60b7649f901ac7d541d\": container with ID starting with 78b6577554c52da03eeedb546e474d0478caa8ebec7ae60b7649f901ac7d541d not found: ID does not exist" containerID="78b6577554c52da03eeedb546e474d0478caa8ebec7ae60b7649f901ac7d541d" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.933728 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78b6577554c52da03eeedb546e474d0478caa8ebec7ae60b7649f901ac7d541d"} err="failed to get container status \"78b6577554c52da03eeedb546e474d0478caa8ebec7ae60b7649f901ac7d541d\": rpc error: code = NotFound desc = could not find container \"78b6577554c52da03eeedb546e474d0478caa8ebec7ae60b7649f901ac7d541d\": container with ID starting with 78b6577554c52da03eeedb546e474d0478caa8ebec7ae60b7649f901ac7d541d not found: ID does not exist" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.933752 4681 scope.go:117] "RemoveContainer" containerID="b0782da24ddf077b01e0b01d5272b837490f27415bd0e9adb6c73373f31b7a95" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.934384 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0782da24ddf077b01e0b01d5272b837490f27415bd0e9adb6c73373f31b7a95"} err="failed to get container status \"b0782da24ddf077b01e0b01d5272b837490f27415bd0e9adb6c73373f31b7a95\": rpc error: code = NotFound desc = could not find container \"b0782da24ddf077b01e0b01d5272b837490f27415bd0e9adb6c73373f31b7a95\": container with ID starting with b0782da24ddf077b01e0b01d5272b837490f27415bd0e9adb6c73373f31b7a95 not found: ID does not exist" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.934420 4681 scope.go:117] "RemoveContainer" 
containerID="78b6577554c52da03eeedb546e474d0478caa8ebec7ae60b7649f901ac7d541d" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.937694 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78b6577554c52da03eeedb546e474d0478caa8ebec7ae60b7649f901ac7d541d"} err="failed to get container status \"78b6577554c52da03eeedb546e474d0478caa8ebec7ae60b7649f901ac7d541d\": rpc error: code = NotFound desc = could not find container \"78b6577554c52da03eeedb546e474d0478caa8ebec7ae60b7649f901ac7d541d\": container with ID starting with 78b6577554c52da03eeedb546e474d0478caa8ebec7ae60b7649f901ac7d541d not found: ID does not exist" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.940080 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.940154 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.940319 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.946298 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.996070 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f293f80c-7ede-49a7-88d0-c6e41833a75a-config-data-custom\") pod \"cinder-api-0\" (UID: \"f293f80c-7ede-49a7-88d0-c6e41833a75a\") " pod="openstack/cinder-api-0" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.996264 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f293f80c-7ede-49a7-88d0-c6e41833a75a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f293f80c-7ede-49a7-88d0-c6e41833a75a\") " pod="openstack/cinder-api-0" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.996295 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f293f80c-7ede-49a7-88d0-c6e41833a75a-logs\") pod \"cinder-api-0\" (UID: \"f293f80c-7ede-49a7-88d0-c6e41833a75a\") " pod="openstack/cinder-api-0" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.996336 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-868vq\" (UniqueName: \"kubernetes.io/projected/f293f80c-7ede-49a7-88d0-c6e41833a75a-kube-api-access-868vq\") pod \"cinder-api-0\" (UID: \"f293f80c-7ede-49a7-88d0-c6e41833a75a\") " pod="openstack/cinder-api-0" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.996378 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f293f80c-7ede-49a7-88d0-c6e41833a75a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f293f80c-7ede-49a7-88d0-c6e41833a75a\") " pod="openstack/cinder-api-0" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.996448 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f293f80c-7ede-49a7-88d0-c6e41833a75a-config-data\") pod \"cinder-api-0\" (UID: \"f293f80c-7ede-49a7-88d0-c6e41833a75a\") " 
pod="openstack/cinder-api-0" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.996485 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f293f80c-7ede-49a7-88d0-c6e41833a75a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f293f80c-7ede-49a7-88d0-c6e41833a75a\") " pod="openstack/cinder-api-0" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.996650 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f293f80c-7ede-49a7-88d0-c6e41833a75a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f293f80c-7ede-49a7-88d0-c6e41833a75a\") " pod="openstack/cinder-api-0" Nov 23 07:00:03 crc kubenswrapper[4681]: I1123 07:00:03.996762 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f293f80c-7ede-49a7-88d0-c6e41833a75a-scripts\") pod \"cinder-api-0\" (UID: \"f293f80c-7ede-49a7-88d0-c6e41833a75a\") " pod="openstack/cinder-api-0" Nov 23 07:00:04 crc kubenswrapper[4681]: I1123 07:00:04.101513 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f293f80c-7ede-49a7-88d0-c6e41833a75a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f293f80c-7ede-49a7-88d0-c6e41833a75a\") " pod="openstack/cinder-api-0" Nov 23 07:00:04 crc kubenswrapper[4681]: I1123 07:00:04.101719 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f293f80c-7ede-49a7-88d0-c6e41833a75a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f293f80c-7ede-49a7-88d0-c6e41833a75a\") " pod="openstack/cinder-api-0" Nov 23 07:00:04 crc kubenswrapper[4681]: I1123 07:00:04.101914 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f293f80c-7ede-49a7-88d0-c6e41833a75a-scripts\") pod \"cinder-api-0\" (UID: \"f293f80c-7ede-49a7-88d0-c6e41833a75a\") " pod="openstack/cinder-api-0" Nov 23 07:00:04 crc kubenswrapper[4681]: I1123 07:00:04.102026 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f293f80c-7ede-49a7-88d0-c6e41833a75a-config-data-custom\") pod \"cinder-api-0\" (UID: \"f293f80c-7ede-49a7-88d0-c6e41833a75a\") " pod="openstack/cinder-api-0" Nov 23 07:00:04 crc kubenswrapper[4681]: I1123 07:00:04.102410 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f293f80c-7ede-49a7-88d0-c6e41833a75a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f293f80c-7ede-49a7-88d0-c6e41833a75a\") " pod="openstack/cinder-api-0" Nov 23 07:00:04 crc kubenswrapper[4681]: I1123 07:00:04.102477 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f293f80c-7ede-49a7-88d0-c6e41833a75a-logs\") pod \"cinder-api-0\" (UID: \"f293f80c-7ede-49a7-88d0-c6e41833a75a\") " pod="openstack/cinder-api-0" Nov 23 07:00:04 crc kubenswrapper[4681]: I1123 07:00:04.102545 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-868vq\" (UniqueName: \"kubernetes.io/projected/f293f80c-7ede-49a7-88d0-c6e41833a75a-kube-api-access-868vq\") pod \"cinder-api-0\" (UID: 
\"f293f80c-7ede-49a7-88d0-c6e41833a75a\") " pod="openstack/cinder-api-0" Nov 23 07:00:04 crc kubenswrapper[4681]: I1123 07:00:04.102608 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f293f80c-7ede-49a7-88d0-c6e41833a75a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f293f80c-7ede-49a7-88d0-c6e41833a75a\") " pod="openstack/cinder-api-0" Nov 23 07:00:04 crc kubenswrapper[4681]: I1123 07:00:04.102740 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f293f80c-7ede-49a7-88d0-c6e41833a75a-config-data\") pod \"cinder-api-0\" (UID: \"f293f80c-7ede-49a7-88d0-c6e41833a75a\") " pod="openstack/cinder-api-0" Nov 23 07:00:04 crc kubenswrapper[4681]: I1123 07:00:04.102778 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f293f80c-7ede-49a7-88d0-c6e41833a75a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f293f80c-7ede-49a7-88d0-c6e41833a75a\") " pod="openstack/cinder-api-0" Nov 23 07:00:04 crc kubenswrapper[4681]: I1123 07:00:04.103260 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f293f80c-7ede-49a7-88d0-c6e41833a75a-logs\") pod \"cinder-api-0\" (UID: \"f293f80c-7ede-49a7-88d0-c6e41833a75a\") " pod="openstack/cinder-api-0" Nov 23 07:00:04 crc kubenswrapper[4681]: I1123 07:00:04.111315 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f293f80c-7ede-49a7-88d0-c6e41833a75a-scripts\") pod \"cinder-api-0\" (UID: \"f293f80c-7ede-49a7-88d0-c6e41833a75a\") " pod="openstack/cinder-api-0" Nov 23 07:00:04 crc kubenswrapper[4681]: I1123 07:00:04.116949 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f293f80c-7ede-49a7-88d0-c6e41833a75a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f293f80c-7ede-49a7-88d0-c6e41833a75a\") " pod="openstack/cinder-api-0" Nov 23 07:00:04 crc kubenswrapper[4681]: I1123 07:00:04.120996 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f293f80c-7ede-49a7-88d0-c6e41833a75a-config-data-custom\") pod \"cinder-api-0\" (UID: \"f293f80c-7ede-49a7-88d0-c6e41833a75a\") " pod="openstack/cinder-api-0" Nov 23 07:00:04 crc kubenswrapper[4681]: I1123 07:00:04.125475 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-868vq\" (UniqueName: \"kubernetes.io/projected/f293f80c-7ede-49a7-88d0-c6e41833a75a-kube-api-access-868vq\") pod \"cinder-api-0\" (UID: \"f293f80c-7ede-49a7-88d0-c6e41833a75a\") " pod="openstack/cinder-api-0" Nov 23 07:00:04 crc kubenswrapper[4681]: I1123 07:00:04.144581 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f293f80c-7ede-49a7-88d0-c6e41833a75a-config-data\") pod \"cinder-api-0\" (UID: \"f293f80c-7ede-49a7-88d0-c6e41833a75a\") " pod="openstack/cinder-api-0" Nov 23 07:00:04 crc kubenswrapper[4681]: I1123 07:00:04.150909 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f293f80c-7ede-49a7-88d0-c6e41833a75a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f293f80c-7ede-49a7-88d0-c6e41833a75a\") " 
pod="openstack/cinder-api-0" Nov 23 07:00:04 crc kubenswrapper[4681]: I1123 07:00:04.200197 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f293f80c-7ede-49a7-88d0-c6e41833a75a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f293f80c-7ede-49a7-88d0-c6e41833a75a\") " pod="openstack/cinder-api-0" Nov 23 07:00:04 crc kubenswrapper[4681]: I1123 07:00:04.273695 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 23 07:00:04 crc kubenswrapper[4681]: I1123 07:00:04.317765 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-n9kl4" Nov 23 07:00:04 crc kubenswrapper[4681]: I1123 07:00:04.520146 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c443c21f-e6ff-4f01-a598-554f97be2872-config-volume\") pod \"c443c21f-e6ff-4f01-a598-554f97be2872\" (UID: \"c443c21f-e6ff-4f01-a598-554f97be2872\") " Nov 23 07:00:04 crc kubenswrapper[4681]: I1123 07:00:04.520608 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c443c21f-e6ff-4f01-a598-554f97be2872-secret-volume\") pod \"c443c21f-e6ff-4f01-a598-554f97be2872\" (UID: \"c443c21f-e6ff-4f01-a598-554f97be2872\") " Nov 23 07:00:04 crc kubenswrapper[4681]: I1123 07:00:04.520727 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5kxl\" (UniqueName: \"kubernetes.io/projected/c443c21f-e6ff-4f01-a598-554f97be2872-kube-api-access-g5kxl\") pod \"c443c21f-e6ff-4f01-a598-554f97be2872\" (UID: \"c443c21f-e6ff-4f01-a598-554f97be2872\") " Nov 23 07:00:04 crc kubenswrapper[4681]: I1123 07:00:04.521087 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c443c21f-e6ff-4f01-a598-554f97be2872-config-volume" (OuterVolumeSpecName: "config-volume") pod "c443c21f-e6ff-4f01-a598-554f97be2872" (UID: "c443c21f-e6ff-4f01-a598-554f97be2872"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:00:04 crc kubenswrapper[4681]: I1123 07:00:04.532020 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c443c21f-e6ff-4f01-a598-554f97be2872-kube-api-access-g5kxl" (OuterVolumeSpecName: "kube-api-access-g5kxl") pod "c443c21f-e6ff-4f01-a598-554f97be2872" (UID: "c443c21f-e6ff-4f01-a598-554f97be2872"). InnerVolumeSpecName "kube-api-access-g5kxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:00:04 crc kubenswrapper[4681]: I1123 07:00:04.533578 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c443c21f-e6ff-4f01-a598-554f97be2872-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c443c21f-e6ff-4f01-a598-554f97be2872" (UID: "c443c21f-e6ff-4f01-a598-554f97be2872"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:04 crc kubenswrapper[4681]: I1123 07:00:04.625100 4681 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c443c21f-e6ff-4f01-a598-554f97be2872-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:04 crc kubenswrapper[4681]: I1123 07:00:04.625155 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5kxl\" (UniqueName: \"kubernetes.io/projected/c443c21f-e6ff-4f01-a598-554f97be2872-kube-api-access-g5kxl\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:04 crc kubenswrapper[4681]: I1123 07:00:04.625167 4681 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c443c21f-e6ff-4f01-a598-554f97be2872-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:04 crc kubenswrapper[4681]: I1123 07:00:04.661977 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-66f57f4546-f9rcd" Nov 23 07:00:04 crc kubenswrapper[4681]: I1123 07:00:04.760879 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-66f57f4546-f9rcd" Nov 23 07:00:04 crc kubenswrapper[4681]: I1123 07:00:04.850970 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-n9kl4" event={"ID":"c443c21f-e6ff-4f01-a598-554f97be2872","Type":"ContainerDied","Data":"5c7d2c898f67cd22313ca3ea16702a7ab2fe3ad5eac75647b2290d6df66a71d2"} Nov 23 07:00:04 crc kubenswrapper[4681]: I1123 07:00:04.851006 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c7d2c898f67cd22313ca3ea16702a7ab2fe3ad5eac75647b2290d6df66a71d2" Nov 23 07:00:04 crc kubenswrapper[4681]: I1123 07:00:04.851077 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-n9kl4" Nov 23 07:00:04 crc kubenswrapper[4681]: I1123 07:00:04.861545 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3267d91e-9fae-46d3-ba7e-4d06fbe83e00","Type":"ContainerStarted","Data":"3bf2a8092b8fd5f071dfa7a4fb0bf933e8c96c280e5d059baf03813d788a2d55"} Nov 23 07:00:04 crc kubenswrapper[4681]: I1123 07:00:04.869128 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 23 07:00:05 crc kubenswrapper[4681]: I1123 07:00:05.260391 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8" path="/var/lib/kubelet/pods/7a5e9c77-80c4-4040-91e1-2b39e8ec2cf8/volumes" Nov 23 07:00:05 crc kubenswrapper[4681]: I1123 07:00:05.904919 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3267d91e-9fae-46d3-ba7e-4d06fbe83e00","Type":"ContainerStarted","Data":"a9df8b55b03c2396f8f3416768057a7ea2f2364ed1c8a6aa803736aa3fa88f73"} Nov 23 07:00:05 crc kubenswrapper[4681]: I1123 07:00:05.925256 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f293f80c-7ede-49a7-88d0-c6e41833a75a","Type":"ContainerStarted","Data":"f7a8bc96d18c5be0d066b12cc369e6db28d9495c8abf764f08690871dce93288"} Nov 23 07:00:06 crc kubenswrapper[4681]: I1123 07:00:06.646210 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7c48d564b8-5tf9h" Nov 23 07:00:06 crc kubenswrapper[4681]: I1123 07:00:06.954684 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3267d91e-9fae-46d3-ba7e-4d06fbe83e00","Type":"ContainerStarted","Data":"49a1bc267ad6274abf7f72c92b89ebc5f76a11e2aee85273853d219601557b96"} Nov 23 07:00:06 crc kubenswrapper[4681]: I1123 07:00:06.964696 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f293f80c-7ede-49a7-88d0-c6e41833a75a","Type":"ContainerStarted","Data":"83f8fd815ee80db9d9be14c35eb0461b81684d3c46207693623abd2ced906fae"} Nov 23 07:00:07 crc kubenswrapper[4681]: I1123 07:00:07.596107 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 23 07:00:07 crc kubenswrapper[4681]: I1123 07:00:07.616116 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6b7dd84c8b-57zgx" Nov 23 07:00:07 crc kubenswrapper[4681]: I1123 07:00:07.932625 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-64cc7f6975-jn6mr" Nov 23 07:00:07 crc kubenswrapper[4681]: I1123 07:00:07.974012 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f293f80c-7ede-49a7-88d0-c6e41833a75a","Type":"ContainerStarted","Data":"d0077ee7b961595ca226e7c3c38b17a29827368a5531e0bcafcf761e39ab63f5"} Nov 23 07:00:07 crc kubenswrapper[4681]: I1123 07:00:07.975198 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 23 07:00:08 crc kubenswrapper[4681]: I1123 07:00:08.006280 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ccdb5d4d7-892kp"] Nov 23 07:00:08 crc kubenswrapper[4681]: I1123 07:00:08.006495 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7ccdb5d4d7-892kp" 
podUID="ce77efdd-12fa-4f7c-9268-05c1634d7da3" containerName="dnsmasq-dns" containerID="cri-o://5d3b9b18c19d40b4875a0795be196763470210354d7a0ac2916665447d7ced82" gracePeriod=10 Nov 23 07:00:08 crc kubenswrapper[4681]: I1123 07:00:08.023737 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.023716702 podStartE2EDuration="5.023716702s" podCreationTimestamp="2025-11-23 07:00:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:00:08.021639712 +0000 UTC m=+945.091148949" watchObservedRunningTime="2025-11-23 07:00:08.023716702 +0000 UTC m=+945.093225939" Nov 23 07:00:08 crc kubenswrapper[4681]: I1123 07:00:08.294568 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="d8405966-0c4a-42eb-bed4-6f6ae19bff63" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 07:00:08 crc kubenswrapper[4681]: I1123 07:00:08.381609 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7ccdb5d4d7-892kp" podUID="ce77efdd-12fa-4f7c-9268-05c1634d7da3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.157:5353: connect: connection refused" Nov 23 07:00:08 crc kubenswrapper[4681]: I1123 07:00:08.587723 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b7dd84c8b-57zgx" podUID="69085e5b-69b6-421a-aaa2-066bb27620d1" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.168:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 23 07:00:08 crc kubenswrapper[4681]: I1123 07:00:08.773704 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6bb6dddd54-bttkq" Nov 23 07:00:09 crc kubenswrapper[4681]: I1123 07:00:09.004679 4681 generic.go:334] "Generic (PLEG): container finished" podID="ce77efdd-12fa-4f7c-9268-05c1634d7da3" containerID="5d3b9b18c19d40b4875a0795be196763470210354d7a0ac2916665447d7ced82" exitCode=0 Nov 23 07:00:09 crc kubenswrapper[4681]: I1123 07:00:09.004744 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ccdb5d4d7-892kp" event={"ID":"ce77efdd-12fa-4f7c-9268-05c1634d7da3","Type":"ContainerDied","Data":"5d3b9b18c19d40b4875a0795be196763470210354d7a0ac2916665447d7ced82"} Nov 23 07:00:09 crc kubenswrapper[4681]: I1123 07:00:09.004775 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ccdb5d4d7-892kp" event={"ID":"ce77efdd-12fa-4f7c-9268-05c1634d7da3","Type":"ContainerDied","Data":"6068fbdae34c136b157e88273284f026770f032e0ee0dff0e2204c2d34b94d54"} Nov 23 07:00:09 crc kubenswrapper[4681]: I1123 07:00:09.004788 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6068fbdae34c136b157e88273284f026770f032e0ee0dff0e2204c2d34b94d54" Nov 23 07:00:09 crc kubenswrapper[4681]: I1123 07:00:09.036520 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3267d91e-9fae-46d3-ba7e-4d06fbe83e00","Type":"ContainerStarted","Data":"f01edf17be5051ba24728ae15b0d1a97d56b1e8fe48e616e405b94c28281321f"} Nov 23 07:00:09 crc kubenswrapper[4681]: I1123 07:00:09.036603 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 23 07:00:09 crc kubenswrapper[4681]: I1123 07:00:09.051028 
4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ccdb5d4d7-892kp" Nov 23 07:00:09 crc kubenswrapper[4681]: I1123 07:00:09.081604 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.998921987 podStartE2EDuration="8.081587012s" podCreationTimestamp="2025-11-23 07:00:01 +0000 UTC" firstStartedPulling="2025-11-23 07:00:02.71885097 +0000 UTC m=+939.788360198" lastFinishedPulling="2025-11-23 07:00:07.801515986 +0000 UTC m=+944.871025223" observedRunningTime="2025-11-23 07:00:09.072781112 +0000 UTC m=+946.142290349" watchObservedRunningTime="2025-11-23 07:00:09.081587012 +0000 UTC m=+946.151096249" Nov 23 07:00:09 crc kubenswrapper[4681]: I1123 07:00:09.139613 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce77efdd-12fa-4f7c-9268-05c1634d7da3-dns-svc\") pod \"ce77efdd-12fa-4f7c-9268-05c1634d7da3\" (UID: \"ce77efdd-12fa-4f7c-9268-05c1634d7da3\") " Nov 23 07:00:09 crc kubenswrapper[4681]: I1123 07:00:09.139701 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcgnr\" (UniqueName: \"kubernetes.io/projected/ce77efdd-12fa-4f7c-9268-05c1634d7da3-kube-api-access-tcgnr\") pod \"ce77efdd-12fa-4f7c-9268-05c1634d7da3\" (UID: \"ce77efdd-12fa-4f7c-9268-05c1634d7da3\") " Nov 23 07:00:09 crc kubenswrapper[4681]: I1123 07:00:09.139827 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce77efdd-12fa-4f7c-9268-05c1634d7da3-ovsdbserver-sb\") pod \"ce77efdd-12fa-4f7c-9268-05c1634d7da3\" (UID: \"ce77efdd-12fa-4f7c-9268-05c1634d7da3\") " Nov 23 07:00:09 crc kubenswrapper[4681]: I1123 07:00:09.139846 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce77efdd-12fa-4f7c-9268-05c1634d7da3-dns-swift-storage-0\") pod \"ce77efdd-12fa-4f7c-9268-05c1634d7da3\" (UID: \"ce77efdd-12fa-4f7c-9268-05c1634d7da3\") " Nov 23 07:00:09 crc kubenswrapper[4681]: I1123 07:00:09.139946 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce77efdd-12fa-4f7c-9268-05c1634d7da3-ovsdbserver-nb\") pod \"ce77efdd-12fa-4f7c-9268-05c1634d7da3\" (UID: \"ce77efdd-12fa-4f7c-9268-05c1634d7da3\") " Nov 23 07:00:09 crc kubenswrapper[4681]: I1123 07:00:09.140007 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce77efdd-12fa-4f7c-9268-05c1634d7da3-config\") pod \"ce77efdd-12fa-4f7c-9268-05c1634d7da3\" (UID: \"ce77efdd-12fa-4f7c-9268-05c1634d7da3\") " Nov 23 07:00:09 crc kubenswrapper[4681]: I1123 07:00:09.159726 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce77efdd-12fa-4f7c-9268-05c1634d7da3-kube-api-access-tcgnr" (OuterVolumeSpecName: "kube-api-access-tcgnr") pod "ce77efdd-12fa-4f7c-9268-05c1634d7da3" (UID: "ce77efdd-12fa-4f7c-9268-05c1634d7da3"). InnerVolumeSpecName "kube-api-access-tcgnr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:00:09 crc kubenswrapper[4681]: I1123 07:00:09.234008 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce77efdd-12fa-4f7c-9268-05c1634d7da3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ce77efdd-12fa-4f7c-9268-05c1634d7da3" (UID: "ce77efdd-12fa-4f7c-9268-05c1634d7da3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:00:09 crc kubenswrapper[4681]: I1123 07:00:09.243007 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcgnr\" (UniqueName: \"kubernetes.io/projected/ce77efdd-12fa-4f7c-9268-05c1634d7da3-kube-api-access-tcgnr\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:09 crc kubenswrapper[4681]: I1123 07:00:09.243035 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce77efdd-12fa-4f7c-9268-05c1634d7da3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:09 crc kubenswrapper[4681]: I1123 07:00:09.244316 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce77efdd-12fa-4f7c-9268-05c1634d7da3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ce77efdd-12fa-4f7c-9268-05c1634d7da3" (UID: "ce77efdd-12fa-4f7c-9268-05c1634d7da3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:00:09 crc kubenswrapper[4681]: I1123 07:00:09.245937 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce77efdd-12fa-4f7c-9268-05c1634d7da3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ce77efdd-12fa-4f7c-9268-05c1634d7da3" (UID: "ce77efdd-12fa-4f7c-9268-05c1634d7da3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:00:09 crc kubenswrapper[4681]: I1123 07:00:09.263006 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce77efdd-12fa-4f7c-9268-05c1634d7da3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ce77efdd-12fa-4f7c-9268-05c1634d7da3" (UID: "ce77efdd-12fa-4f7c-9268-05c1634d7da3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:00:09 crc kubenswrapper[4681]: I1123 07:00:09.291664 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce77efdd-12fa-4f7c-9268-05c1634d7da3-config" (OuterVolumeSpecName: "config") pod "ce77efdd-12fa-4f7c-9268-05c1634d7da3" (UID: "ce77efdd-12fa-4f7c-9268-05c1634d7da3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:00:09 crc kubenswrapper[4681]: I1123 07:00:09.344680 4681 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce77efdd-12fa-4f7c-9268-05c1634d7da3-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:09 crc kubenswrapper[4681]: I1123 07:00:09.344730 4681 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce77efdd-12fa-4f7c-9268-05c1634d7da3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:09 crc kubenswrapper[4681]: I1123 07:00:09.344743 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce77efdd-12fa-4f7c-9268-05c1634d7da3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:09 crc kubenswrapper[4681]: I1123 07:00:09.344757 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce77efdd-12fa-4f7c-9268-05c1634d7da3-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:09 crc kubenswrapper[4681]: I1123 07:00:09.445920 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-54b559f6bf-jcd2p" Nov 23 07:00:09 crc kubenswrapper[4681]: I1123 07:00:09.953905 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6bb6dddd54-bttkq" podUID="97338e7f-0f80-4f47-905f-59df8aef837b" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 07:00:10 crc kubenswrapper[4681]: I1123 07:00:10.083733 4681 generic.go:334] "Generic (PLEG): container finished" podID="abe896c0-87f4-4c4c-b23a-81a10a557aed" containerID="7cc2ab3f82b6b7f29bfde6f35b40da9fdbb3b525f8acef809a105402bf70e395" exitCode=0 Nov 23 07:00:10 crc kubenswrapper[4681]: I1123 07:00:10.083820 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ccdb5d4d7-892kp" Nov 23 07:00:10 crc kubenswrapper[4681]: I1123 07:00:10.088425 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-759dcb765b-std9h" event={"ID":"abe896c0-87f4-4c4c-b23a-81a10a557aed","Type":"ContainerDied","Data":"7cc2ab3f82b6b7f29bfde6f35b40da9fdbb3b525f8acef809a105402bf70e395"} Nov 23 07:00:10 crc kubenswrapper[4681]: I1123 07:00:10.141425 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-759dcb765b-std9h" Nov 23 07:00:10 crc kubenswrapper[4681]: I1123 07:00:10.221306 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ccdb5d4d7-892kp"] Nov 23 07:00:10 crc kubenswrapper[4681]: I1123 07:00:10.230937 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7ccdb5d4d7-892kp"] Nov 23 07:00:10 crc kubenswrapper[4681]: I1123 07:00:10.273639 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxdnh\" (UniqueName: \"kubernetes.io/projected/abe896c0-87f4-4c4c-b23a-81a10a557aed-kube-api-access-vxdnh\") pod \"abe896c0-87f4-4c4c-b23a-81a10a557aed\" (UID: \"abe896c0-87f4-4c4c-b23a-81a10a557aed\") " Nov 23 07:00:10 crc kubenswrapper[4681]: I1123 07:00:10.273804 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/abe896c0-87f4-4c4c-b23a-81a10a557aed-httpd-config\") pod \"abe896c0-87f4-4c4c-b23a-81a10a557aed\" (UID: \"abe896c0-87f4-4c4c-b23a-81a10a557aed\") " Nov 23 07:00:10 crc kubenswrapper[4681]: I1123 07:00:10.273964 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/abe896c0-87f4-4c4c-b23a-81a10a557aed-config\") pod \"abe896c0-87f4-4c4c-b23a-81a10a557aed\" (UID: \"abe896c0-87f4-4c4c-b23a-81a10a557aed\") " Nov 23 07:00:10 crc kubenswrapper[4681]: I1123 07:00:10.274498 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe896c0-87f4-4c4c-b23a-81a10a557aed-combined-ca-bundle\") pod \"abe896c0-87f4-4c4c-b23a-81a10a557aed\" (UID: \"abe896c0-87f4-4c4c-b23a-81a10a557aed\") " Nov 23 07:00:10 crc kubenswrapper[4681]: I1123 07:00:10.274808 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/abe896c0-87f4-4c4c-b23a-81a10a557aed-ovndb-tls-certs\") pod \"abe896c0-87f4-4c4c-b23a-81a10a557aed\" (UID: \"abe896c0-87f4-4c4c-b23a-81a10a557aed\") " Nov 23 07:00:10 crc kubenswrapper[4681]: I1123 07:00:10.293642 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abe896c0-87f4-4c4c-b23a-81a10a557aed-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "abe896c0-87f4-4c4c-b23a-81a10a557aed" (UID: "abe896c0-87f4-4c4c-b23a-81a10a557aed"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:10 crc kubenswrapper[4681]: I1123 07:00:10.311322 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abe896c0-87f4-4c4c-b23a-81a10a557aed-kube-api-access-vxdnh" (OuterVolumeSpecName: "kube-api-access-vxdnh") pod "abe896c0-87f4-4c4c-b23a-81a10a557aed" (UID: "abe896c0-87f4-4c4c-b23a-81a10a557aed"). InnerVolumeSpecName "kube-api-access-vxdnh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:00:10 crc kubenswrapper[4681]: I1123 07:00:10.377346 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxdnh\" (UniqueName: \"kubernetes.io/projected/abe896c0-87f4-4c4c-b23a-81a10a557aed-kube-api-access-vxdnh\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:10 crc kubenswrapper[4681]: I1123 07:00:10.377659 4681 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/abe896c0-87f4-4c4c-b23a-81a10a557aed-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:10 crc kubenswrapper[4681]: I1123 07:00:10.409645 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abe896c0-87f4-4c4c-b23a-81a10a557aed-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "abe896c0-87f4-4c4c-b23a-81a10a557aed" (UID: "abe896c0-87f4-4c4c-b23a-81a10a557aed"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:10 crc kubenswrapper[4681]: I1123 07:00:10.427571 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abe896c0-87f4-4c4c-b23a-81a10a557aed-config" (OuterVolumeSpecName: "config") pod "abe896c0-87f4-4c4c-b23a-81a10a557aed" (UID: "abe896c0-87f4-4c4c-b23a-81a10a557aed"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:10 crc kubenswrapper[4681]: I1123 07:00:10.472256 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abe896c0-87f4-4c4c-b23a-81a10a557aed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "abe896c0-87f4-4c4c-b23a-81a10a557aed" (UID: "abe896c0-87f4-4c4c-b23a-81a10a557aed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:10 crc kubenswrapper[4681]: I1123 07:00:10.480155 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/abe896c0-87f4-4c4c-b23a-81a10a557aed-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:10 crc kubenswrapper[4681]: I1123 07:00:10.480191 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe896c0-87f4-4c4c-b23a-81a10a557aed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:10 crc kubenswrapper[4681]: I1123 07:00:10.480203 4681 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/abe896c0-87f4-4c4c-b23a-81a10a557aed-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:11 crc kubenswrapper[4681]: I1123 07:00:11.096671 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-759dcb765b-std9h" event={"ID":"abe896c0-87f4-4c4c-b23a-81a10a557aed","Type":"ContainerDied","Data":"836e9d59fe78695ab2f6efe33e5045d1483ae7f356ffc0836841859f7044a265"} Nov 23 07:00:11 crc kubenswrapper[4681]: I1123 07:00:11.097021 4681 scope.go:117] "RemoveContainer" containerID="aa66d58e3366f90d416b2c24908e0e060d82706229b4fad9d8e1cd986edae3bf" Nov 23 07:00:11 crc kubenswrapper[4681]: I1123 07:00:11.096793 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-759dcb765b-std9h" Nov 23 07:00:11 crc kubenswrapper[4681]: I1123 07:00:11.134473 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-759dcb765b-std9h"] Nov 23 07:00:11 crc kubenswrapper[4681]: I1123 07:00:11.139705 4681 scope.go:117] "RemoveContainer" containerID="7cc2ab3f82b6b7f29bfde6f35b40da9fdbb3b525f8acef809a105402bf70e395" Nov 23 07:00:11 crc kubenswrapper[4681]: I1123 07:00:11.145354 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-759dcb765b-std9h"] Nov 23 07:00:11 crc kubenswrapper[4681]: I1123 07:00:11.260563 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abe896c0-87f4-4c4c-b23a-81a10a557aed" path="/var/lib/kubelet/pods/abe896c0-87f4-4c4c-b23a-81a10a557aed/volumes" Nov 23 07:00:11 crc kubenswrapper[4681]: I1123 07:00:11.261231 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce77efdd-12fa-4f7c-9268-05c1634d7da3" path="/var/lib/kubelet/pods/ce77efdd-12fa-4f7c-9268-05c1634d7da3/volumes" Nov 23 07:00:11 crc kubenswrapper[4681]: I1123 07:00:11.476727 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6b7dd84c8b-57zgx" Nov 23 07:00:11 crc kubenswrapper[4681]: I1123 07:00:11.545315 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6bb6dddd54-bttkq"] Nov 23 07:00:11 crc kubenswrapper[4681]: I1123 07:00:11.545469 4681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 07:00:11 crc kubenswrapper[4681]: I1123 07:00:11.545590 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6bb6dddd54-bttkq" podUID="97338e7f-0f80-4f47-905f-59df8aef837b" containerName="barbican-api-log" containerID="cri-o://3633fb223f3e68fe2602637082e2017e270a3ef6347d5aee46024a89ba1c39db" gracePeriod=30 Nov 23 07:00:11 crc kubenswrapper[4681]: I1123 07:00:11.545726 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6bb6dddd54-bttkq" podUID="97338e7f-0f80-4f47-905f-59df8aef837b" containerName="barbican-api" containerID="cri-o://686d1e2adbe17aaea30b12fb22e427bff70017300747d7a15e4b5854e1e362bd" gracePeriod=30 Nov 23 07:00:11 crc kubenswrapper[4681]: I1123 07:00:11.562298 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6bb6dddd54-bttkq" podUID="97338e7f-0f80-4f47-905f-59df8aef837b" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": EOF" Nov 23 07:00:11 crc kubenswrapper[4681]: I1123 07:00:11.562541 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6bb6dddd54-bttkq" podUID="97338e7f-0f80-4f47-905f-59df8aef837b" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": EOF" Nov 23 07:00:11 crc kubenswrapper[4681]: I1123 07:00:11.562567 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6bb6dddd54-bttkq" podUID="97338e7f-0f80-4f47-905f-59df8aef837b" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": EOF" Nov 23 07:00:11 crc kubenswrapper[4681]: I1123 07:00:11.562593 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6bb6dddd54-bttkq" podUID="97338e7f-0f80-4f47-905f-59df8aef837b" containerName="barbican-api" probeResult="failure" output="Get 
\"http://10.217.0.167:9311/healthcheck\": EOF" Nov 23 07:00:12 crc kubenswrapper[4681]: I1123 07:00:12.110365 4681 generic.go:334] "Generic (PLEG): container finished" podID="97338e7f-0f80-4f47-905f-59df8aef837b" containerID="3633fb223f3e68fe2602637082e2017e270a3ef6347d5aee46024a89ba1c39db" exitCode=143 Nov 23 07:00:12 crc kubenswrapper[4681]: I1123 07:00:12.110448 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bb6dddd54-bttkq" event={"ID":"97338e7f-0f80-4f47-905f-59df8aef837b","Type":"ContainerDied","Data":"3633fb223f3e68fe2602637082e2017e270a3ef6347d5aee46024a89ba1c39db"} Nov 23 07:00:12 crc kubenswrapper[4681]: I1123 07:00:12.295560 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:00:12 crc kubenswrapper[4681]: I1123 07:00:12.295667 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:00:12 crc kubenswrapper[4681]: I1123 07:00:12.614282 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 23 07:00:12 crc kubenswrapper[4681]: I1123 07:00:12.666498 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 07:00:13 crc kubenswrapper[4681]: I1123 07:00:13.092560 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 23 07:00:13 crc kubenswrapper[4681]: E1123 07:00:13.092976 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abe896c0-87f4-4c4c-b23a-81a10a557aed" containerName="neutron-httpd" Nov 23 07:00:13 crc kubenswrapper[4681]: I1123 07:00:13.092992 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="abe896c0-87f4-4c4c-b23a-81a10a557aed" containerName="neutron-httpd" Nov 23 07:00:13 crc kubenswrapper[4681]: E1123 07:00:13.093017 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce77efdd-12fa-4f7c-9268-05c1634d7da3" containerName="dnsmasq-dns" Nov 23 07:00:13 crc kubenswrapper[4681]: I1123 07:00:13.093024 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce77efdd-12fa-4f7c-9268-05c1634d7da3" containerName="dnsmasq-dns" Nov 23 07:00:13 crc kubenswrapper[4681]: E1123 07:00:13.093038 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c443c21f-e6ff-4f01-a598-554f97be2872" containerName="collect-profiles" Nov 23 07:00:13 crc kubenswrapper[4681]: I1123 07:00:13.093044 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="c443c21f-e6ff-4f01-a598-554f97be2872" containerName="collect-profiles" Nov 23 07:00:13 crc kubenswrapper[4681]: E1123 07:00:13.093059 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce77efdd-12fa-4f7c-9268-05c1634d7da3" containerName="init" Nov 23 07:00:13 crc kubenswrapper[4681]: I1123 07:00:13.093064 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce77efdd-12fa-4f7c-9268-05c1634d7da3" containerName="init" Nov 23 07:00:13 crc kubenswrapper[4681]: E1123 07:00:13.093074 4681 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="abe896c0-87f4-4c4c-b23a-81a10a557aed" containerName="neutron-api" Nov 23 07:00:13 crc kubenswrapper[4681]: I1123 07:00:13.093080 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="abe896c0-87f4-4c4c-b23a-81a10a557aed" containerName="neutron-api" Nov 23 07:00:13 crc kubenswrapper[4681]: I1123 07:00:13.093268 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="abe896c0-87f4-4c4c-b23a-81a10a557aed" containerName="neutron-httpd" Nov 23 07:00:13 crc kubenswrapper[4681]: I1123 07:00:13.093286 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce77efdd-12fa-4f7c-9268-05c1634d7da3" containerName="dnsmasq-dns" Nov 23 07:00:13 crc kubenswrapper[4681]: I1123 07:00:13.093294 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="c443c21f-e6ff-4f01-a598-554f97be2872" containerName="collect-profiles" Nov 23 07:00:13 crc kubenswrapper[4681]: I1123 07:00:13.093303 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="abe896c0-87f4-4c4c-b23a-81a10a557aed" containerName="neutron-api" Nov 23 07:00:13 crc kubenswrapper[4681]: I1123 07:00:13.094006 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 23 07:00:13 crc kubenswrapper[4681]: I1123 07:00:13.096562 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-pd852" Nov 23 07:00:13 crc kubenswrapper[4681]: I1123 07:00:13.096854 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 23 07:00:13 crc kubenswrapper[4681]: I1123 07:00:13.097066 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 23 07:00:13 crc kubenswrapper[4681]: I1123 07:00:13.105047 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 23 07:00:13 crc kubenswrapper[4681]: I1123 07:00:13.130953 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d8405966-0c4a-42eb-bed4-6f6ae19bff63" containerName="cinder-scheduler" containerID="cri-o://a66d39ba6a1af1c7783dca3a4f717993233ecf7a83ce80886ddb0c86398a2ab0" gracePeriod=30 Nov 23 07:00:13 crc kubenswrapper[4681]: I1123 07:00:13.131324 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d8405966-0c4a-42eb-bed4-6f6ae19bff63" containerName="probe" containerID="cri-o://80c7e6b21d06301279a5de5f898c444b34afc1fb55df414c1149abb3b481fc45" gracePeriod=30 Nov 23 07:00:13 crc kubenswrapper[4681]: I1123 07:00:13.235868 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v9p7\" (UniqueName: \"kubernetes.io/projected/03d2f9d1-c437-447b-a2f4-c2994aad12ee-kube-api-access-7v9p7\") pod \"openstackclient\" (UID: \"03d2f9d1-c437-447b-a2f4-c2994aad12ee\") " pod="openstack/openstackclient" Nov 23 07:00:13 crc kubenswrapper[4681]: I1123 07:00:13.236035 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03d2f9d1-c437-447b-a2f4-c2994aad12ee-combined-ca-bundle\") pod \"openstackclient\" (UID: \"03d2f9d1-c437-447b-a2f4-c2994aad12ee\") " pod="openstack/openstackclient" Nov 23 07:00:13 crc kubenswrapper[4681]: I1123 07:00:13.236131 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"openstack-config\" (UniqueName: \"kubernetes.io/configmap/03d2f9d1-c437-447b-a2f4-c2994aad12ee-openstack-config\") pod \"openstackclient\" (UID: \"03d2f9d1-c437-447b-a2f4-c2994aad12ee\") " pod="openstack/openstackclient" Nov 23 07:00:13 crc kubenswrapper[4681]: I1123 07:00:13.236160 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/03d2f9d1-c437-447b-a2f4-c2994aad12ee-openstack-config-secret\") pod \"openstackclient\" (UID: \"03d2f9d1-c437-447b-a2f4-c2994aad12ee\") " pod="openstack/openstackclient" Nov 23 07:00:13 crc kubenswrapper[4681]: I1123 07:00:13.339154 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03d2f9d1-c437-447b-a2f4-c2994aad12ee-combined-ca-bundle\") pod \"openstackclient\" (UID: \"03d2f9d1-c437-447b-a2f4-c2994aad12ee\") " pod="openstack/openstackclient" Nov 23 07:00:13 crc kubenswrapper[4681]: I1123 07:00:13.339252 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/03d2f9d1-c437-447b-a2f4-c2994aad12ee-openstack-config\") pod \"openstackclient\" (UID: \"03d2f9d1-c437-447b-a2f4-c2994aad12ee\") " pod="openstack/openstackclient" Nov 23 07:00:13 crc kubenswrapper[4681]: I1123 07:00:13.339300 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/03d2f9d1-c437-447b-a2f4-c2994aad12ee-openstack-config-secret\") pod \"openstackclient\" (UID: \"03d2f9d1-c437-447b-a2f4-c2994aad12ee\") " pod="openstack/openstackclient" Nov 23 07:00:13 crc kubenswrapper[4681]: I1123 07:00:13.340327 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/03d2f9d1-c437-447b-a2f4-c2994aad12ee-openstack-config\") pod \"openstackclient\" (UID: \"03d2f9d1-c437-447b-a2f4-c2994aad12ee\") " pod="openstack/openstackclient" Nov 23 07:00:13 crc kubenswrapper[4681]: I1123 07:00:13.340430 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v9p7\" (UniqueName: \"kubernetes.io/projected/03d2f9d1-c437-447b-a2f4-c2994aad12ee-kube-api-access-7v9p7\") pod \"openstackclient\" (UID: \"03d2f9d1-c437-447b-a2f4-c2994aad12ee\") " pod="openstack/openstackclient" Nov 23 07:00:13 crc kubenswrapper[4681]: I1123 07:00:13.346941 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/03d2f9d1-c437-447b-a2f4-c2994aad12ee-openstack-config-secret\") pod \"openstackclient\" (UID: \"03d2f9d1-c437-447b-a2f4-c2994aad12ee\") " pod="openstack/openstackclient" Nov 23 07:00:13 crc kubenswrapper[4681]: I1123 07:00:13.352154 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03d2f9d1-c437-447b-a2f4-c2994aad12ee-combined-ca-bundle\") pod \"openstackclient\" (UID: \"03d2f9d1-c437-447b-a2f4-c2994aad12ee\") " pod="openstack/openstackclient" Nov 23 07:00:13 crc kubenswrapper[4681]: I1123 07:00:13.362934 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v9p7\" (UniqueName: \"kubernetes.io/projected/03d2f9d1-c437-447b-a2f4-c2994aad12ee-kube-api-access-7v9p7\") pod \"openstackclient\" (UID: \"03d2f9d1-c437-447b-a2f4-c2994aad12ee\") " 
pod="openstack/openstackclient" Nov 23 07:00:13 crc kubenswrapper[4681]: I1123 07:00:13.408674 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 23 07:00:13 crc kubenswrapper[4681]: I1123 07:00:13.941192 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 23 07:00:14 crc kubenswrapper[4681]: I1123 07:00:14.141492 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"03d2f9d1-c437-447b-a2f4-c2994aad12ee","Type":"ContainerStarted","Data":"881f414df62b3a19ac6c63856d62ca20f23f1d878b5243054caa70954be1f99f"} Nov 23 07:00:14 crc kubenswrapper[4681]: I1123 07:00:14.145024 4681 generic.go:334] "Generic (PLEG): container finished" podID="d8405966-0c4a-42eb-bed4-6f6ae19bff63" containerID="80c7e6b21d06301279a5de5f898c444b34afc1fb55df414c1149abb3b481fc45" exitCode=0 Nov 23 07:00:14 crc kubenswrapper[4681]: I1123 07:00:14.145055 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d8405966-0c4a-42eb-bed4-6f6ae19bff63","Type":"ContainerDied","Data":"80c7e6b21d06301279a5de5f898c444b34afc1fb55df414c1149abb3b481fc45"} Nov 23 07:00:14 crc kubenswrapper[4681]: I1123 07:00:14.523061 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 07:00:14 crc kubenswrapper[4681]: I1123 07:00:14.680456 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8405966-0c4a-42eb-bed4-6f6ae19bff63-scripts\") pod \"d8405966-0c4a-42eb-bed4-6f6ae19bff63\" (UID: \"d8405966-0c4a-42eb-bed4-6f6ae19bff63\") " Nov 23 07:00:14 crc kubenswrapper[4681]: I1123 07:00:14.680514 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8405966-0c4a-42eb-bed4-6f6ae19bff63-config-data\") pod \"d8405966-0c4a-42eb-bed4-6f6ae19bff63\" (UID: \"d8405966-0c4a-42eb-bed4-6f6ae19bff63\") " Nov 23 07:00:14 crc kubenswrapper[4681]: I1123 07:00:14.680547 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d8405966-0c4a-42eb-bed4-6f6ae19bff63-etc-machine-id\") pod \"d8405966-0c4a-42eb-bed4-6f6ae19bff63\" (UID: \"d8405966-0c4a-42eb-bed4-6f6ae19bff63\") " Nov 23 07:00:14 crc kubenswrapper[4681]: I1123 07:00:14.680571 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8405966-0c4a-42eb-bed4-6f6ae19bff63-config-data-custom\") pod \"d8405966-0c4a-42eb-bed4-6f6ae19bff63\" (UID: \"d8405966-0c4a-42eb-bed4-6f6ae19bff63\") " Nov 23 07:00:14 crc kubenswrapper[4681]: I1123 07:00:14.680622 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqtd8\" (UniqueName: \"kubernetes.io/projected/d8405966-0c4a-42eb-bed4-6f6ae19bff63-kube-api-access-nqtd8\") pod \"d8405966-0c4a-42eb-bed4-6f6ae19bff63\" (UID: \"d8405966-0c4a-42eb-bed4-6f6ae19bff63\") " Nov 23 07:00:14 crc kubenswrapper[4681]: I1123 07:00:14.680693 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8405966-0c4a-42eb-bed4-6f6ae19bff63-combined-ca-bundle\") pod \"d8405966-0c4a-42eb-bed4-6f6ae19bff63\" (UID: \"d8405966-0c4a-42eb-bed4-6f6ae19bff63\") " Nov 23 07:00:14 crc 
kubenswrapper[4681]: I1123 07:00:14.681211 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8405966-0c4a-42eb-bed4-6f6ae19bff63-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d8405966-0c4a-42eb-bed4-6f6ae19bff63" (UID: "d8405966-0c4a-42eb-bed4-6f6ae19bff63"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:00:14 crc kubenswrapper[4681]: I1123 07:00:14.688014 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8405966-0c4a-42eb-bed4-6f6ae19bff63-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d8405966-0c4a-42eb-bed4-6f6ae19bff63" (UID: "d8405966-0c4a-42eb-bed4-6f6ae19bff63"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:14 crc kubenswrapper[4681]: I1123 07:00:14.691176 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8405966-0c4a-42eb-bed4-6f6ae19bff63-scripts" (OuterVolumeSpecName: "scripts") pod "d8405966-0c4a-42eb-bed4-6f6ae19bff63" (UID: "d8405966-0c4a-42eb-bed4-6f6ae19bff63"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:14 crc kubenswrapper[4681]: I1123 07:00:14.703861 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8405966-0c4a-42eb-bed4-6f6ae19bff63-kube-api-access-nqtd8" (OuterVolumeSpecName: "kube-api-access-nqtd8") pod "d8405966-0c4a-42eb-bed4-6f6ae19bff63" (UID: "d8405966-0c4a-42eb-bed4-6f6ae19bff63"). InnerVolumeSpecName "kube-api-access-nqtd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:00:14 crc kubenswrapper[4681]: I1123 07:00:14.747517 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8405966-0c4a-42eb-bed4-6f6ae19bff63-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8405966-0c4a-42eb-bed4-6f6ae19bff63" (UID: "d8405966-0c4a-42eb-bed4-6f6ae19bff63"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:14 crc kubenswrapper[4681]: I1123 07:00:14.783310 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8405966-0c4a-42eb-bed4-6f6ae19bff63-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:14 crc kubenswrapper[4681]: I1123 07:00:14.783488 4681 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d8405966-0c4a-42eb-bed4-6f6ae19bff63-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:14 crc kubenswrapper[4681]: I1123 07:00:14.783553 4681 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8405966-0c4a-42eb-bed4-6f6ae19bff63-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:14 crc kubenswrapper[4681]: I1123 07:00:14.783603 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqtd8\" (UniqueName: \"kubernetes.io/projected/d8405966-0c4a-42eb-bed4-6f6ae19bff63-kube-api-access-nqtd8\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:14 crc kubenswrapper[4681]: I1123 07:00:14.783664 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8405966-0c4a-42eb-bed4-6f6ae19bff63-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:14 crc kubenswrapper[4681]: I1123 07:00:14.800955 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8405966-0c4a-42eb-bed4-6f6ae19bff63-config-data" (OuterVolumeSpecName: "config-data") pod "d8405966-0c4a-42eb-bed4-6f6ae19bff63" (UID: "d8405966-0c4a-42eb-bed4-6f6ae19bff63"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:14 crc kubenswrapper[4681]: I1123 07:00:14.887004 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8405966-0c4a-42eb-bed4-6f6ae19bff63-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.159029 4681 generic.go:334] "Generic (PLEG): container finished" podID="d8405966-0c4a-42eb-bed4-6f6ae19bff63" containerID="a66d39ba6a1af1c7783dca3a4f717993233ecf7a83ce80886ddb0c86398a2ab0" exitCode=0 Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.159082 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d8405966-0c4a-42eb-bed4-6f6ae19bff63","Type":"ContainerDied","Data":"a66d39ba6a1af1c7783dca3a4f717993233ecf7a83ce80886ddb0c86398a2ab0"} Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.159123 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d8405966-0c4a-42eb-bed4-6f6ae19bff63","Type":"ContainerDied","Data":"0150ea11abf5cebed2f2443922f278ad975507f1038835613a985212f6c257b5"} Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.159132 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.159145 4681 scope.go:117] "RemoveContainer" containerID="80c7e6b21d06301279a5de5f898c444b34afc1fb55df414c1149abb3b481fc45" Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.189658 4681 scope.go:117] "RemoveContainer" containerID="a66d39ba6a1af1c7783dca3a4f717993233ecf7a83ce80886ddb0c86398a2ab0" Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.190519 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.210776 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.228518 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 07:00:15 crc kubenswrapper[4681]: E1123 07:00:15.229030 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8405966-0c4a-42eb-bed4-6f6ae19bff63" containerName="probe" Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.229044 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8405966-0c4a-42eb-bed4-6f6ae19bff63" containerName="probe" Nov 23 07:00:15 crc kubenswrapper[4681]: E1123 07:00:15.229061 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8405966-0c4a-42eb-bed4-6f6ae19bff63" containerName="cinder-scheduler" Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.229067 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8405966-0c4a-42eb-bed4-6f6ae19bff63" containerName="cinder-scheduler" Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.229263 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8405966-0c4a-42eb-bed4-6f6ae19bff63" containerName="cinder-scheduler" Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.229283 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8405966-0c4a-42eb-bed4-6f6ae19bff63" containerName="probe" Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.230371 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.234250 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.239825 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.253424 4681 scope.go:117] "RemoveContainer" containerID="80c7e6b21d06301279a5de5f898c444b34afc1fb55df414c1149abb3b481fc45" Nov 23 07:00:15 crc kubenswrapper[4681]: E1123 07:00:15.259599 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80c7e6b21d06301279a5de5f898c444b34afc1fb55df414c1149abb3b481fc45\": container with ID starting with 80c7e6b21d06301279a5de5f898c444b34afc1fb55df414c1149abb3b481fc45 not found: ID does not exist" containerID="80c7e6b21d06301279a5de5f898c444b34afc1fb55df414c1149abb3b481fc45" Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.259638 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80c7e6b21d06301279a5de5f898c444b34afc1fb55df414c1149abb3b481fc45"} err="failed to get container status \"80c7e6b21d06301279a5de5f898c444b34afc1fb55df414c1149abb3b481fc45\": rpc error: code = NotFound desc = could not find container \"80c7e6b21d06301279a5de5f898c444b34afc1fb55df414c1149abb3b481fc45\": container with ID starting with 80c7e6b21d06301279a5de5f898c444b34afc1fb55df414c1149abb3b481fc45 not found: ID does not exist" Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.259670 4681 scope.go:117] "RemoveContainer" containerID="a66d39ba6a1af1c7783dca3a4f717993233ecf7a83ce80886ddb0c86398a2ab0" Nov 23 07:00:15 crc kubenswrapper[4681]: E1123 07:00:15.261267 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a66d39ba6a1af1c7783dca3a4f717993233ecf7a83ce80886ddb0c86398a2ab0\": container with ID starting with a66d39ba6a1af1c7783dca3a4f717993233ecf7a83ce80886ddb0c86398a2ab0 not found: ID does not exist" containerID="a66d39ba6a1af1c7783dca3a4f717993233ecf7a83ce80886ddb0c86398a2ab0" Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.261322 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a66d39ba6a1af1c7783dca3a4f717993233ecf7a83ce80886ddb0c86398a2ab0"} err="failed to get container status \"a66d39ba6a1af1c7783dca3a4f717993233ecf7a83ce80886ddb0c86398a2ab0\": rpc error: code = NotFound desc = could not find container \"a66d39ba6a1af1c7783dca3a4f717993233ecf7a83ce80886ddb0c86398a2ab0\": container with ID starting with a66d39ba6a1af1c7783dca3a4f717993233ecf7a83ce80886ddb0c86398a2ab0 not found: ID does not exist" Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.296587 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8405966-0c4a-42eb-bed4-6f6ae19bff63" path="/var/lib/kubelet/pods/d8405966-0c4a-42eb-bed4-6f6ae19bff63/volumes" Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.399701 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/869f5a36-2097-42b1-baa4-b641a0da959a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"869f5a36-2097-42b1-baa4-b641a0da959a\") " pod="openstack/cinder-scheduler-0" Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 
07:00:15.400148 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/869f5a36-2097-42b1-baa4-b641a0da959a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"869f5a36-2097-42b1-baa4-b641a0da959a\") " pod="openstack/cinder-scheduler-0" Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.400194 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/869f5a36-2097-42b1-baa4-b641a0da959a-config-data\") pod \"cinder-scheduler-0\" (UID: \"869f5a36-2097-42b1-baa4-b641a0da959a\") " pod="openstack/cinder-scheduler-0" Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.400218 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/869f5a36-2097-42b1-baa4-b641a0da959a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"869f5a36-2097-42b1-baa4-b641a0da959a\") " pod="openstack/cinder-scheduler-0" Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.400244 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/869f5a36-2097-42b1-baa4-b641a0da959a-scripts\") pod \"cinder-scheduler-0\" (UID: \"869f5a36-2097-42b1-baa4-b641a0da959a\") " pod="openstack/cinder-scheduler-0" Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.400744 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh7nm\" (UniqueName: \"kubernetes.io/projected/869f5a36-2097-42b1-baa4-b641a0da959a-kube-api-access-vh7nm\") pod \"cinder-scheduler-0\" (UID: \"869f5a36-2097-42b1-baa4-b641a0da959a\") " pod="openstack/cinder-scheduler-0" Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.502941 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh7nm\" (UniqueName: \"kubernetes.io/projected/869f5a36-2097-42b1-baa4-b641a0da959a-kube-api-access-vh7nm\") pod \"cinder-scheduler-0\" (UID: \"869f5a36-2097-42b1-baa4-b641a0da959a\") " pod="openstack/cinder-scheduler-0" Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.503085 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/869f5a36-2097-42b1-baa4-b641a0da959a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"869f5a36-2097-42b1-baa4-b641a0da959a\") " pod="openstack/cinder-scheduler-0" Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.503154 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/869f5a36-2097-42b1-baa4-b641a0da959a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"869f5a36-2097-42b1-baa4-b641a0da959a\") " pod="openstack/cinder-scheduler-0" Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.503194 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/869f5a36-2097-42b1-baa4-b641a0da959a-config-data\") pod \"cinder-scheduler-0\" (UID: \"869f5a36-2097-42b1-baa4-b641a0da959a\") " pod="openstack/cinder-scheduler-0" Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.503214 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/869f5a36-2097-42b1-baa4-b641a0da959a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"869f5a36-2097-42b1-baa4-b641a0da959a\") " pod="openstack/cinder-scheduler-0" Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.503242 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/869f5a36-2097-42b1-baa4-b641a0da959a-scripts\") pod \"cinder-scheduler-0\" (UID: \"869f5a36-2097-42b1-baa4-b641a0da959a\") " pod="openstack/cinder-scheduler-0" Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.503332 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/869f5a36-2097-42b1-baa4-b641a0da959a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"869f5a36-2097-42b1-baa4-b641a0da959a\") " pod="openstack/cinder-scheduler-0" Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.518230 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/869f5a36-2097-42b1-baa4-b641a0da959a-scripts\") pod \"cinder-scheduler-0\" (UID: \"869f5a36-2097-42b1-baa4-b641a0da959a\") " pod="openstack/cinder-scheduler-0" Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.526935 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh7nm\" (UniqueName: \"kubernetes.io/projected/869f5a36-2097-42b1-baa4-b641a0da959a-kube-api-access-vh7nm\") pod \"cinder-scheduler-0\" (UID: \"869f5a36-2097-42b1-baa4-b641a0da959a\") " pod="openstack/cinder-scheduler-0" Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.527729 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/869f5a36-2097-42b1-baa4-b641a0da959a-config-data\") pod \"cinder-scheduler-0\" (UID: \"869f5a36-2097-42b1-baa4-b641a0da959a\") " pod="openstack/cinder-scheduler-0" Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.528091 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/869f5a36-2097-42b1-baa4-b641a0da959a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"869f5a36-2097-42b1-baa4-b641a0da959a\") " pod="openstack/cinder-scheduler-0" Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.528547 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/869f5a36-2097-42b1-baa4-b641a0da959a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"869f5a36-2097-42b1-baa4-b641a0da959a\") " pod="openstack/cinder-scheduler-0" Nov 23 07:00:15 crc kubenswrapper[4681]: I1123 07:00:15.560180 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 07:00:16 crc kubenswrapper[4681]: I1123 07:00:16.048455 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 07:00:16 crc kubenswrapper[4681]: W1123 07:00:16.058297 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod869f5a36_2097_42b1_baa4_b641a0da959a.slice/crio-abc4b902ef6b26d470edb0b10ffd1ff495cd34fb258f454835c3a0c8e57128af WatchSource:0}: Error finding container abc4b902ef6b26d470edb0b10ffd1ff495cd34fb258f454835c3a0c8e57128af: Status 404 returned error can't find the container with id abc4b902ef6b26d470edb0b10ffd1ff495cd34fb258f454835c3a0c8e57128af Nov 23 07:00:16 crc kubenswrapper[4681]: I1123 07:00:16.173781 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"869f5a36-2097-42b1-baa4-b641a0da959a","Type":"ContainerStarted","Data":"abc4b902ef6b26d470edb0b10ffd1ff495cd34fb258f454835c3a0c8e57128af"} Nov 23 07:00:16 crc kubenswrapper[4681]: I1123 07:00:16.542869 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 23 07:00:16 crc kubenswrapper[4681]: I1123 07:00:16.646192 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6bb6dddd54-bttkq" podUID="97338e7f-0f80-4f47-905f-59df8aef837b" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 07:00:16 crc kubenswrapper[4681]: I1123 07:00:16.646578 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6bb6dddd54-bttkq" podUID="97338e7f-0f80-4f47-905f-59df8aef837b" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 07:00:17 crc kubenswrapper[4681]: I1123 07:00:17.048001 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6bb6dddd54-bttkq" podUID="97338e7f-0f80-4f47-905f-59df8aef837b" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": read tcp 10.217.0.2:54878->10.217.0.167:9311: read: connection reset by peer" Nov 23 07:00:17 crc kubenswrapper[4681]: I1123 07:00:17.048611 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/barbican-api-6bb6dddd54-bttkq" Nov 23 07:00:17 crc kubenswrapper[4681]: I1123 07:00:17.048075 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6bb6dddd54-bttkq" podUID="97338e7f-0f80-4f47-905f-59df8aef837b" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": read tcp 10.217.0.2:54858->10.217.0.167:9311: read: connection reset by peer" Nov 23 07:00:17 crc kubenswrapper[4681]: I1123 07:00:17.048008 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6bb6dddd54-bttkq" podUID="97338e7f-0f80-4f47-905f-59df8aef837b" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": read tcp 10.217.0.2:54872->10.217.0.167:9311: read: connection reset by peer" Nov 23 07:00:17 crc kubenswrapper[4681]: I1123 07:00:17.315850 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"869f5a36-2097-42b1-baa4-b641a0da959a","Type":"ContainerStarted","Data":"28958714e2b1f8da714884ecb851b2b7bf9c8f151dcbf2072558557b574e3b05"} Nov 23 07:00:17 crc kubenswrapper[4681]: I1123 07:00:17.319556 4681 generic.go:334] "Generic (PLEG): container finished" podID="97338e7f-0f80-4f47-905f-59df8aef837b" containerID="686d1e2adbe17aaea30b12fb22e427bff70017300747d7a15e4b5854e1e362bd" exitCode=0 Nov 23 07:00:17 crc kubenswrapper[4681]: I1123 07:00:17.319598 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bb6dddd54-bttkq" event={"ID":"97338e7f-0f80-4f47-905f-59df8aef837b","Type":"ContainerDied","Data":"686d1e2adbe17aaea30b12fb22e427bff70017300747d7a15e4b5854e1e362bd"} Nov 23 07:00:17 crc kubenswrapper[4681]: I1123 07:00:17.850863 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6bb6dddd54-bttkq" Nov 23 07:00:17 crc kubenswrapper[4681]: I1123 07:00:17.892157 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97338e7f-0f80-4f47-905f-59df8aef837b-logs\") pod \"97338e7f-0f80-4f47-905f-59df8aef837b\" (UID: \"97338e7f-0f80-4f47-905f-59df8aef837b\") " Nov 23 07:00:17 crc kubenswrapper[4681]: I1123 07:00:17.892215 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pt5w\" (UniqueName: \"kubernetes.io/projected/97338e7f-0f80-4f47-905f-59df8aef837b-kube-api-access-7pt5w\") pod \"97338e7f-0f80-4f47-905f-59df8aef837b\" (UID: \"97338e7f-0f80-4f47-905f-59df8aef837b\") " Nov 23 07:00:17 crc kubenswrapper[4681]: I1123 07:00:17.892248 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97338e7f-0f80-4f47-905f-59df8aef837b-config-data\") pod \"97338e7f-0f80-4f47-905f-59df8aef837b\" (UID: \"97338e7f-0f80-4f47-905f-59df8aef837b\") " Nov 23 07:00:17 crc kubenswrapper[4681]: I1123 07:00:17.892274 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97338e7f-0f80-4f47-905f-59df8aef837b-combined-ca-bundle\") pod \"97338e7f-0f80-4f47-905f-59df8aef837b\" (UID: \"97338e7f-0f80-4f47-905f-59df8aef837b\") " Nov 23 07:00:17 crc kubenswrapper[4681]: I1123 07:00:17.892343 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97338e7f-0f80-4f47-905f-59df8aef837b-config-data-custom\") pod \"97338e7f-0f80-4f47-905f-59df8aef837b\" (UID: \"97338e7f-0f80-4f47-905f-59df8aef837b\") " Nov 23 07:00:17 crc kubenswrapper[4681]: I1123 07:00:17.893529 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97338e7f-0f80-4f47-905f-59df8aef837b-logs" (OuterVolumeSpecName: "logs") pod "97338e7f-0f80-4f47-905f-59df8aef837b" (UID: "97338e7f-0f80-4f47-905f-59df8aef837b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:00:17 crc kubenswrapper[4681]: I1123 07:00:17.902092 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97338e7f-0f80-4f47-905f-59df8aef837b-kube-api-access-7pt5w" (OuterVolumeSpecName: "kube-api-access-7pt5w") pod "97338e7f-0f80-4f47-905f-59df8aef837b" (UID: "97338e7f-0f80-4f47-905f-59df8aef837b"). InnerVolumeSpecName "kube-api-access-7pt5w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:00:17 crc kubenswrapper[4681]: I1123 07:00:17.909620 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97338e7f-0f80-4f47-905f-59df8aef837b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "97338e7f-0f80-4f47-905f-59df8aef837b" (UID: "97338e7f-0f80-4f47-905f-59df8aef837b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:17 crc kubenswrapper[4681]: I1123 07:00:17.937305 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97338e7f-0f80-4f47-905f-59df8aef837b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97338e7f-0f80-4f47-905f-59df8aef837b" (UID: "97338e7f-0f80-4f47-905f-59df8aef837b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:17 crc kubenswrapper[4681]: I1123 07:00:17.959604 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97338e7f-0f80-4f47-905f-59df8aef837b-config-data" (OuterVolumeSpecName: "config-data") pod "97338e7f-0f80-4f47-905f-59df8aef837b" (UID: "97338e7f-0f80-4f47-905f-59df8aef837b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:17 crc kubenswrapper[4681]: I1123 07:00:17.993637 4681 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97338e7f-0f80-4f47-905f-59df8aef837b-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:17 crc kubenswrapper[4681]: I1123 07:00:17.993662 4681 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97338e7f-0f80-4f47-905f-59df8aef837b-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:17 crc kubenswrapper[4681]: I1123 07:00:17.993672 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pt5w\" (UniqueName: \"kubernetes.io/projected/97338e7f-0f80-4f47-905f-59df8aef837b-kube-api-access-7pt5w\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:17 crc kubenswrapper[4681]: I1123 07:00:17.993682 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97338e7f-0f80-4f47-905f-59df8aef837b-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:17 crc kubenswrapper[4681]: I1123 07:00:17.993690 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97338e7f-0f80-4f47-905f-59df8aef837b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:18 crc kubenswrapper[4681]: I1123 07:00:18.330113 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"869f5a36-2097-42b1-baa4-b641a0da959a","Type":"ContainerStarted","Data":"f078c2d0f6d09016e87a79b2d528770d4064bea71f30c8575fedffe487b8069d"} Nov 23 07:00:18 crc kubenswrapper[4681]: I1123 07:00:18.333923 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bb6dddd54-bttkq" event={"ID":"97338e7f-0f80-4f47-905f-59df8aef837b","Type":"ContainerDied","Data":"3ff02afb8eae34792434ff80f493a342af21818e36b1fa8b0a85c80d7936bfe9"} Nov 23 07:00:18 crc kubenswrapper[4681]: I1123 07:00:18.333970 4681 scope.go:117] "RemoveContainer" containerID="686d1e2adbe17aaea30b12fb22e427bff70017300747d7a15e4b5854e1e362bd" Nov 23 07:00:18 crc kubenswrapper[4681]: 
I1123 07:00:18.334058 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6bb6dddd54-bttkq" Nov 23 07:00:18 crc kubenswrapper[4681]: I1123 07:00:18.355022 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.355002498 podStartE2EDuration="3.355002498s" podCreationTimestamp="2025-11-23 07:00:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:00:18.349726019 +0000 UTC m=+955.419235246" watchObservedRunningTime="2025-11-23 07:00:18.355002498 +0000 UTC m=+955.424511735" Nov 23 07:00:18 crc kubenswrapper[4681]: I1123 07:00:18.373731 4681 scope.go:117] "RemoveContainer" containerID="3633fb223f3e68fe2602637082e2017e270a3ef6347d5aee46024a89ba1c39db" Nov 23 07:00:18 crc kubenswrapper[4681]: I1123 07:00:18.376263 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6bb6dddd54-bttkq"] Nov 23 07:00:18 crc kubenswrapper[4681]: I1123 07:00:18.381699 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6bb6dddd54-bttkq"] Nov 23 07:00:19 crc kubenswrapper[4681]: I1123 07:00:19.260705 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97338e7f-0f80-4f47-905f-59df8aef837b" path="/var/lib/kubelet/pods/97338e7f-0f80-4f47-905f-59df8aef837b/volumes" Nov 23 07:00:19 crc kubenswrapper[4681]: I1123 07:00:19.569059 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-75b4b57dcf-bqmc5"] Nov 23 07:00:19 crc kubenswrapper[4681]: E1123 07:00:19.569551 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97338e7f-0f80-4f47-905f-59df8aef837b" containerName="barbican-api" Nov 23 07:00:19 crc kubenswrapper[4681]: I1123 07:00:19.569570 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="97338e7f-0f80-4f47-905f-59df8aef837b" containerName="barbican-api" Nov 23 07:00:19 crc kubenswrapper[4681]: E1123 07:00:19.569583 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97338e7f-0f80-4f47-905f-59df8aef837b" containerName="barbican-api-log" Nov 23 07:00:19 crc kubenswrapper[4681]: I1123 07:00:19.569590 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="97338e7f-0f80-4f47-905f-59df8aef837b" containerName="barbican-api-log" Nov 23 07:00:19 crc kubenswrapper[4681]: I1123 07:00:19.569763 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="97338e7f-0f80-4f47-905f-59df8aef837b" containerName="barbican-api" Nov 23 07:00:19 crc kubenswrapper[4681]: I1123 07:00:19.569784 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="97338e7f-0f80-4f47-905f-59df8aef837b" containerName="barbican-api-log" Nov 23 07:00:19 crc kubenswrapper[4681]: I1123 07:00:19.570824 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-75b4b57dcf-bqmc5" Nov 23 07:00:19 crc kubenswrapper[4681]: I1123 07:00:19.574407 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 23 07:00:19 crc kubenswrapper[4681]: I1123 07:00:19.574742 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 23 07:00:19 crc kubenswrapper[4681]: I1123 07:00:19.580883 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 23 07:00:19 crc kubenswrapper[4681]: I1123 07:00:19.607095 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-75b4b57dcf-bqmc5"] Nov 23 07:00:19 crc kubenswrapper[4681]: I1123 07:00:19.625039 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91ec0b0d-3fb3-4710-8be4-acb8bb895d42-log-httpd\") pod \"swift-proxy-75b4b57dcf-bqmc5\" (UID: \"91ec0b0d-3fb3-4710-8be4-acb8bb895d42\") " pod="openstack/swift-proxy-75b4b57dcf-bqmc5" Nov 23 07:00:19 crc kubenswrapper[4681]: I1123 07:00:19.625195 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91ec0b0d-3fb3-4710-8be4-acb8bb895d42-combined-ca-bundle\") pod \"swift-proxy-75b4b57dcf-bqmc5\" (UID: \"91ec0b0d-3fb3-4710-8be4-acb8bb895d42\") " pod="openstack/swift-proxy-75b4b57dcf-bqmc5" Nov 23 07:00:19 crc kubenswrapper[4681]: I1123 07:00:19.625302 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91ec0b0d-3fb3-4710-8be4-acb8bb895d42-run-httpd\") pod \"swift-proxy-75b4b57dcf-bqmc5\" (UID: \"91ec0b0d-3fb3-4710-8be4-acb8bb895d42\") " pod="openstack/swift-proxy-75b4b57dcf-bqmc5" Nov 23 07:00:19 crc kubenswrapper[4681]: I1123 07:00:19.625385 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/91ec0b0d-3fb3-4710-8be4-acb8bb895d42-etc-swift\") pod \"swift-proxy-75b4b57dcf-bqmc5\" (UID: \"91ec0b0d-3fb3-4710-8be4-acb8bb895d42\") " pod="openstack/swift-proxy-75b4b57dcf-bqmc5" Nov 23 07:00:19 crc kubenswrapper[4681]: I1123 07:00:19.625565 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91ec0b0d-3fb3-4710-8be4-acb8bb895d42-internal-tls-certs\") pod \"swift-proxy-75b4b57dcf-bqmc5\" (UID: \"91ec0b0d-3fb3-4710-8be4-acb8bb895d42\") " pod="openstack/swift-proxy-75b4b57dcf-bqmc5" Nov 23 07:00:19 crc kubenswrapper[4681]: I1123 07:00:19.625649 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh2nz\" (UniqueName: \"kubernetes.io/projected/91ec0b0d-3fb3-4710-8be4-acb8bb895d42-kube-api-access-hh2nz\") pod \"swift-proxy-75b4b57dcf-bqmc5\" (UID: \"91ec0b0d-3fb3-4710-8be4-acb8bb895d42\") " pod="openstack/swift-proxy-75b4b57dcf-bqmc5" Nov 23 07:00:19 crc kubenswrapper[4681]: I1123 07:00:19.625842 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91ec0b0d-3fb3-4710-8be4-acb8bb895d42-public-tls-certs\") pod \"swift-proxy-75b4b57dcf-bqmc5\" (UID: \"91ec0b0d-3fb3-4710-8be4-acb8bb895d42\") " 
pod="openstack/swift-proxy-75b4b57dcf-bqmc5" Nov 23 07:00:19 crc kubenswrapper[4681]: I1123 07:00:19.625987 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91ec0b0d-3fb3-4710-8be4-acb8bb895d42-config-data\") pod \"swift-proxy-75b4b57dcf-bqmc5\" (UID: \"91ec0b0d-3fb3-4710-8be4-acb8bb895d42\") " pod="openstack/swift-proxy-75b4b57dcf-bqmc5" Nov 23 07:00:19 crc kubenswrapper[4681]: I1123 07:00:19.728352 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91ec0b0d-3fb3-4710-8be4-acb8bb895d42-log-httpd\") pod \"swift-proxy-75b4b57dcf-bqmc5\" (UID: \"91ec0b0d-3fb3-4710-8be4-acb8bb895d42\") " pod="openstack/swift-proxy-75b4b57dcf-bqmc5" Nov 23 07:00:19 crc kubenswrapper[4681]: I1123 07:00:19.728402 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91ec0b0d-3fb3-4710-8be4-acb8bb895d42-combined-ca-bundle\") pod \"swift-proxy-75b4b57dcf-bqmc5\" (UID: \"91ec0b0d-3fb3-4710-8be4-acb8bb895d42\") " pod="openstack/swift-proxy-75b4b57dcf-bqmc5" Nov 23 07:00:19 crc kubenswrapper[4681]: I1123 07:00:19.728447 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91ec0b0d-3fb3-4710-8be4-acb8bb895d42-run-httpd\") pod \"swift-proxy-75b4b57dcf-bqmc5\" (UID: \"91ec0b0d-3fb3-4710-8be4-acb8bb895d42\") " pod="openstack/swift-proxy-75b4b57dcf-bqmc5" Nov 23 07:00:19 crc kubenswrapper[4681]: I1123 07:00:19.728494 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/91ec0b0d-3fb3-4710-8be4-acb8bb895d42-etc-swift\") pod \"swift-proxy-75b4b57dcf-bqmc5\" (UID: \"91ec0b0d-3fb3-4710-8be4-acb8bb895d42\") " pod="openstack/swift-proxy-75b4b57dcf-bqmc5" Nov 23 07:00:19 crc kubenswrapper[4681]: I1123 07:00:19.728520 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91ec0b0d-3fb3-4710-8be4-acb8bb895d42-internal-tls-certs\") pod \"swift-proxy-75b4b57dcf-bqmc5\" (UID: \"91ec0b0d-3fb3-4710-8be4-acb8bb895d42\") " pod="openstack/swift-proxy-75b4b57dcf-bqmc5" Nov 23 07:00:19 crc kubenswrapper[4681]: I1123 07:00:19.728545 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh2nz\" (UniqueName: \"kubernetes.io/projected/91ec0b0d-3fb3-4710-8be4-acb8bb895d42-kube-api-access-hh2nz\") pod \"swift-proxy-75b4b57dcf-bqmc5\" (UID: \"91ec0b0d-3fb3-4710-8be4-acb8bb895d42\") " pod="openstack/swift-proxy-75b4b57dcf-bqmc5" Nov 23 07:00:19 crc kubenswrapper[4681]: I1123 07:00:19.728584 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91ec0b0d-3fb3-4710-8be4-acb8bb895d42-public-tls-certs\") pod \"swift-proxy-75b4b57dcf-bqmc5\" (UID: \"91ec0b0d-3fb3-4710-8be4-acb8bb895d42\") " pod="openstack/swift-proxy-75b4b57dcf-bqmc5" Nov 23 07:00:19 crc kubenswrapper[4681]: I1123 07:00:19.728616 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91ec0b0d-3fb3-4710-8be4-acb8bb895d42-config-data\") pod \"swift-proxy-75b4b57dcf-bqmc5\" (UID: \"91ec0b0d-3fb3-4710-8be4-acb8bb895d42\") " pod="openstack/swift-proxy-75b4b57dcf-bqmc5" Nov 23 
07:00:19 crc kubenswrapper[4681]: I1123 07:00:19.729194 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91ec0b0d-3fb3-4710-8be4-acb8bb895d42-run-httpd\") pod \"swift-proxy-75b4b57dcf-bqmc5\" (UID: \"91ec0b0d-3fb3-4710-8be4-acb8bb895d42\") " pod="openstack/swift-proxy-75b4b57dcf-bqmc5" Nov 23 07:00:19 crc kubenswrapper[4681]: I1123 07:00:19.729771 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91ec0b0d-3fb3-4710-8be4-acb8bb895d42-log-httpd\") pod \"swift-proxy-75b4b57dcf-bqmc5\" (UID: \"91ec0b0d-3fb3-4710-8be4-acb8bb895d42\") " pod="openstack/swift-proxy-75b4b57dcf-bqmc5" Nov 23 07:00:19 crc kubenswrapper[4681]: I1123 07:00:19.739855 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91ec0b0d-3fb3-4710-8be4-acb8bb895d42-public-tls-certs\") pod \"swift-proxy-75b4b57dcf-bqmc5\" (UID: \"91ec0b0d-3fb3-4710-8be4-acb8bb895d42\") " pod="openstack/swift-proxy-75b4b57dcf-bqmc5" Nov 23 07:00:19 crc kubenswrapper[4681]: I1123 07:00:19.740613 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91ec0b0d-3fb3-4710-8be4-acb8bb895d42-internal-tls-certs\") pod \"swift-proxy-75b4b57dcf-bqmc5\" (UID: \"91ec0b0d-3fb3-4710-8be4-acb8bb895d42\") " pod="openstack/swift-proxy-75b4b57dcf-bqmc5" Nov 23 07:00:19 crc kubenswrapper[4681]: I1123 07:00:19.740700 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91ec0b0d-3fb3-4710-8be4-acb8bb895d42-config-data\") pod \"swift-proxy-75b4b57dcf-bqmc5\" (UID: \"91ec0b0d-3fb3-4710-8be4-acb8bb895d42\") " pod="openstack/swift-proxy-75b4b57dcf-bqmc5" Nov 23 07:00:19 crc kubenswrapper[4681]: I1123 07:00:19.743041 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/91ec0b0d-3fb3-4710-8be4-acb8bb895d42-etc-swift\") pod \"swift-proxy-75b4b57dcf-bqmc5\" (UID: \"91ec0b0d-3fb3-4710-8be4-acb8bb895d42\") " pod="openstack/swift-proxy-75b4b57dcf-bqmc5" Nov 23 07:00:19 crc kubenswrapper[4681]: I1123 07:00:19.745757 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh2nz\" (UniqueName: \"kubernetes.io/projected/91ec0b0d-3fb3-4710-8be4-acb8bb895d42-kube-api-access-hh2nz\") pod \"swift-proxy-75b4b57dcf-bqmc5\" (UID: \"91ec0b0d-3fb3-4710-8be4-acb8bb895d42\") " pod="openstack/swift-proxy-75b4b57dcf-bqmc5" Nov 23 07:00:19 crc kubenswrapper[4681]: I1123 07:00:19.756123 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91ec0b0d-3fb3-4710-8be4-acb8bb895d42-combined-ca-bundle\") pod \"swift-proxy-75b4b57dcf-bqmc5\" (UID: \"91ec0b0d-3fb3-4710-8be4-acb8bb895d42\") " pod="openstack/swift-proxy-75b4b57dcf-bqmc5" Nov 23 07:00:19 crc kubenswrapper[4681]: I1123 07:00:19.900784 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-75b4b57dcf-bqmc5" Nov 23 07:00:20 crc kubenswrapper[4681]: I1123 07:00:20.503411 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-75b4b57dcf-bqmc5"] Nov 23 07:00:20 crc kubenswrapper[4681]: I1123 07:00:20.562215 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 23 07:00:21 crc kubenswrapper[4681]: I1123 07:00:21.365355 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-75b4b57dcf-bqmc5" event={"ID":"91ec0b0d-3fb3-4710-8be4-acb8bb895d42","Type":"ContainerStarted","Data":"7e0a2c51fc1adb454c2ac42d36c0d78dff01fdfeeff0b92925726ed2a1a1edb1"} Nov 23 07:00:21 crc kubenswrapper[4681]: I1123 07:00:21.365798 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-75b4b57dcf-bqmc5" Nov 23 07:00:21 crc kubenswrapper[4681]: I1123 07:00:21.365811 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-75b4b57dcf-bqmc5" event={"ID":"91ec0b0d-3fb3-4710-8be4-acb8bb895d42","Type":"ContainerStarted","Data":"6724576d61ef498a8a84b834cd04fe220b552bae373904ba9c571ddd3f656313"} Nov 23 07:00:21 crc kubenswrapper[4681]: I1123 07:00:21.365820 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-75b4b57dcf-bqmc5" event={"ID":"91ec0b0d-3fb3-4710-8be4-acb8bb895d42","Type":"ContainerStarted","Data":"da22eac0a1042d90364889dd71bcb35de09a38510c3c0f2fa5e908f8589f2f4c"} Nov 23 07:00:21 crc kubenswrapper[4681]: I1123 07:00:21.365830 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-75b4b57dcf-bqmc5" Nov 23 07:00:23 crc kubenswrapper[4681]: I1123 07:00:23.281065 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-75b4b57dcf-bqmc5" podStartSLOduration=4.281044141 podStartE2EDuration="4.281044141s" podCreationTimestamp="2025-11-23 07:00:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:00:21.384804381 +0000 UTC m=+958.454313618" watchObservedRunningTime="2025-11-23 07:00:23.281044141 +0000 UTC m=+960.350553378" Nov 23 07:00:23 crc kubenswrapper[4681]: I1123 07:00:23.948808 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:00:23 crc kubenswrapper[4681]: I1123 07:00:23.949647 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3267d91e-9fae-46d3-ba7e-4d06fbe83e00" containerName="ceilometer-central-agent" containerID="cri-o://3bf2a8092b8fd5f071dfa7a4fb0bf933e8c96c280e5d059baf03813d788a2d55" gracePeriod=30 Nov 23 07:00:23 crc kubenswrapper[4681]: I1123 07:00:23.949876 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3267d91e-9fae-46d3-ba7e-4d06fbe83e00" containerName="proxy-httpd" containerID="cri-o://f01edf17be5051ba24728ae15b0d1a97d56b1e8fe48e616e405b94c28281321f" gracePeriod=30 Nov 23 07:00:23 crc kubenswrapper[4681]: I1123 07:00:23.949937 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3267d91e-9fae-46d3-ba7e-4d06fbe83e00" containerName="sg-core" containerID="cri-o://49a1bc267ad6274abf7f72c92b89ebc5f76a11e2aee85273853d219601557b96" gracePeriod=30 Nov 23 07:00:23 crc kubenswrapper[4681]: I1123 07:00:23.949978 4681 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3267d91e-9fae-46d3-ba7e-4d06fbe83e00" containerName="ceilometer-notification-agent" containerID="cri-o://a9df8b55b03c2396f8f3416768057a7ea2f2364ed1c8a6aa803736aa3fa88f73" gracePeriod=30 Nov 23 07:00:23 crc kubenswrapper[4681]: I1123 07:00:23.977941 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="3267d91e-9fae-46d3-ba7e-4d06fbe83e00" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.173:3000/\": EOF" Nov 23 07:00:24 crc kubenswrapper[4681]: I1123 07:00:24.412158 4681 generic.go:334] "Generic (PLEG): container finished" podID="3267d91e-9fae-46d3-ba7e-4d06fbe83e00" containerID="f01edf17be5051ba24728ae15b0d1a97d56b1e8fe48e616e405b94c28281321f" exitCode=0 Nov 23 07:00:24 crc kubenswrapper[4681]: I1123 07:00:24.412569 4681 generic.go:334] "Generic (PLEG): container finished" podID="3267d91e-9fae-46d3-ba7e-4d06fbe83e00" containerID="49a1bc267ad6274abf7f72c92b89ebc5f76a11e2aee85273853d219601557b96" exitCode=2 Nov 23 07:00:24 crc kubenswrapper[4681]: I1123 07:00:24.412579 4681 generic.go:334] "Generic (PLEG): container finished" podID="3267d91e-9fae-46d3-ba7e-4d06fbe83e00" containerID="3bf2a8092b8fd5f071dfa7a4fb0bf933e8c96c280e5d059baf03813d788a2d55" exitCode=0 Nov 23 07:00:24 crc kubenswrapper[4681]: I1123 07:00:24.412314 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3267d91e-9fae-46d3-ba7e-4d06fbe83e00","Type":"ContainerDied","Data":"f01edf17be5051ba24728ae15b0d1a97d56b1e8fe48e616e405b94c28281321f"} Nov 23 07:00:24 crc kubenswrapper[4681]: I1123 07:00:24.412662 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3267d91e-9fae-46d3-ba7e-4d06fbe83e00","Type":"ContainerDied","Data":"49a1bc267ad6274abf7f72c92b89ebc5f76a11e2aee85273853d219601557b96"} Nov 23 07:00:24 crc kubenswrapper[4681]: I1123 07:00:24.412687 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3267d91e-9fae-46d3-ba7e-4d06fbe83e00","Type":"ContainerDied","Data":"3bf2a8092b8fd5f071dfa7a4fb0bf933e8c96c280e5d059baf03813d788a2d55"} Nov 23 07:00:25 crc kubenswrapper[4681]: I1123 07:00:25.429520 4681 generic.go:334] "Generic (PLEG): container finished" podID="3267d91e-9fae-46d3-ba7e-4d06fbe83e00" containerID="a9df8b55b03c2396f8f3416768057a7ea2f2364ed1c8a6aa803736aa3fa88f73" exitCode=0 Nov 23 07:00:25 crc kubenswrapper[4681]: I1123 07:00:25.429613 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3267d91e-9fae-46d3-ba7e-4d06fbe83e00","Type":"ContainerDied","Data":"a9df8b55b03c2396f8f3416768057a7ea2f2364ed1c8a6aa803736aa3fa88f73"} Nov 23 07:00:25 crc kubenswrapper[4681]: I1123 07:00:25.768930 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 23 07:00:27 crc kubenswrapper[4681]: I1123 07:00:27.156752 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 07:00:27 crc kubenswrapper[4681]: I1123 07:00:27.157244 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="48ecd863-12ce-4eb3-ba76-eea730db3b2d" containerName="glance-log" containerID="cri-o://97cf1e2c2dc5490b7dccaeb1542e6282ce33d29ac43281d21569cfed720f97eb" gracePeriod=30 Nov 23 07:00:27 crc kubenswrapper[4681]: I1123 07:00:27.157414 
4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="48ecd863-12ce-4eb3-ba76-eea730db3b2d" containerName="glance-httpd" containerID="cri-o://884e2a56b0230e733fad802ba25fca0312606baab59fc36a9a13c7175936d99a" gracePeriod=30 Nov 23 07:00:27 crc kubenswrapper[4681]: I1123 07:00:27.457685 4681 generic.go:334] "Generic (PLEG): container finished" podID="48ecd863-12ce-4eb3-ba76-eea730db3b2d" containerID="97cf1e2c2dc5490b7dccaeb1542e6282ce33d29ac43281d21569cfed720f97eb" exitCode=143 Nov 23 07:00:27 crc kubenswrapper[4681]: I1123 07:00:27.457736 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"48ecd863-12ce-4eb3-ba76-eea730db3b2d","Type":"ContainerDied","Data":"97cf1e2c2dc5490b7dccaeb1542e6282ce33d29ac43281d21569cfed720f97eb"} Nov 23 07:00:27 crc kubenswrapper[4681]: I1123 07:00:27.839699 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6bc44c9bc7-bkrp7"] Nov 23 07:00:27 crc kubenswrapper[4681]: I1123 07:00:27.841172 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6bc44c9bc7-bkrp7" Nov 23 07:00:27 crc kubenswrapper[4681]: I1123 07:00:27.847267 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Nov 23 07:00:27 crc kubenswrapper[4681]: I1123 07:00:27.848077 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-6jrks" Nov 23 07:00:27 crc kubenswrapper[4681]: I1123 07:00:27.848226 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 23 07:00:27 crc kubenswrapper[4681]: I1123 07:00:27.860433 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6bc44c9bc7-bkrp7"] Nov 23 07:00:27 crc kubenswrapper[4681]: I1123 07:00:27.921980 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0b6259f0-ca09-4fc2-bada-7d505bf1b5a1-config-data-custom\") pod \"heat-engine-6bc44c9bc7-bkrp7\" (UID: \"0b6259f0-ca09-4fc2-bada-7d505bf1b5a1\") " pod="openstack/heat-engine-6bc44c9bc7-bkrp7" Nov 23 07:00:27 crc kubenswrapper[4681]: I1123 07:00:27.922531 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b6259f0-ca09-4fc2-bada-7d505bf1b5a1-config-data\") pod \"heat-engine-6bc44c9bc7-bkrp7\" (UID: \"0b6259f0-ca09-4fc2-bada-7d505bf1b5a1\") " pod="openstack/heat-engine-6bc44c9bc7-bkrp7" Nov 23 07:00:27 crc kubenswrapper[4681]: I1123 07:00:27.922596 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b6259f0-ca09-4fc2-bada-7d505bf1b5a1-combined-ca-bundle\") pod \"heat-engine-6bc44c9bc7-bkrp7\" (UID: \"0b6259f0-ca09-4fc2-bada-7d505bf1b5a1\") " pod="openstack/heat-engine-6bc44c9bc7-bkrp7" Nov 23 07:00:27 crc kubenswrapper[4681]: I1123 07:00:27.922815 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtmtg\" (UniqueName: \"kubernetes.io/projected/0b6259f0-ca09-4fc2-bada-7d505bf1b5a1-kube-api-access-jtmtg\") pod \"heat-engine-6bc44c9bc7-bkrp7\" (UID: \"0b6259f0-ca09-4fc2-bada-7d505bf1b5a1\") " pod="openstack/heat-engine-6bc44c9bc7-bkrp7" Nov 23 07:00:27 crc 
kubenswrapper[4681]: I1123 07:00:27.971583 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-8569478495-vj5pz"] Nov 23 07:00:27 crc kubenswrapper[4681]: I1123 07:00:27.972672 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-8569478495-vj5pz" Nov 23 07:00:27 crc kubenswrapper[4681]: I1123 07:00:27.976220 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Nov 23 07:00:27 crc kubenswrapper[4681]: I1123 07:00:27.989658 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-8569478495-vj5pz"] Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.022789 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cfb689747-vscpn"] Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.024844 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cfb689747-vscpn" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.025493 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ct66\" (UniqueName: \"kubernetes.io/projected/87f9cbe6-025e-4880-9c22-f3f0c8373284-kube-api-access-7ct66\") pod \"heat-cfnapi-8569478495-vj5pz\" (UID: \"87f9cbe6-025e-4880-9c22-f3f0c8373284\") " pod="openstack/heat-cfnapi-8569478495-vj5pz" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.025537 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87f9cbe6-025e-4880-9c22-f3f0c8373284-config-data\") pod \"heat-cfnapi-8569478495-vj5pz\" (UID: \"87f9cbe6-025e-4880-9c22-f3f0c8373284\") " pod="openstack/heat-cfnapi-8569478495-vj5pz" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.025577 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0b6259f0-ca09-4fc2-bada-7d505bf1b5a1-config-data-custom\") pod \"heat-engine-6bc44c9bc7-bkrp7\" (UID: \"0b6259f0-ca09-4fc2-bada-7d505bf1b5a1\") " pod="openstack/heat-engine-6bc44c9bc7-bkrp7" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.025626 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87f9cbe6-025e-4880-9c22-f3f0c8373284-config-data-custom\") pod \"heat-cfnapi-8569478495-vj5pz\" (UID: \"87f9cbe6-025e-4880-9c22-f3f0c8373284\") " pod="openstack/heat-cfnapi-8569478495-vj5pz" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.025649 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b6259f0-ca09-4fc2-bada-7d505bf1b5a1-config-data\") pod \"heat-engine-6bc44c9bc7-bkrp7\" (UID: \"0b6259f0-ca09-4fc2-bada-7d505bf1b5a1\") " pod="openstack/heat-engine-6bc44c9bc7-bkrp7" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.025705 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b6259f0-ca09-4fc2-bada-7d505bf1b5a1-combined-ca-bundle\") pod \"heat-engine-6bc44c9bc7-bkrp7\" (UID: \"0b6259f0-ca09-4fc2-bada-7d505bf1b5a1\") " pod="openstack/heat-engine-6bc44c9bc7-bkrp7" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.025755 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-jtmtg\" (UniqueName: \"kubernetes.io/projected/0b6259f0-ca09-4fc2-bada-7d505bf1b5a1-kube-api-access-jtmtg\") pod \"heat-engine-6bc44c9bc7-bkrp7\" (UID: \"0b6259f0-ca09-4fc2-bada-7d505bf1b5a1\") " pod="openstack/heat-engine-6bc44c9bc7-bkrp7" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.025788 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87f9cbe6-025e-4880-9c22-f3f0c8373284-combined-ca-bundle\") pod \"heat-cfnapi-8569478495-vj5pz\" (UID: \"87f9cbe6-025e-4880-9c22-f3f0c8373284\") " pod="openstack/heat-cfnapi-8569478495-vj5pz" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.035901 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b6259f0-ca09-4fc2-bada-7d505bf1b5a1-combined-ca-bundle\") pod \"heat-engine-6bc44c9bc7-bkrp7\" (UID: \"0b6259f0-ca09-4fc2-bada-7d505bf1b5a1\") " pod="openstack/heat-engine-6bc44c9bc7-bkrp7" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.036183 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b6259f0-ca09-4fc2-bada-7d505bf1b5a1-config-data\") pod \"heat-engine-6bc44c9bc7-bkrp7\" (UID: \"0b6259f0-ca09-4fc2-bada-7d505bf1b5a1\") " pod="openstack/heat-engine-6bc44c9bc7-bkrp7" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.041414 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0b6259f0-ca09-4fc2-bada-7d505bf1b5a1-config-data-custom\") pod \"heat-engine-6bc44c9bc7-bkrp7\" (UID: \"0b6259f0-ca09-4fc2-bada-7d505bf1b5a1\") " pod="openstack/heat-engine-6bc44c9bc7-bkrp7" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.055557 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cfb689747-vscpn"] Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.092218 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtmtg\" (UniqueName: \"kubernetes.io/projected/0b6259f0-ca09-4fc2-bada-7d505bf1b5a1-kube-api-access-jtmtg\") pod \"heat-engine-6bc44c9bc7-bkrp7\" (UID: \"0b6259f0-ca09-4fc2-bada-7d505bf1b5a1\") " pod="openstack/heat-engine-6bc44c9bc7-bkrp7" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.106102 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5569487b4-2rc76"] Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.107503 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5569487b4-2rc76" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.109528 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5569487b4-2rc76"] Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.111956 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.127906 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ct66\" (UniqueName: \"kubernetes.io/projected/87f9cbe6-025e-4880-9c22-f3f0c8373284-kube-api-access-7ct66\") pod \"heat-cfnapi-8569478495-vj5pz\" (UID: \"87f9cbe6-025e-4880-9c22-f3f0c8373284\") " pod="openstack/heat-cfnapi-8569478495-vj5pz" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.127943 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87f9cbe6-025e-4880-9c22-f3f0c8373284-config-data\") pod \"heat-cfnapi-8569478495-vj5pz\" (UID: \"87f9cbe6-025e-4880-9c22-f3f0c8373284\") " pod="openstack/heat-cfnapi-8569478495-vj5pz" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.127976 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qkzc\" (UniqueName: \"kubernetes.io/projected/94898d17-ee5b-4035-aff2-db846fcfa5f7-kube-api-access-5qkzc\") pod \"dnsmasq-dns-cfb689747-vscpn\" (UID: \"94898d17-ee5b-4035-aff2-db846fcfa5f7\") " pod="openstack/dnsmasq-dns-cfb689747-vscpn" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.128002 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg9p2\" (UniqueName: \"kubernetes.io/projected/2475c700-0817-4d27-9e05-0b04cf845474-kube-api-access-pg9p2\") pod \"heat-api-5569487b4-2rc76\" (UID: \"2475c700-0817-4d27-9e05-0b04cf845474\") " pod="openstack/heat-api-5569487b4-2rc76" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.128035 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2475c700-0817-4d27-9e05-0b04cf845474-combined-ca-bundle\") pod \"heat-api-5569487b4-2rc76\" (UID: \"2475c700-0817-4d27-9e05-0b04cf845474\") " pod="openstack/heat-api-5569487b4-2rc76" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.128080 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87f9cbe6-025e-4880-9c22-f3f0c8373284-config-data-custom\") pod \"heat-cfnapi-8569478495-vj5pz\" (UID: \"87f9cbe6-025e-4880-9c22-f3f0c8373284\") " pod="openstack/heat-cfnapi-8569478495-vj5pz" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.128104 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94898d17-ee5b-4035-aff2-db846fcfa5f7-config\") pod \"dnsmasq-dns-cfb689747-vscpn\" (UID: \"94898d17-ee5b-4035-aff2-db846fcfa5f7\") " pod="openstack/dnsmasq-dns-cfb689747-vscpn" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.128138 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94898d17-ee5b-4035-aff2-db846fcfa5f7-ovsdbserver-nb\") pod \"dnsmasq-dns-cfb689747-vscpn\" (UID: 
\"94898d17-ee5b-4035-aff2-db846fcfa5f7\") " pod="openstack/dnsmasq-dns-cfb689747-vscpn" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.128166 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2475c700-0817-4d27-9e05-0b04cf845474-config-data\") pod \"heat-api-5569487b4-2rc76\" (UID: \"2475c700-0817-4d27-9e05-0b04cf845474\") " pod="openstack/heat-api-5569487b4-2rc76" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.128180 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2475c700-0817-4d27-9e05-0b04cf845474-config-data-custom\") pod \"heat-api-5569487b4-2rc76\" (UID: \"2475c700-0817-4d27-9e05-0b04cf845474\") " pod="openstack/heat-api-5569487b4-2rc76" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.128212 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94898d17-ee5b-4035-aff2-db846fcfa5f7-ovsdbserver-sb\") pod \"dnsmasq-dns-cfb689747-vscpn\" (UID: \"94898d17-ee5b-4035-aff2-db846fcfa5f7\") " pod="openstack/dnsmasq-dns-cfb689747-vscpn" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.128298 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94898d17-ee5b-4035-aff2-db846fcfa5f7-dns-svc\") pod \"dnsmasq-dns-cfb689747-vscpn\" (UID: \"94898d17-ee5b-4035-aff2-db846fcfa5f7\") " pod="openstack/dnsmasq-dns-cfb689747-vscpn" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.128354 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87f9cbe6-025e-4880-9c22-f3f0c8373284-combined-ca-bundle\") pod \"heat-cfnapi-8569478495-vj5pz\" (UID: \"87f9cbe6-025e-4880-9c22-f3f0c8373284\") " pod="openstack/heat-cfnapi-8569478495-vj5pz" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.128398 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94898d17-ee5b-4035-aff2-db846fcfa5f7-dns-swift-storage-0\") pod \"dnsmasq-dns-cfb689747-vscpn\" (UID: \"94898d17-ee5b-4035-aff2-db846fcfa5f7\") " pod="openstack/dnsmasq-dns-cfb689747-vscpn" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.139543 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87f9cbe6-025e-4880-9c22-f3f0c8373284-combined-ca-bundle\") pod \"heat-cfnapi-8569478495-vj5pz\" (UID: \"87f9cbe6-025e-4880-9c22-f3f0c8373284\") " pod="openstack/heat-cfnapi-8569478495-vj5pz" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.145923 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87f9cbe6-025e-4880-9c22-f3f0c8373284-config-data-custom\") pod \"heat-cfnapi-8569478495-vj5pz\" (UID: \"87f9cbe6-025e-4880-9c22-f3f0c8373284\") " pod="openstack/heat-cfnapi-8569478495-vj5pz" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.146739 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87f9cbe6-025e-4880-9c22-f3f0c8373284-config-data\") pod \"heat-cfnapi-8569478495-vj5pz\" (UID: 
\"87f9cbe6-025e-4880-9c22-f3f0c8373284\") " pod="openstack/heat-cfnapi-8569478495-vj5pz" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.174243 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ct66\" (UniqueName: \"kubernetes.io/projected/87f9cbe6-025e-4880-9c22-f3f0c8373284-kube-api-access-7ct66\") pod \"heat-cfnapi-8569478495-vj5pz\" (UID: \"87f9cbe6-025e-4880-9c22-f3f0c8373284\") " pod="openstack/heat-cfnapi-8569478495-vj5pz" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.184567 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6bc44c9bc7-bkrp7" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.230433 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94898d17-ee5b-4035-aff2-db846fcfa5f7-config\") pod \"dnsmasq-dns-cfb689747-vscpn\" (UID: \"94898d17-ee5b-4035-aff2-db846fcfa5f7\") " pod="openstack/dnsmasq-dns-cfb689747-vscpn" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.230524 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94898d17-ee5b-4035-aff2-db846fcfa5f7-ovsdbserver-nb\") pod \"dnsmasq-dns-cfb689747-vscpn\" (UID: \"94898d17-ee5b-4035-aff2-db846fcfa5f7\") " pod="openstack/dnsmasq-dns-cfb689747-vscpn" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.230564 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2475c700-0817-4d27-9e05-0b04cf845474-config-data-custom\") pod \"heat-api-5569487b4-2rc76\" (UID: \"2475c700-0817-4d27-9e05-0b04cf845474\") " pod="openstack/heat-api-5569487b4-2rc76" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.230578 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2475c700-0817-4d27-9e05-0b04cf845474-config-data\") pod \"heat-api-5569487b4-2rc76\" (UID: \"2475c700-0817-4d27-9e05-0b04cf845474\") " pod="openstack/heat-api-5569487b4-2rc76" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.230620 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94898d17-ee5b-4035-aff2-db846fcfa5f7-ovsdbserver-sb\") pod \"dnsmasq-dns-cfb689747-vscpn\" (UID: \"94898d17-ee5b-4035-aff2-db846fcfa5f7\") " pod="openstack/dnsmasq-dns-cfb689747-vscpn" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.230737 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94898d17-ee5b-4035-aff2-db846fcfa5f7-dns-svc\") pod \"dnsmasq-dns-cfb689747-vscpn\" (UID: \"94898d17-ee5b-4035-aff2-db846fcfa5f7\") " pod="openstack/dnsmasq-dns-cfb689747-vscpn" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.230821 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94898d17-ee5b-4035-aff2-db846fcfa5f7-dns-swift-storage-0\") pod \"dnsmasq-dns-cfb689747-vscpn\" (UID: \"94898d17-ee5b-4035-aff2-db846fcfa5f7\") " pod="openstack/dnsmasq-dns-cfb689747-vscpn" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.230888 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qkzc\" (UniqueName: 
\"kubernetes.io/projected/94898d17-ee5b-4035-aff2-db846fcfa5f7-kube-api-access-5qkzc\") pod \"dnsmasq-dns-cfb689747-vscpn\" (UID: \"94898d17-ee5b-4035-aff2-db846fcfa5f7\") " pod="openstack/dnsmasq-dns-cfb689747-vscpn" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.230913 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg9p2\" (UniqueName: \"kubernetes.io/projected/2475c700-0817-4d27-9e05-0b04cf845474-kube-api-access-pg9p2\") pod \"heat-api-5569487b4-2rc76\" (UID: \"2475c700-0817-4d27-9e05-0b04cf845474\") " pod="openstack/heat-api-5569487b4-2rc76" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.230961 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2475c700-0817-4d27-9e05-0b04cf845474-combined-ca-bundle\") pod \"heat-api-5569487b4-2rc76\" (UID: \"2475c700-0817-4d27-9e05-0b04cf845474\") " pod="openstack/heat-api-5569487b4-2rc76" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.235545 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94898d17-ee5b-4035-aff2-db846fcfa5f7-ovsdbserver-sb\") pod \"dnsmasq-dns-cfb689747-vscpn\" (UID: \"94898d17-ee5b-4035-aff2-db846fcfa5f7\") " pod="openstack/dnsmasq-dns-cfb689747-vscpn" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.236297 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2475c700-0817-4d27-9e05-0b04cf845474-combined-ca-bundle\") pod \"heat-api-5569487b4-2rc76\" (UID: \"2475c700-0817-4d27-9e05-0b04cf845474\") " pod="openstack/heat-api-5569487b4-2rc76" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.237031 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94898d17-ee5b-4035-aff2-db846fcfa5f7-config\") pod \"dnsmasq-dns-cfb689747-vscpn\" (UID: \"94898d17-ee5b-4035-aff2-db846fcfa5f7\") " pod="openstack/dnsmasq-dns-cfb689747-vscpn" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.239957 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2475c700-0817-4d27-9e05-0b04cf845474-config-data\") pod \"heat-api-5569487b4-2rc76\" (UID: \"2475c700-0817-4d27-9e05-0b04cf845474\") " pod="openstack/heat-api-5569487b4-2rc76" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.240016 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94898d17-ee5b-4035-aff2-db846fcfa5f7-dns-swift-storage-0\") pod \"dnsmasq-dns-cfb689747-vscpn\" (UID: \"94898d17-ee5b-4035-aff2-db846fcfa5f7\") " pod="openstack/dnsmasq-dns-cfb689747-vscpn" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.241030 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2475c700-0817-4d27-9e05-0b04cf845474-config-data-custom\") pod \"heat-api-5569487b4-2rc76\" (UID: \"2475c700-0817-4d27-9e05-0b04cf845474\") " pod="openstack/heat-api-5569487b4-2rc76" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.251763 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94898d17-ee5b-4035-aff2-db846fcfa5f7-ovsdbserver-nb\") pod \"dnsmasq-dns-cfb689747-vscpn\" (UID: 
\"94898d17-ee5b-4035-aff2-db846fcfa5f7\") " pod="openstack/dnsmasq-dns-cfb689747-vscpn" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.251986 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94898d17-ee5b-4035-aff2-db846fcfa5f7-dns-svc\") pod \"dnsmasq-dns-cfb689747-vscpn\" (UID: \"94898d17-ee5b-4035-aff2-db846fcfa5f7\") " pod="openstack/dnsmasq-dns-cfb689747-vscpn" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.253515 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qkzc\" (UniqueName: \"kubernetes.io/projected/94898d17-ee5b-4035-aff2-db846fcfa5f7-kube-api-access-5qkzc\") pod \"dnsmasq-dns-cfb689747-vscpn\" (UID: \"94898d17-ee5b-4035-aff2-db846fcfa5f7\") " pod="openstack/dnsmasq-dns-cfb689747-vscpn" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.258558 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg9p2\" (UniqueName: \"kubernetes.io/projected/2475c700-0817-4d27-9e05-0b04cf845474-kube-api-access-pg9p2\") pod \"heat-api-5569487b4-2rc76\" (UID: \"2475c700-0817-4d27-9e05-0b04cf845474\") " pod="openstack/heat-api-5569487b4-2rc76" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.330420 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-8569478495-vj5pz" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.512201 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cfb689747-vscpn" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.518566 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5569487b4-2rc76" Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.826283 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.826662 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1d296ac2-d00b-4a99-94e2-78004337f7e2" containerName="glance-log" containerID="cri-o://09407aacd212fb8fc7ed41a933132635b8d8d5bdb7f56c14c2072f241f1dd105" gracePeriod=30 Nov 23 07:00:28 crc kubenswrapper[4681]: I1123 07:00:28.826847 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1d296ac2-d00b-4a99-94e2-78004337f7e2" containerName="glance-httpd" containerID="cri-o://282e8ed7d9fd4833336c33fac17adabd73756095e67863205afa215b5ecfebf4" gracePeriod=30 Nov 23 07:00:29 crc kubenswrapper[4681]: I1123 07:00:29.645653 4681 generic.go:334] "Generic (PLEG): container finished" podID="1d296ac2-d00b-4a99-94e2-78004337f7e2" containerID="09407aacd212fb8fc7ed41a933132635b8d8d5bdb7f56c14c2072f241f1dd105" exitCode=143 Nov 23 07:00:29 crc kubenswrapper[4681]: I1123 07:00:29.645817 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1d296ac2-d00b-4a99-94e2-78004337f7e2","Type":"ContainerDied","Data":"09407aacd212fb8fc7ed41a933132635b8d8d5bdb7f56c14c2072f241f1dd105"} Nov 23 07:00:29 crc kubenswrapper[4681]: I1123 07:00:29.655001 4681 generic.go:334] "Generic (PLEG): container finished" podID="bdfa433c-2b77-4373-877f-5c92a2b39fb8" containerID="f940cdcb178170ebf29c7591f70bc1b658fd92fed2c294459eb2f16f26d69ceb" exitCode=137 Nov 23 07:00:29 crc kubenswrapper[4681]: I1123 
07:00:29.655043 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fcdb4576d-g8stp" event={"ID":"bdfa433c-2b77-4373-877f-5c92a2b39fb8","Type":"ContainerDied","Data":"f940cdcb178170ebf29c7591f70bc1b658fd92fed2c294459eb2f16f26d69ceb"} Nov 23 07:00:29 crc kubenswrapper[4681]: I1123 07:00:29.702116 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:00:29 crc kubenswrapper[4681]: I1123 07:00:29.799497 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-run-httpd\") pod \"3267d91e-9fae-46d3-ba7e-4d06fbe83e00\" (UID: \"3267d91e-9fae-46d3-ba7e-4d06fbe83e00\") " Nov 23 07:00:29 crc kubenswrapper[4681]: I1123 07:00:29.799731 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-config-data\") pod \"3267d91e-9fae-46d3-ba7e-4d06fbe83e00\" (UID: \"3267d91e-9fae-46d3-ba7e-4d06fbe83e00\") " Nov 23 07:00:29 crc kubenswrapper[4681]: I1123 07:00:29.799800 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-scripts\") pod \"3267d91e-9fae-46d3-ba7e-4d06fbe83e00\" (UID: \"3267d91e-9fae-46d3-ba7e-4d06fbe83e00\") " Nov 23 07:00:29 crc kubenswrapper[4681]: I1123 07:00:29.799901 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-combined-ca-bundle\") pod \"3267d91e-9fae-46d3-ba7e-4d06fbe83e00\" (UID: \"3267d91e-9fae-46d3-ba7e-4d06fbe83e00\") " Nov 23 07:00:29 crc kubenswrapper[4681]: I1123 07:00:29.800048 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-sg-core-conf-yaml\") pod \"3267d91e-9fae-46d3-ba7e-4d06fbe83e00\" (UID: \"3267d91e-9fae-46d3-ba7e-4d06fbe83e00\") " Nov 23 07:00:29 crc kubenswrapper[4681]: I1123 07:00:29.800189 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-log-httpd\") pod \"3267d91e-9fae-46d3-ba7e-4d06fbe83e00\" (UID: \"3267d91e-9fae-46d3-ba7e-4d06fbe83e00\") " Nov 23 07:00:29 crc kubenswrapper[4681]: I1123 07:00:29.800237 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxstl\" (UniqueName: \"kubernetes.io/projected/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-kube-api-access-xxstl\") pod \"3267d91e-9fae-46d3-ba7e-4d06fbe83e00\" (UID: \"3267d91e-9fae-46d3-ba7e-4d06fbe83e00\") " Nov 23 07:00:29 crc kubenswrapper[4681]: I1123 07:00:29.801951 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3267d91e-9fae-46d3-ba7e-4d06fbe83e00" (UID: "3267d91e-9fae-46d3-ba7e-4d06fbe83e00"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:00:29 crc kubenswrapper[4681]: I1123 07:00:29.803356 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3267d91e-9fae-46d3-ba7e-4d06fbe83e00" (UID: "3267d91e-9fae-46d3-ba7e-4d06fbe83e00"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:00:29 crc kubenswrapper[4681]: I1123 07:00:29.823516 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-scripts" (OuterVolumeSpecName: "scripts") pod "3267d91e-9fae-46d3-ba7e-4d06fbe83e00" (UID: "3267d91e-9fae-46d3-ba7e-4d06fbe83e00"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:29 crc kubenswrapper[4681]: I1123 07:00:29.831703 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-kube-api-access-xxstl" (OuterVolumeSpecName: "kube-api-access-xxstl") pod "3267d91e-9fae-46d3-ba7e-4d06fbe83e00" (UID: "3267d91e-9fae-46d3-ba7e-4d06fbe83e00"). InnerVolumeSpecName "kube-api-access-xxstl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:00:29 crc kubenswrapper[4681]: I1123 07:00:29.904325 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:29 crc kubenswrapper[4681]: I1123 07:00:29.904360 4681 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:29 crc kubenswrapper[4681]: I1123 07:00:29.904371 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxstl\" (UniqueName: \"kubernetes.io/projected/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-kube-api-access-xxstl\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:29 crc kubenswrapper[4681]: I1123 07:00:29.904386 4681 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:29 crc kubenswrapper[4681]: I1123 07:00:29.929928 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-75b4b57dcf-bqmc5" Nov 23 07:00:29 crc kubenswrapper[4681]: I1123 07:00:29.936241 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-75b4b57dcf-bqmc5" Nov 23 07:00:29 crc kubenswrapper[4681]: I1123 07:00:29.945713 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3267d91e-9fae-46d3-ba7e-4d06fbe83e00" (UID: "3267d91e-9fae-46d3-ba7e-4d06fbe83e00"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:29 crc kubenswrapper[4681]: I1123 07:00:29.987836 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6bc44c9bc7-bkrp7"] Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.006940 4681 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.071666 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cfb689747-vscpn"] Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.140379 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-config-data" (OuterVolumeSpecName: "config-data") pod "3267d91e-9fae-46d3-ba7e-4d06fbe83e00" (UID: "3267d91e-9fae-46d3-ba7e-4d06fbe83e00"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.222548 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-8569478495-vj5pz"] Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.226527 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3267d91e-9fae-46d3-ba7e-4d06fbe83e00" (UID: "3267d91e-9fae-46d3-ba7e-4d06fbe83e00"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.229103 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-combined-ca-bundle\") pod \"3267d91e-9fae-46d3-ba7e-4d06fbe83e00\" (UID: \"3267d91e-9fae-46d3-ba7e-4d06fbe83e00\") " Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.230374 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:30 crc kubenswrapper[4681]: W1123 07:00:30.230504 4681 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/3267d91e-9fae-46d3-ba7e-4d06fbe83e00/volumes/kubernetes.io~secret/combined-ca-bundle Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.230521 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3267d91e-9fae-46d3-ba7e-4d06fbe83e00" (UID: "3267d91e-9fae-46d3-ba7e-4d06fbe83e00"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.238925 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5569487b4-2rc76"] Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.333253 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3267d91e-9fae-46d3-ba7e-4d06fbe83e00-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.363529 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="48ecd863-12ce-4eb3-ba76-eea730db3b2d" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.160:9292/healthcheck\": read tcp 10.217.0.2:35250->10.217.0.160:9292: read: connection reset by peer" Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.363561 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="48ecd863-12ce-4eb3-ba76-eea730db3b2d" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.160:9292/healthcheck\": read tcp 10.217.0.2:35258->10.217.0.160:9292: read: connection reset by peer" Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.668792 4681 generic.go:334] "Generic (PLEG): container finished" podID="94898d17-ee5b-4035-aff2-db846fcfa5f7" containerID="827710c0adbf80e6ed797d938ea567b18186cf021fa5ad71b99e4bbbe741cd60" exitCode=0 Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.668890 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cfb689747-vscpn" event={"ID":"94898d17-ee5b-4035-aff2-db846fcfa5f7","Type":"ContainerDied","Data":"827710c0adbf80e6ed797d938ea567b18186cf021fa5ad71b99e4bbbe741cd60"} Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.668929 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cfb689747-vscpn" event={"ID":"94898d17-ee5b-4035-aff2-db846fcfa5f7","Type":"ContainerStarted","Data":"4726dbe1e5d2954e387c8083fec70624ee0daa9523be1be33e56f32861ab0a88"} Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.672087 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3267d91e-9fae-46d3-ba7e-4d06fbe83e00","Type":"ContainerDied","Data":"c4ffe6862f7bf4a72d0c51e38c9e9167cf62ae7abeaa3a2a5d528b128bbebb1b"} Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.672111 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.672170 4681 scope.go:117] "RemoveContainer" containerID="f01edf17be5051ba24728ae15b0d1a97d56b1e8fe48e616e405b94c28281321f" Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.697068 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fcdb4576d-g8stp" event={"ID":"bdfa433c-2b77-4373-877f-5c92a2b39fb8","Type":"ContainerStarted","Data":"87f5e0aa6518d147805ab3370fec7f8c45d2ef0cd33b767dea446a4717ef7a98"} Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.709710 4681 scope.go:117] "RemoveContainer" containerID="49a1bc267ad6274abf7f72c92b89ebc5f76a11e2aee85273853d219601557b96" Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.711028 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5569487b4-2rc76" event={"ID":"2475c700-0817-4d27-9e05-0b04cf845474","Type":"ContainerStarted","Data":"c70d7518df1359068670dbfcb0cde4bbc0f3a9ac63f6f11fb292fb89df948072"} Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.713305 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-8569478495-vj5pz" event={"ID":"87f9cbe6-025e-4880-9c22-f3f0c8373284","Type":"ContainerStarted","Data":"244986082db3e8e83dd7652c715cb1177edc81b07a0ef0d76e2101a9128e78f6"} Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.730824 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.736173 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"03d2f9d1-c437-447b-a2f4-c2994aad12ee","Type":"ContainerStarted","Data":"db741a07c20c4557716e129ffbb478046283ecafc9bf6fa5ce6108cf9ec8b24c"} Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.742210 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.751049 4681 generic.go:334] "Generic (PLEG): container finished" podID="48ecd863-12ce-4eb3-ba76-eea730db3b2d" containerID="884e2a56b0230e733fad802ba25fca0312606baab59fc36a9a13c7175936d99a" exitCode=0 Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.751104 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"48ecd863-12ce-4eb3-ba76-eea730db3b2d","Type":"ContainerDied","Data":"884e2a56b0230e733fad802ba25fca0312606baab59fc36a9a13c7175936d99a"} Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.768062 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6bc44c9bc7-bkrp7" event={"ID":"0b6259f0-ca09-4fc2-bada-7d505bf1b5a1","Type":"ContainerStarted","Data":"01fa57dd402c8eee8ac14e1d90dc47f67437bbd118411fb55a41875e2a702055"} Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.768105 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6bc44c9bc7-bkrp7" Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.768117 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6bc44c9bc7-bkrp7" event={"ID":"0b6259f0-ca09-4fc2-bada-7d505bf1b5a1","Type":"ContainerStarted","Data":"b409e141d2f359b90eac470540fde2f906ab371e6d0e2cf3a6413d305fbd022e"} Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.770244 4681 scope.go:117] "RemoveContainer" containerID="a9df8b55b03c2396f8f3416768057a7ea2f2364ed1c8a6aa803736aa3fa88f73" Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 
07:00:30.799559 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:00:30 crc kubenswrapper[4681]: E1123 07:00:30.800023 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3267d91e-9fae-46d3-ba7e-4d06fbe83e00" containerName="sg-core" Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.800039 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="3267d91e-9fae-46d3-ba7e-4d06fbe83e00" containerName="sg-core" Nov 23 07:00:30 crc kubenswrapper[4681]: E1123 07:00:30.800056 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3267d91e-9fae-46d3-ba7e-4d06fbe83e00" containerName="proxy-httpd" Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.800061 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="3267d91e-9fae-46d3-ba7e-4d06fbe83e00" containerName="proxy-httpd" Nov 23 07:00:30 crc kubenswrapper[4681]: E1123 07:00:30.800093 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3267d91e-9fae-46d3-ba7e-4d06fbe83e00" containerName="ceilometer-notification-agent" Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.800100 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="3267d91e-9fae-46d3-ba7e-4d06fbe83e00" containerName="ceilometer-notification-agent" Nov 23 07:00:30 crc kubenswrapper[4681]: E1123 07:00:30.800129 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3267d91e-9fae-46d3-ba7e-4d06fbe83e00" containerName="ceilometer-central-agent" Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.800135 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="3267d91e-9fae-46d3-ba7e-4d06fbe83e00" containerName="ceilometer-central-agent" Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.800302 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="3267d91e-9fae-46d3-ba7e-4d06fbe83e00" containerName="sg-core" Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.800310 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="3267d91e-9fae-46d3-ba7e-4d06fbe83e00" containerName="proxy-httpd" Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.800320 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="3267d91e-9fae-46d3-ba7e-4d06fbe83e00" containerName="ceilometer-central-agent" Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.800332 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="3267d91e-9fae-46d3-ba7e-4d06fbe83e00" containerName="ceilometer-notification-agent" Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.804236 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.418230549 podStartE2EDuration="17.804214049s" podCreationTimestamp="2025-11-23 07:00:13 +0000 UTC" firstStartedPulling="2025-11-23 07:00:13.954891635 +0000 UTC m=+951.024400872" lastFinishedPulling="2025-11-23 07:00:29.340875134 +0000 UTC m=+966.410384372" observedRunningTime="2025-11-23 07:00:30.776847881 +0000 UTC m=+967.846357117" watchObservedRunningTime="2025-11-23 07:00:30.804214049 +0000 UTC m=+967.873723285" Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.805033 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.808949 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.809164 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.817309 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.859021 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6bc44c9bc7-bkrp7" podStartSLOduration=3.859003292 podStartE2EDuration="3.859003292s" podCreationTimestamp="2025-11-23 07:00:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:00:30.826630485 +0000 UTC m=+967.896139722" watchObservedRunningTime="2025-11-23 07:00:30.859003292 +0000 UTC m=+967.928512528" Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.892941 4681 scope.go:117] "RemoveContainer" containerID="3bf2a8092b8fd5f071dfa7a4fb0bf933e8c96c280e5d059baf03813d788a2d55" Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.947049 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2034dfe3-3cd9-4870-9005-bbcec7957ef8-config-data\") pod \"ceilometer-0\" (UID: \"2034dfe3-3cd9-4870-9005-bbcec7957ef8\") " pod="openstack/ceilometer-0" Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.947104 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2034dfe3-3cd9-4870-9005-bbcec7957ef8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2034dfe3-3cd9-4870-9005-bbcec7957ef8\") " pod="openstack/ceilometer-0" Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.947155 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2034dfe3-3cd9-4870-9005-bbcec7957ef8-scripts\") pod \"ceilometer-0\" (UID: \"2034dfe3-3cd9-4870-9005-bbcec7957ef8\") " pod="openstack/ceilometer-0" Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.947179 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2034dfe3-3cd9-4870-9005-bbcec7957ef8-run-httpd\") pod \"ceilometer-0\" (UID: \"2034dfe3-3cd9-4870-9005-bbcec7957ef8\") " pod="openstack/ceilometer-0" Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.947304 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqg6v\" (UniqueName: \"kubernetes.io/projected/2034dfe3-3cd9-4870-9005-bbcec7957ef8-kube-api-access-mqg6v\") pod \"ceilometer-0\" (UID: \"2034dfe3-3cd9-4870-9005-bbcec7957ef8\") " pod="openstack/ceilometer-0" Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 07:00:30.947339 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2034dfe3-3cd9-4870-9005-bbcec7957ef8-log-httpd\") pod \"ceilometer-0\" (UID: \"2034dfe3-3cd9-4870-9005-bbcec7957ef8\") " pod="openstack/ceilometer-0" Nov 23 07:00:30 crc kubenswrapper[4681]: I1123 
07:00:30.947383 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2034dfe3-3cd9-4870-9005-bbcec7957ef8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2034dfe3-3cd9-4870-9005-bbcec7957ef8\") " pod="openstack/ceilometer-0" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.013861 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.056280 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2034dfe3-3cd9-4870-9005-bbcec7957ef8-config-data\") pod \"ceilometer-0\" (UID: \"2034dfe3-3cd9-4870-9005-bbcec7957ef8\") " pod="openstack/ceilometer-0" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.056332 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2034dfe3-3cd9-4870-9005-bbcec7957ef8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2034dfe3-3cd9-4870-9005-bbcec7957ef8\") " pod="openstack/ceilometer-0" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.056374 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2034dfe3-3cd9-4870-9005-bbcec7957ef8-scripts\") pod \"ceilometer-0\" (UID: \"2034dfe3-3cd9-4870-9005-bbcec7957ef8\") " pod="openstack/ceilometer-0" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.056396 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2034dfe3-3cd9-4870-9005-bbcec7957ef8-run-httpd\") pod \"ceilometer-0\" (UID: \"2034dfe3-3cd9-4870-9005-bbcec7957ef8\") " pod="openstack/ceilometer-0" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.056498 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqg6v\" (UniqueName: \"kubernetes.io/projected/2034dfe3-3cd9-4870-9005-bbcec7957ef8-kube-api-access-mqg6v\") pod \"ceilometer-0\" (UID: \"2034dfe3-3cd9-4870-9005-bbcec7957ef8\") " pod="openstack/ceilometer-0" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.056525 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2034dfe3-3cd9-4870-9005-bbcec7957ef8-log-httpd\") pod \"ceilometer-0\" (UID: \"2034dfe3-3cd9-4870-9005-bbcec7957ef8\") " pod="openstack/ceilometer-0" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.056564 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2034dfe3-3cd9-4870-9005-bbcec7957ef8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2034dfe3-3cd9-4870-9005-bbcec7957ef8\") " pod="openstack/ceilometer-0" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.058308 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2034dfe3-3cd9-4870-9005-bbcec7957ef8-run-httpd\") pod \"ceilometer-0\" (UID: \"2034dfe3-3cd9-4870-9005-bbcec7957ef8\") " pod="openstack/ceilometer-0" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.058968 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/2034dfe3-3cd9-4870-9005-bbcec7957ef8-log-httpd\") pod \"ceilometer-0\" (UID: \"2034dfe3-3cd9-4870-9005-bbcec7957ef8\") " pod="openstack/ceilometer-0" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.071944 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2034dfe3-3cd9-4870-9005-bbcec7957ef8-scripts\") pod \"ceilometer-0\" (UID: \"2034dfe3-3cd9-4870-9005-bbcec7957ef8\") " pod="openstack/ceilometer-0" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.072074 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2034dfe3-3cd9-4870-9005-bbcec7957ef8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2034dfe3-3cd9-4870-9005-bbcec7957ef8\") " pod="openstack/ceilometer-0" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.072443 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2034dfe3-3cd9-4870-9005-bbcec7957ef8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2034dfe3-3cd9-4870-9005-bbcec7957ef8\") " pod="openstack/ceilometer-0" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.089220 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqg6v\" (UniqueName: \"kubernetes.io/projected/2034dfe3-3cd9-4870-9005-bbcec7957ef8-kube-api-access-mqg6v\") pod \"ceilometer-0\" (UID: \"2034dfe3-3cd9-4870-9005-bbcec7957ef8\") " pod="openstack/ceilometer-0" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.103568 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2034dfe3-3cd9-4870-9005-bbcec7957ef8-config-data\") pod \"ceilometer-0\" (UID: \"2034dfe3-3cd9-4870-9005-bbcec7957ef8\") " pod="openstack/ceilometer-0" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.161247 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48ecd863-12ce-4eb3-ba76-eea730db3b2d-config-data\") pod \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\" (UID: \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\") " Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.161420 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgzj5\" (UniqueName: \"kubernetes.io/projected/48ecd863-12ce-4eb3-ba76-eea730db3b2d-kube-api-access-tgzj5\") pod \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\" (UID: \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\") " Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.161508 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\" (UID: \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\") " Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.161561 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48ecd863-12ce-4eb3-ba76-eea730db3b2d-combined-ca-bundle\") pod \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\" (UID: \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\") " Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.165303 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/48ecd863-12ce-4eb3-ba76-eea730db3b2d-public-tls-certs\") pod \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\" (UID: \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\") " Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.165404 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48ecd863-12ce-4eb3-ba76-eea730db3b2d-logs\") pod \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\" (UID: \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\") " Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.165448 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48ecd863-12ce-4eb3-ba76-eea730db3b2d-scripts\") pod \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\" (UID: \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\") " Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.165506 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/48ecd863-12ce-4eb3-ba76-eea730db3b2d-httpd-run\") pod \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\" (UID: \"48ecd863-12ce-4eb3-ba76-eea730db3b2d\") " Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.167167 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48ecd863-12ce-4eb3-ba76-eea730db3b2d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "48ecd863-12ce-4eb3-ba76-eea730db3b2d" (UID: "48ecd863-12ce-4eb3-ba76-eea730db3b2d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.167538 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48ecd863-12ce-4eb3-ba76-eea730db3b2d-logs" (OuterVolumeSpecName: "logs") pod "48ecd863-12ce-4eb3-ba76-eea730db3b2d" (UID: "48ecd863-12ce-4eb3-ba76-eea730db3b2d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.170758 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48ecd863-12ce-4eb3-ba76-eea730db3b2d-kube-api-access-tgzj5" (OuterVolumeSpecName: "kube-api-access-tgzj5") pod "48ecd863-12ce-4eb3-ba76-eea730db3b2d" (UID: "48ecd863-12ce-4eb3-ba76-eea730db3b2d"). InnerVolumeSpecName "kube-api-access-tgzj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.177597 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48ecd863-12ce-4eb3-ba76-eea730db3b2d-scripts" (OuterVolumeSpecName: "scripts") pod "48ecd863-12ce-4eb3-ba76-eea730db3b2d" (UID: "48ecd863-12ce-4eb3-ba76-eea730db3b2d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.178377 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "48ecd863-12ce-4eb3-ba76-eea730db3b2d" (UID: "48ecd863-12ce-4eb3-ba76-eea730db3b2d"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.186363 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.273220 4681 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.273250 4681 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48ecd863-12ce-4eb3-ba76-eea730db3b2d-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.273259 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48ecd863-12ce-4eb3-ba76-eea730db3b2d-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.273268 4681 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/48ecd863-12ce-4eb3-ba76-eea730db3b2d-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.273279 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgzj5\" (UniqueName: \"kubernetes.io/projected/48ecd863-12ce-4eb3-ba76-eea730db3b2d-kube-api-access-tgzj5\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.304607 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48ecd863-12ce-4eb3-ba76-eea730db3b2d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "48ecd863-12ce-4eb3-ba76-eea730db3b2d" (UID: "48ecd863-12ce-4eb3-ba76-eea730db3b2d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.307659 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3267d91e-9fae-46d3-ba7e-4d06fbe83e00" path="/var/lib/kubelet/pods/3267d91e-9fae-46d3-ba7e-4d06fbe83e00/volumes" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.347793 4681 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.377586 4681 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.377778 4681 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48ecd863-12ce-4eb3-ba76-eea730db3b2d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.391695 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48ecd863-12ce-4eb3-ba76-eea730db3b2d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48ecd863-12ce-4eb3-ba76-eea730db3b2d" (UID: "48ecd863-12ce-4eb3-ba76-eea730db3b2d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.405177 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48ecd863-12ce-4eb3-ba76-eea730db3b2d-config-data" (OuterVolumeSpecName: "config-data") pod "48ecd863-12ce-4eb3-ba76-eea730db3b2d" (UID: "48ecd863-12ce-4eb3-ba76-eea730db3b2d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.491851 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48ecd863-12ce-4eb3-ba76-eea730db3b2d-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.492079 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48ecd863-12ce-4eb3-ba76-eea730db3b2d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.793200 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"48ecd863-12ce-4eb3-ba76-eea730db3b2d","Type":"ContainerDied","Data":"f2f9f7b07d980f839e3bfc610594a28559a73467cc05f9662ef089aa066821df"} Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.793273 4681 scope.go:117] "RemoveContainer" containerID="884e2a56b0230e733fad802ba25fca0312606baab59fc36a9a13c7175936d99a" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.793284 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.802833 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cfb689747-vscpn" event={"ID":"94898d17-ee5b-4035-aff2-db846fcfa5f7","Type":"ContainerStarted","Data":"380b6d44c65b94cb8300f25cdc2b9d551d69e23126a4faa107f6a923df8c4287"} Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.803569 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cfb689747-vscpn" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.847120 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cfb689747-vscpn" podStartSLOduration=4.84709979 podStartE2EDuration="4.84709979s" podCreationTimestamp="2025-11-23 07:00:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:00:31.827776611 +0000 UTC m=+968.897285848" watchObservedRunningTime="2025-11-23 07:00:31.84709979 +0000 UTC m=+968.916609026" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.876506 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.889425 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.906907 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 07:00:31 crc kubenswrapper[4681]: E1123 07:00:31.907309 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48ecd863-12ce-4eb3-ba76-eea730db3b2d" containerName="glance-httpd" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.907326 4681 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="48ecd863-12ce-4eb3-ba76-eea730db3b2d" containerName="glance-httpd" Nov 23 07:00:31 crc kubenswrapper[4681]: E1123 07:00:31.907345 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48ecd863-12ce-4eb3-ba76-eea730db3b2d" containerName="glance-log" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.907352 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="48ecd863-12ce-4eb3-ba76-eea730db3b2d" containerName="glance-log" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.907553 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="48ecd863-12ce-4eb3-ba76-eea730db3b2d" containerName="glance-log" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.907565 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="48ecd863-12ce-4eb3-ba76-eea730db3b2d" containerName="glance-httpd" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.912741 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.915681 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.915857 4681 scope.go:117] "RemoveContainer" containerID="97cf1e2c2dc5490b7dccaeb1542e6282ce33d29ac43281d21569cfed720f97eb" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.928366 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.935899 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 07:00:31 crc kubenswrapper[4681]: I1123 07:00:31.961526 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.003570 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"b820fbbf-6154-401d-b84d-b02c0f9a5050\") " pod="openstack/glance-default-external-api-0" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.003616 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf27r\" (UniqueName: \"kubernetes.io/projected/b820fbbf-6154-401d-b84d-b02c0f9a5050-kube-api-access-gf27r\") pod \"glance-default-external-api-0\" (UID: \"b820fbbf-6154-401d-b84d-b02c0f9a5050\") " pod="openstack/glance-default-external-api-0" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.003770 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b820fbbf-6154-401d-b84d-b02c0f9a5050-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b820fbbf-6154-401d-b84d-b02c0f9a5050\") " pod="openstack/glance-default-external-api-0" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.003849 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b820fbbf-6154-401d-b84d-b02c0f9a5050-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b820fbbf-6154-401d-b84d-b02c0f9a5050\") " pod="openstack/glance-default-external-api-0" Nov 23 07:00:32 crc 
kubenswrapper[4681]: I1123 07:00:32.003882 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b820fbbf-6154-401d-b84d-b02c0f9a5050-config-data\") pod \"glance-default-external-api-0\" (UID: \"b820fbbf-6154-401d-b84d-b02c0f9a5050\") " pod="openstack/glance-default-external-api-0" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.003912 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b820fbbf-6154-401d-b84d-b02c0f9a5050-scripts\") pod \"glance-default-external-api-0\" (UID: \"b820fbbf-6154-401d-b84d-b02c0f9a5050\") " pod="openstack/glance-default-external-api-0" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.003931 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b820fbbf-6154-401d-b84d-b02c0f9a5050-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b820fbbf-6154-401d-b84d-b02c0f9a5050\") " pod="openstack/glance-default-external-api-0" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.003956 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b820fbbf-6154-401d-b84d-b02c0f9a5050-logs\") pod \"glance-default-external-api-0\" (UID: \"b820fbbf-6154-401d-b84d-b02c0f9a5050\") " pod="openstack/glance-default-external-api-0" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.027718 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="1d296ac2-d00b-4a99-94e2-78004337f7e2" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.161:9292/healthcheck\": read tcp 10.217.0.2:48160->10.217.0.161:9292: read: connection reset by peer" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.027920 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="1d296ac2-d00b-4a99-94e2-78004337f7e2" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.161:9292/healthcheck\": read tcp 10.217.0.2:48168->10.217.0.161:9292: read: connection reset by peer" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.106810 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b820fbbf-6154-401d-b84d-b02c0f9a5050-config-data\") pod \"glance-default-external-api-0\" (UID: \"b820fbbf-6154-401d-b84d-b02c0f9a5050\") " pod="openstack/glance-default-external-api-0" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.106881 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b820fbbf-6154-401d-b84d-b02c0f9a5050-scripts\") pod \"glance-default-external-api-0\" (UID: \"b820fbbf-6154-401d-b84d-b02c0f9a5050\") " pod="openstack/glance-default-external-api-0" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.106899 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b820fbbf-6154-401d-b84d-b02c0f9a5050-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b820fbbf-6154-401d-b84d-b02c0f9a5050\") " pod="openstack/glance-default-external-api-0" Nov 23 07:00:32 crc 
kubenswrapper[4681]: I1123 07:00:32.106919 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b820fbbf-6154-401d-b84d-b02c0f9a5050-logs\") pod \"glance-default-external-api-0\" (UID: \"b820fbbf-6154-401d-b84d-b02c0f9a5050\") " pod="openstack/glance-default-external-api-0" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.107011 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"b820fbbf-6154-401d-b84d-b02c0f9a5050\") " pod="openstack/glance-default-external-api-0" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.107049 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf27r\" (UniqueName: \"kubernetes.io/projected/b820fbbf-6154-401d-b84d-b02c0f9a5050-kube-api-access-gf27r\") pod \"glance-default-external-api-0\" (UID: \"b820fbbf-6154-401d-b84d-b02c0f9a5050\") " pod="openstack/glance-default-external-api-0" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.107133 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b820fbbf-6154-401d-b84d-b02c0f9a5050-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b820fbbf-6154-401d-b84d-b02c0f9a5050\") " pod="openstack/glance-default-external-api-0" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.107170 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b820fbbf-6154-401d-b84d-b02c0f9a5050-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b820fbbf-6154-401d-b84d-b02c0f9a5050\") " pod="openstack/glance-default-external-api-0" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.107791 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b820fbbf-6154-401d-b84d-b02c0f9a5050-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b820fbbf-6154-401d-b84d-b02c0f9a5050\") " pod="openstack/glance-default-external-api-0" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.108387 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b820fbbf-6154-401d-b84d-b02c0f9a5050-logs\") pod \"glance-default-external-api-0\" (UID: \"b820fbbf-6154-401d-b84d-b02c0f9a5050\") " pod="openstack/glance-default-external-api-0" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.109132 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"b820fbbf-6154-401d-b84d-b02c0f9a5050\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.114146 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b820fbbf-6154-401d-b84d-b02c0f9a5050-scripts\") pod \"glance-default-external-api-0\" (UID: \"b820fbbf-6154-401d-b84d-b02c0f9a5050\") " pod="openstack/glance-default-external-api-0" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.120287 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b820fbbf-6154-401d-b84d-b02c0f9a5050-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b820fbbf-6154-401d-b84d-b02c0f9a5050\") " pod="openstack/glance-default-external-api-0" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.120329 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b820fbbf-6154-401d-b84d-b02c0f9a5050-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b820fbbf-6154-401d-b84d-b02c0f9a5050\") " pod="openstack/glance-default-external-api-0" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.122611 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b820fbbf-6154-401d-b84d-b02c0f9a5050-config-data\") pod \"glance-default-external-api-0\" (UID: \"b820fbbf-6154-401d-b84d-b02c0f9a5050\") " pod="openstack/glance-default-external-api-0" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.129249 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf27r\" (UniqueName: \"kubernetes.io/projected/b820fbbf-6154-401d-b84d-b02c0f9a5050-kube-api-access-gf27r\") pod \"glance-default-external-api-0\" (UID: \"b820fbbf-6154-401d-b84d-b02c0f9a5050\") " pod="openstack/glance-default-external-api-0" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.158326 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"b820fbbf-6154-401d-b84d-b02c0f9a5050\") " pod="openstack/glance-default-external-api-0" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.254638 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 07:00:32 crc kubenswrapper[4681]: E1123 07:00:32.301947 4681 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdfa433c_2b77_4373_877f_5c92a2b39fb8.slice/crio-conmon-f940cdcb178170ebf29c7591f70bc1b658fd92fed2c294459eb2f16f26d69ceb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48ecd863_12ce_4eb3_ba76_eea730db3b2d.slice/crio-f2f9f7b07d980f839e3bfc610594a28559a73467cc05f9662ef089aa066821df\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3267d91e_9fae_46d3_ba7e_4d06fbe83e00.slice/crio-c4ffe6862f7bf4a72d0c51e38c9e9167cf62ae7abeaa3a2a5d528b128bbebb1b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48ecd863_12ce_4eb3_ba76_eea730db3b2d.slice/crio-884e2a56b0230e733fad802ba25fca0312606baab59fc36a9a13c7175936d99a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48ecd863_12ce_4eb3_ba76_eea730db3b2d.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48ecd863_12ce_4eb3_ba76_eea730db3b2d.slice/crio-conmon-884e2a56b0230e733fad802ba25fca0312606baab59fc36a9a13c7175936d99a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdfa433c_2b77_4373_877f_5c92a2b39fb8.slice/crio-f940cdcb178170ebf29c7591f70bc1b658fd92fed2c294459eb2f16f26d69ceb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d296ac2_d00b_4a99_94e2_78004337f7e2.slice/crio-conmon-09407aacd212fb8fc7ed41a933132635b8d8d5bdb7f56c14c2072f241f1dd105.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d296ac2_d00b_4a99_94e2_78004337f7e2.slice/crio-282e8ed7d9fd4833336c33fac17adabd73756095e67863205afa215b5ecfebf4.scope\": RecentStats: unable to find data in memory cache]" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.592738 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.772614 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d296ac2-d00b-4a99-94e2-78004337f7e2-combined-ca-bundle\") pod \"1d296ac2-d00b-4a99-94e2-78004337f7e2\" (UID: \"1d296ac2-d00b-4a99-94e2-78004337f7e2\") " Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.773108 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d296ac2-d00b-4a99-94e2-78004337f7e2-config-data\") pod \"1d296ac2-d00b-4a99-94e2-78004337f7e2\" (UID: \"1d296ac2-d00b-4a99-94e2-78004337f7e2\") " Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.773166 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d296ac2-d00b-4a99-94e2-78004337f7e2-internal-tls-certs\") pod \"1d296ac2-d00b-4a99-94e2-78004337f7e2\" (UID: \"1d296ac2-d00b-4a99-94e2-78004337f7e2\") " Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.773242 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d296ac2-d00b-4a99-94e2-78004337f7e2-scripts\") pod \"1d296ac2-d00b-4a99-94e2-78004337f7e2\" (UID: \"1d296ac2-d00b-4a99-94e2-78004337f7e2\") " Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.773280 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d296ac2-d00b-4a99-94e2-78004337f7e2-logs\") pod \"1d296ac2-d00b-4a99-94e2-78004337f7e2\" (UID: \"1d296ac2-d00b-4a99-94e2-78004337f7e2\") " Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.773596 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rfr7\" (UniqueName: \"kubernetes.io/projected/1d296ac2-d00b-4a99-94e2-78004337f7e2-kube-api-access-5rfr7\") pod \"1d296ac2-d00b-4a99-94e2-78004337f7e2\" (UID: \"1d296ac2-d00b-4a99-94e2-78004337f7e2\") " Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.773690 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"1d296ac2-d00b-4a99-94e2-78004337f7e2\" (UID: \"1d296ac2-d00b-4a99-94e2-78004337f7e2\") " Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.773767 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1d296ac2-d00b-4a99-94e2-78004337f7e2-httpd-run\") pod \"1d296ac2-d00b-4a99-94e2-78004337f7e2\" (UID: \"1d296ac2-d00b-4a99-94e2-78004337f7e2\") " Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.774846 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d296ac2-d00b-4a99-94e2-78004337f7e2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1d296ac2-d00b-4a99-94e2-78004337f7e2" (UID: "1d296ac2-d00b-4a99-94e2-78004337f7e2"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.778825 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d296ac2-d00b-4a99-94e2-78004337f7e2-logs" (OuterVolumeSpecName: "logs") pod "1d296ac2-d00b-4a99-94e2-78004337f7e2" (UID: "1d296ac2-d00b-4a99-94e2-78004337f7e2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.807096 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d296ac2-d00b-4a99-94e2-78004337f7e2-kube-api-access-5rfr7" (OuterVolumeSpecName: "kube-api-access-5rfr7") pod "1d296ac2-d00b-4a99-94e2-78004337f7e2" (UID: "1d296ac2-d00b-4a99-94e2-78004337f7e2"). InnerVolumeSpecName "kube-api-access-5rfr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.807207 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "1d296ac2-d00b-4a99-94e2-78004337f7e2" (UID: "1d296ac2-d00b-4a99-94e2-78004337f7e2"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.814416 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d296ac2-d00b-4a99-94e2-78004337f7e2-scripts" (OuterVolumeSpecName: "scripts") pod "1d296ac2-d00b-4a99-94e2-78004337f7e2" (UID: "1d296ac2-d00b-4a99-94e2-78004337f7e2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.833575 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2034dfe3-3cd9-4870-9005-bbcec7957ef8","Type":"ContainerStarted","Data":"82f42dec0f83bf4bc1c73944dda32acd381592f2245b120be2bbf0ff595f4c9f"} Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.863966 4681 generic.go:334] "Generic (PLEG): container finished" podID="1d296ac2-d00b-4a99-94e2-78004337f7e2" containerID="282e8ed7d9fd4833336c33fac17adabd73756095e67863205afa215b5ecfebf4" exitCode=0 Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.864221 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1d296ac2-d00b-4a99-94e2-78004337f7e2","Type":"ContainerDied","Data":"282e8ed7d9fd4833336c33fac17adabd73756095e67863205afa215b5ecfebf4"} Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.864260 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1d296ac2-d00b-4a99-94e2-78004337f7e2","Type":"ContainerDied","Data":"9a29efadfd8cddf7ee7697bb02de14d9f0513c47276e035dd35c3fdaa60bb283"} Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.864285 4681 scope.go:117] "RemoveContainer" containerID="282e8ed7d9fd4833336c33fac17adabd73756095e67863205afa215b5ecfebf4" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.864436 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.866391 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d296ac2-d00b-4a99-94e2-78004337f7e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d296ac2-d00b-4a99-94e2-78004337f7e2" (UID: "1d296ac2-d00b-4a99-94e2-78004337f7e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.876418 4681 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1d296ac2-d00b-4a99-94e2-78004337f7e2-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.876446 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d296ac2-d00b-4a99-94e2-78004337f7e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.876481 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d296ac2-d00b-4a99-94e2-78004337f7e2-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.876491 4681 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d296ac2-d00b-4a99-94e2-78004337f7e2-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.876500 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rfr7\" (UniqueName: \"kubernetes.io/projected/1d296ac2-d00b-4a99-94e2-78004337f7e2-kube-api-access-5rfr7\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.876550 4681 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.915951 4681 scope.go:117] "RemoveContainer" containerID="09407aacd212fb8fc7ed41a933132635b8d8d5bdb7f56c14c2072f241f1dd105" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.945608 4681 scope.go:117] "RemoveContainer" containerID="282e8ed7d9fd4833336c33fac17adabd73756095e67863205afa215b5ecfebf4" Nov 23 07:00:32 crc kubenswrapper[4681]: E1123 07:00:32.946064 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"282e8ed7d9fd4833336c33fac17adabd73756095e67863205afa215b5ecfebf4\": container with ID starting with 282e8ed7d9fd4833336c33fac17adabd73756095e67863205afa215b5ecfebf4 not found: ID does not exist" containerID="282e8ed7d9fd4833336c33fac17adabd73756095e67863205afa215b5ecfebf4" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.946112 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"282e8ed7d9fd4833336c33fac17adabd73756095e67863205afa215b5ecfebf4"} err="failed to get container status \"282e8ed7d9fd4833336c33fac17adabd73756095e67863205afa215b5ecfebf4\": rpc error: code = NotFound desc = could not find container \"282e8ed7d9fd4833336c33fac17adabd73756095e67863205afa215b5ecfebf4\": container with ID starting with 282e8ed7d9fd4833336c33fac17adabd73756095e67863205afa215b5ecfebf4 not found: ID does not exist" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 
07:00:32.946141 4681 scope.go:117] "RemoveContainer" containerID="09407aacd212fb8fc7ed41a933132635b8d8d5bdb7f56c14c2072f241f1dd105" Nov 23 07:00:32 crc kubenswrapper[4681]: E1123 07:00:32.949589 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09407aacd212fb8fc7ed41a933132635b8d8d5bdb7f56c14c2072f241f1dd105\": container with ID starting with 09407aacd212fb8fc7ed41a933132635b8d8d5bdb7f56c14c2072f241f1dd105 not found: ID does not exist" containerID="09407aacd212fb8fc7ed41a933132635b8d8d5bdb7f56c14c2072f241f1dd105" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.949618 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09407aacd212fb8fc7ed41a933132635b8d8d5bdb7f56c14c2072f241f1dd105"} err="failed to get container status \"09407aacd212fb8fc7ed41a933132635b8d8d5bdb7f56c14c2072f241f1dd105\": rpc error: code = NotFound desc = could not find container \"09407aacd212fb8fc7ed41a933132635b8d8d5bdb7f56c14c2072f241f1dd105\": container with ID starting with 09407aacd212fb8fc7ed41a933132635b8d8d5bdb7f56c14c2072f241f1dd105 not found: ID does not exist" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.952637 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d296ac2-d00b-4a99-94e2-78004337f7e2-config-data" (OuterVolumeSpecName: "config-data") pod "1d296ac2-d00b-4a99-94e2-78004337f7e2" (UID: "1d296ac2-d00b-4a99-94e2-78004337f7e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.956161 4681 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.966033 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d296ac2-d00b-4a99-94e2-78004337f7e2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1d296ac2-d00b-4a99-94e2-78004337f7e2" (UID: "1d296ac2-d00b-4a99-94e2-78004337f7e2"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.980553 4681 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.981068 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d296ac2-d00b-4a99-94e2-78004337f7e2-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:32 crc kubenswrapper[4681]: I1123 07:00:32.981143 4681 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d296ac2-d00b-4a99-94e2-78004337f7e2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.094132 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.217718 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.234802 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.240399 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:00:33 crc kubenswrapper[4681]: E1123 07:00:33.247383 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d296ac2-d00b-4a99-94e2-78004337f7e2" containerName="glance-log" Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.247415 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d296ac2-d00b-4a99-94e2-78004337f7e2" containerName="glance-log" Nov 23 07:00:33 crc kubenswrapper[4681]: E1123 07:00:33.247482 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d296ac2-d00b-4a99-94e2-78004337f7e2" containerName="glance-httpd" Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.247490 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d296ac2-d00b-4a99-94e2-78004337f7e2" containerName="glance-httpd" Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.247831 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d296ac2-d00b-4a99-94e2-78004337f7e2" containerName="glance-httpd" Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.247857 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d296ac2-d00b-4a99-94e2-78004337f7e2" containerName="glance-log" Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.249692 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.258286 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.258701 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.300365 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.300437 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.300472 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.300511 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.300560 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974-logs\") pod \"glance-default-internal-api-0\" (UID: \"cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.300588 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.300611 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.300632 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgk4d\" (UniqueName: \"kubernetes.io/projected/cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974-kube-api-access-dgk4d\") pod 
\"glance-default-internal-api-0\" (UID: \"cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.314296 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d296ac2-d00b-4a99-94e2-78004337f7e2" path="/var/lib/kubelet/pods/1d296ac2-d00b-4a99-94e2-78004337f7e2/volumes" Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.315122 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48ecd863-12ce-4eb3-ba76-eea730db3b2d" path="/var/lib/kubelet/pods/48ecd863-12ce-4eb3-ba76-eea730db3b2d/volumes" Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.315799 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.401884 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.402028 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974-logs\") pod \"glance-default-internal-api-0\" (UID: \"cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.402061 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.402095 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.402115 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgk4d\" (UniqueName: \"kubernetes.io/projected/cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974-kube-api-access-dgk4d\") pod \"glance-default-internal-api-0\" (UID: \"cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.402179 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.402293 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 
07:00:33.402331 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.403287 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.403526 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.404235 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974-logs\") pod \"glance-default-internal-api-0\" (UID: \"cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.420516 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.421237 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.421650 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.422986 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgk4d\" (UniqueName: \"kubernetes.io/projected/cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974-kube-api-access-dgk4d\") pod \"glance-default-internal-api-0\" (UID: \"cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.443379 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.488232 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.594679 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.887662 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b820fbbf-6154-401d-b84d-b02c0f9a5050","Type":"ContainerStarted","Data":"6b6073026ac148352ce57fea81b2db120e52f43655d6fb5e4c57d7e104595627"} Nov 23 07:00:33 crc kubenswrapper[4681]: I1123 07:00:33.895276 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2034dfe3-3cd9-4870-9005-bbcec7957ef8","Type":"ContainerStarted","Data":"c663dd08954c2bbbbcc5887d524ce391910566e33b9ac8ee49c512ab7235d784"} Nov 23 07:00:34 crc kubenswrapper[4681]: I1123 07:00:34.819119 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6ddc7dd66-g2jvq"] Nov 23 07:00:34 crc kubenswrapper[4681]: I1123 07:00:34.820957 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6ddc7dd66-g2jvq" Nov 23 07:00:34 crc kubenswrapper[4681]: I1123 07:00:34.830327 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/131a345a-32a8-424e-a2ad-efbe5e9395e4-config-data-custom\") pod \"heat-engine-6ddc7dd66-g2jvq\" (UID: \"131a345a-32a8-424e-a2ad-efbe5e9395e4\") " pod="openstack/heat-engine-6ddc7dd66-g2jvq" Nov 23 07:00:34 crc kubenswrapper[4681]: I1123 07:00:34.830402 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pthf\" (UniqueName: \"kubernetes.io/projected/131a345a-32a8-424e-a2ad-efbe5e9395e4-kube-api-access-6pthf\") pod \"heat-engine-6ddc7dd66-g2jvq\" (UID: \"131a345a-32a8-424e-a2ad-efbe5e9395e4\") " pod="openstack/heat-engine-6ddc7dd66-g2jvq" Nov 23 07:00:34 crc kubenswrapper[4681]: I1123 07:00:34.835077 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6ddc7dd66-g2jvq"] Nov 23 07:00:34 crc kubenswrapper[4681]: I1123 07:00:34.835530 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/131a345a-32a8-424e-a2ad-efbe5e9395e4-combined-ca-bundle\") pod \"heat-engine-6ddc7dd66-g2jvq\" (UID: \"131a345a-32a8-424e-a2ad-efbe5e9395e4\") " pod="openstack/heat-engine-6ddc7dd66-g2jvq" Nov 23 07:00:34 crc kubenswrapper[4681]: I1123 07:00:34.835586 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/131a345a-32a8-424e-a2ad-efbe5e9395e4-config-data\") pod \"heat-engine-6ddc7dd66-g2jvq\" (UID: \"131a345a-32a8-424e-a2ad-efbe5e9395e4\") " pod="openstack/heat-engine-6ddc7dd66-g2jvq" Nov 23 07:00:34 crc kubenswrapper[4681]: I1123 07:00:34.860258 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-76bdd6c54d-pgs2k"] Nov 23 07:00:34 crc kubenswrapper[4681]: I1123 07:00:34.863060 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-76bdd6c54d-pgs2k" Nov 23 07:00:34 crc kubenswrapper[4681]: I1123 07:00:34.884355 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-58c458694f-hz9v7"] Nov 23 07:00:34 crc kubenswrapper[4681]: I1123 07:00:34.885904 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-58c458694f-hz9v7" Nov 23 07:00:34 crc kubenswrapper[4681]: I1123 07:00:34.923218 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-58c458694f-hz9v7"] Nov 23 07:00:34 crc kubenswrapper[4681]: I1123 07:00:34.940539 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztxbs\" (UniqueName: \"kubernetes.io/projected/e13a2ce8-368c-4e82-a354-dcc661a48644-kube-api-access-ztxbs\") pod \"heat-api-76bdd6c54d-pgs2k\" (UID: \"e13a2ce8-368c-4e82-a354-dcc661a48644\") " pod="openstack/heat-api-76bdd6c54d-pgs2k" Nov 23 07:00:34 crc kubenswrapper[4681]: I1123 07:00:34.940645 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/131a345a-32a8-424e-a2ad-efbe5e9395e4-combined-ca-bundle\") pod \"heat-engine-6ddc7dd66-g2jvq\" (UID: \"131a345a-32a8-424e-a2ad-efbe5e9395e4\") " pod="openstack/heat-engine-6ddc7dd66-g2jvq" Nov 23 07:00:34 crc kubenswrapper[4681]: I1123 07:00:34.940709 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj7nd\" (UniqueName: \"kubernetes.io/projected/e37fed9f-f942-4518-857d-86c5b10f1bb5-kube-api-access-fj7nd\") pod \"heat-cfnapi-58c458694f-hz9v7\" (UID: \"e37fed9f-f942-4518-857d-86c5b10f1bb5\") " pod="openstack/heat-cfnapi-58c458694f-hz9v7" Nov 23 07:00:34 crc kubenswrapper[4681]: I1123 07:00:34.940737 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/131a345a-32a8-424e-a2ad-efbe5e9395e4-config-data\") pod \"heat-engine-6ddc7dd66-g2jvq\" (UID: \"131a345a-32a8-424e-a2ad-efbe5e9395e4\") " pod="openstack/heat-engine-6ddc7dd66-g2jvq" Nov 23 07:00:34 crc kubenswrapper[4681]: I1123 07:00:34.940806 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e13a2ce8-368c-4e82-a354-dcc661a48644-config-data\") pod \"heat-api-76bdd6c54d-pgs2k\" (UID: \"e13a2ce8-368c-4e82-a354-dcc661a48644\") " pod="openstack/heat-api-76bdd6c54d-pgs2k" Nov 23 07:00:34 crc kubenswrapper[4681]: I1123 07:00:34.940829 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e37fed9f-f942-4518-857d-86c5b10f1bb5-config-data\") pod \"heat-cfnapi-58c458694f-hz9v7\" (UID: \"e37fed9f-f942-4518-857d-86c5b10f1bb5\") " pod="openstack/heat-cfnapi-58c458694f-hz9v7" Nov 23 07:00:34 crc kubenswrapper[4681]: I1123 07:00:34.940863 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/131a345a-32a8-424e-a2ad-efbe5e9395e4-config-data-custom\") pod \"heat-engine-6ddc7dd66-g2jvq\" (UID: \"131a345a-32a8-424e-a2ad-efbe5e9395e4\") " pod="openstack/heat-engine-6ddc7dd66-g2jvq" Nov 23 07:00:34 crc kubenswrapper[4681]: I1123 07:00:34.940926 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/e13a2ce8-368c-4e82-a354-dcc661a48644-combined-ca-bundle\") pod \"heat-api-76bdd6c54d-pgs2k\" (UID: \"e13a2ce8-368c-4e82-a354-dcc661a48644\") " pod="openstack/heat-api-76bdd6c54d-pgs2k" Nov 23 07:00:34 crc kubenswrapper[4681]: I1123 07:00:34.940963 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e37fed9f-f942-4518-857d-86c5b10f1bb5-config-data-custom\") pod \"heat-cfnapi-58c458694f-hz9v7\" (UID: \"e37fed9f-f942-4518-857d-86c5b10f1bb5\") " pod="openstack/heat-cfnapi-58c458694f-hz9v7" Nov 23 07:00:34 crc kubenswrapper[4681]: I1123 07:00:34.941026 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pthf\" (UniqueName: \"kubernetes.io/projected/131a345a-32a8-424e-a2ad-efbe5e9395e4-kube-api-access-6pthf\") pod \"heat-engine-6ddc7dd66-g2jvq\" (UID: \"131a345a-32a8-424e-a2ad-efbe5e9395e4\") " pod="openstack/heat-engine-6ddc7dd66-g2jvq" Nov 23 07:00:34 crc kubenswrapper[4681]: I1123 07:00:34.941054 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e37fed9f-f942-4518-857d-86c5b10f1bb5-combined-ca-bundle\") pod \"heat-cfnapi-58c458694f-hz9v7\" (UID: \"e37fed9f-f942-4518-857d-86c5b10f1bb5\") " pod="openstack/heat-cfnapi-58c458694f-hz9v7" Nov 23 07:00:34 crc kubenswrapper[4681]: I1123 07:00:34.941099 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e13a2ce8-368c-4e82-a354-dcc661a48644-config-data-custom\") pod \"heat-api-76bdd6c54d-pgs2k\" (UID: \"e13a2ce8-368c-4e82-a354-dcc661a48644\") " pod="openstack/heat-api-76bdd6c54d-pgs2k" Nov 23 07:00:34 crc kubenswrapper[4681]: I1123 07:00:34.946913 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/131a345a-32a8-424e-a2ad-efbe5e9395e4-combined-ca-bundle\") pod \"heat-engine-6ddc7dd66-g2jvq\" (UID: \"131a345a-32a8-424e-a2ad-efbe5e9395e4\") " pod="openstack/heat-engine-6ddc7dd66-g2jvq" Nov 23 07:00:34 crc kubenswrapper[4681]: I1123 07:00:34.963107 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/131a345a-32a8-424e-a2ad-efbe5e9395e4-config-data-custom\") pod \"heat-engine-6ddc7dd66-g2jvq\" (UID: \"131a345a-32a8-424e-a2ad-efbe5e9395e4\") " pod="openstack/heat-engine-6ddc7dd66-g2jvq" Nov 23 07:00:34 crc kubenswrapper[4681]: I1123 07:00:34.973996 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pthf\" (UniqueName: \"kubernetes.io/projected/131a345a-32a8-424e-a2ad-efbe5e9395e4-kube-api-access-6pthf\") pod \"heat-engine-6ddc7dd66-g2jvq\" (UID: \"131a345a-32a8-424e-a2ad-efbe5e9395e4\") " pod="openstack/heat-engine-6ddc7dd66-g2jvq" Nov 23 07:00:34 crc kubenswrapper[4681]: I1123 07:00:34.984593 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/131a345a-32a8-424e-a2ad-efbe5e9395e4-config-data\") pod \"heat-engine-6ddc7dd66-g2jvq\" (UID: \"131a345a-32a8-424e-a2ad-efbe5e9395e4\") " pod="openstack/heat-engine-6ddc7dd66-g2jvq" Nov 23 07:00:35 crc kubenswrapper[4681]: I1123 07:00:35.012018 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-76bdd6c54d-pgs2k"] Nov 23 
07:00:35 crc kubenswrapper[4681]: I1123 07:00:35.043514 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e13a2ce8-368c-4e82-a354-dcc661a48644-config-data\") pod \"heat-api-76bdd6c54d-pgs2k\" (UID: \"e13a2ce8-368c-4e82-a354-dcc661a48644\") " pod="openstack/heat-api-76bdd6c54d-pgs2k" Nov 23 07:00:35 crc kubenswrapper[4681]: I1123 07:00:35.043556 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e37fed9f-f942-4518-857d-86c5b10f1bb5-config-data\") pod \"heat-cfnapi-58c458694f-hz9v7\" (UID: \"e37fed9f-f942-4518-857d-86c5b10f1bb5\") " pod="openstack/heat-cfnapi-58c458694f-hz9v7" Nov 23 07:00:35 crc kubenswrapper[4681]: I1123 07:00:35.043601 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e13a2ce8-368c-4e82-a354-dcc661a48644-combined-ca-bundle\") pod \"heat-api-76bdd6c54d-pgs2k\" (UID: \"e13a2ce8-368c-4e82-a354-dcc661a48644\") " pod="openstack/heat-api-76bdd6c54d-pgs2k" Nov 23 07:00:35 crc kubenswrapper[4681]: I1123 07:00:35.043624 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e37fed9f-f942-4518-857d-86c5b10f1bb5-config-data-custom\") pod \"heat-cfnapi-58c458694f-hz9v7\" (UID: \"e37fed9f-f942-4518-857d-86c5b10f1bb5\") " pod="openstack/heat-cfnapi-58c458694f-hz9v7" Nov 23 07:00:35 crc kubenswrapper[4681]: I1123 07:00:35.043659 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e37fed9f-f942-4518-857d-86c5b10f1bb5-combined-ca-bundle\") pod \"heat-cfnapi-58c458694f-hz9v7\" (UID: \"e37fed9f-f942-4518-857d-86c5b10f1bb5\") " pod="openstack/heat-cfnapi-58c458694f-hz9v7" Nov 23 07:00:35 crc kubenswrapper[4681]: I1123 07:00:35.043685 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e13a2ce8-368c-4e82-a354-dcc661a48644-config-data-custom\") pod \"heat-api-76bdd6c54d-pgs2k\" (UID: \"e13a2ce8-368c-4e82-a354-dcc661a48644\") " pod="openstack/heat-api-76bdd6c54d-pgs2k" Nov 23 07:00:35 crc kubenswrapper[4681]: I1123 07:00:35.043709 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztxbs\" (UniqueName: \"kubernetes.io/projected/e13a2ce8-368c-4e82-a354-dcc661a48644-kube-api-access-ztxbs\") pod \"heat-api-76bdd6c54d-pgs2k\" (UID: \"e13a2ce8-368c-4e82-a354-dcc661a48644\") " pod="openstack/heat-api-76bdd6c54d-pgs2k" Nov 23 07:00:35 crc kubenswrapper[4681]: I1123 07:00:35.043760 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj7nd\" (UniqueName: \"kubernetes.io/projected/e37fed9f-f942-4518-857d-86c5b10f1bb5-kube-api-access-fj7nd\") pod \"heat-cfnapi-58c458694f-hz9v7\" (UID: \"e37fed9f-f942-4518-857d-86c5b10f1bb5\") " pod="openstack/heat-cfnapi-58c458694f-hz9v7" Nov 23 07:00:35 crc kubenswrapper[4681]: I1123 07:00:35.067948 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e37fed9f-f942-4518-857d-86c5b10f1bb5-config-data-custom\") pod \"heat-cfnapi-58c458694f-hz9v7\" (UID: \"e37fed9f-f942-4518-857d-86c5b10f1bb5\") " pod="openstack/heat-cfnapi-58c458694f-hz9v7" Nov 23 07:00:35 crc kubenswrapper[4681]: I1123 
07:00:35.070383 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e13a2ce8-368c-4e82-a354-dcc661a48644-config-data\") pod \"heat-api-76bdd6c54d-pgs2k\" (UID: \"e13a2ce8-368c-4e82-a354-dcc661a48644\") " pod="openstack/heat-api-76bdd6c54d-pgs2k" Nov 23 07:00:35 crc kubenswrapper[4681]: I1123 07:00:35.077089 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e37fed9f-f942-4518-857d-86c5b10f1bb5-combined-ca-bundle\") pod \"heat-cfnapi-58c458694f-hz9v7\" (UID: \"e37fed9f-f942-4518-857d-86c5b10f1bb5\") " pod="openstack/heat-cfnapi-58c458694f-hz9v7" Nov 23 07:00:35 crc kubenswrapper[4681]: I1123 07:00:35.077825 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e13a2ce8-368c-4e82-a354-dcc661a48644-combined-ca-bundle\") pod \"heat-api-76bdd6c54d-pgs2k\" (UID: \"e13a2ce8-368c-4e82-a354-dcc661a48644\") " pod="openstack/heat-api-76bdd6c54d-pgs2k" Nov 23 07:00:35 crc kubenswrapper[4681]: I1123 07:00:35.108158 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj7nd\" (UniqueName: \"kubernetes.io/projected/e37fed9f-f942-4518-857d-86c5b10f1bb5-kube-api-access-fj7nd\") pod \"heat-cfnapi-58c458694f-hz9v7\" (UID: \"e37fed9f-f942-4518-857d-86c5b10f1bb5\") " pod="openstack/heat-cfnapi-58c458694f-hz9v7" Nov 23 07:00:35 crc kubenswrapper[4681]: I1123 07:00:35.108420 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztxbs\" (UniqueName: \"kubernetes.io/projected/e13a2ce8-368c-4e82-a354-dcc661a48644-kube-api-access-ztxbs\") pod \"heat-api-76bdd6c54d-pgs2k\" (UID: \"e13a2ce8-368c-4e82-a354-dcc661a48644\") " pod="openstack/heat-api-76bdd6c54d-pgs2k" Nov 23 07:00:35 crc kubenswrapper[4681]: I1123 07:00:35.110304 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e37fed9f-f942-4518-857d-86c5b10f1bb5-config-data\") pod \"heat-cfnapi-58c458694f-hz9v7\" (UID: \"e37fed9f-f942-4518-857d-86c5b10f1bb5\") " pod="openstack/heat-cfnapi-58c458694f-hz9v7" Nov 23 07:00:35 crc kubenswrapper[4681]: I1123 07:00:35.110339 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e13a2ce8-368c-4e82-a354-dcc661a48644-config-data-custom\") pod \"heat-api-76bdd6c54d-pgs2k\" (UID: \"e13a2ce8-368c-4e82-a354-dcc661a48644\") " pod="openstack/heat-api-76bdd6c54d-pgs2k" Nov 23 07:00:35 crc kubenswrapper[4681]: I1123 07:00:35.226635 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6ddc7dd66-g2jvq" Nov 23 07:00:35 crc kubenswrapper[4681]: I1123 07:00:35.268499 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:00:35 crc kubenswrapper[4681]: I1123 07:00:35.306509 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-76bdd6c54d-pgs2k" Nov 23 07:00:35 crc kubenswrapper[4681]: I1123 07:00:35.325682 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-58c458694f-hz9v7" Nov 23 07:00:35 crc kubenswrapper[4681]: I1123 07:00:35.809221 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6ddc7dd66-g2jvq"] Nov 23 07:00:36 crc kubenswrapper[4681]: I1123 07:00:36.028633 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-8569478495-vj5pz" event={"ID":"87f9cbe6-025e-4880-9c22-f3f0c8373284","Type":"ContainerStarted","Data":"03be870403114e263c2e29bfea67cc114f39a4fd1dca5a93ce919267c08be15f"} Nov 23 07:00:36 crc kubenswrapper[4681]: I1123 07:00:36.029803 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-8569478495-vj5pz" Nov 23 07:00:36 crc kubenswrapper[4681]: I1123 07:00:36.074033 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-8569478495-vj5pz" podStartSLOduration=4.704756346 podStartE2EDuration="9.073995015s" podCreationTimestamp="2025-11-23 07:00:27 +0000 UTC" firstStartedPulling="2025-11-23 07:00:30.21072064 +0000 UTC m=+967.280229876" lastFinishedPulling="2025-11-23 07:00:34.579959308 +0000 UTC m=+971.649468545" observedRunningTime="2025-11-23 07:00:36.053940016 +0000 UTC m=+973.123449253" watchObservedRunningTime="2025-11-23 07:00:36.073995015 +0000 UTC m=+973.143504251" Nov 23 07:00:36 crc kubenswrapper[4681]: I1123 07:00:36.080002 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b820fbbf-6154-401d-b84d-b02c0f9a5050","Type":"ContainerStarted","Data":"69ca2f7ad7dfb108677bd9d068dfec96c17c4cf8783bc217fbc5b0564133c42f"} Nov 23 07:00:36 crc kubenswrapper[4681]: I1123 07:00:36.089710 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6ddc7dd66-g2jvq" event={"ID":"131a345a-32a8-424e-a2ad-efbe5e9395e4","Type":"ContainerStarted","Data":"29dfa27e453ef4da0875e74240db8ec6b44ff054c79e01da0ceac28490d4d81d"} Nov 23 07:00:36 crc kubenswrapper[4681]: I1123 07:00:36.095044 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2034dfe3-3cd9-4870-9005-bbcec7957ef8","Type":"ContainerStarted","Data":"1340eeefd38b376ad08ebf009ced54116637bac7b3a23dab674361baa88367dc"} Nov 23 07:00:36 crc kubenswrapper[4681]: I1123 07:00:36.099277 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974","Type":"ContainerStarted","Data":"1a71228e30fb6aa2ec49242a4fe6ea4611faf67fabd05c4564319d6523f1f697"} Nov 23 07:00:36 crc kubenswrapper[4681]: I1123 07:00:36.116812 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-76bdd6c54d-pgs2k"] Nov 23 07:00:36 crc kubenswrapper[4681]: I1123 07:00:36.118367 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5569487b4-2rc76" event={"ID":"2475c700-0817-4d27-9e05-0b04cf845474","Type":"ContainerStarted","Data":"9cc46d1a8a577bfb83e78f34125faf6345deadc695e31ca0d8d557ebd79d1824"} Nov 23 07:00:36 crc kubenswrapper[4681]: I1123 07:00:36.119636 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5569487b4-2rc76" Nov 23 07:00:36 crc kubenswrapper[4681]: I1123 07:00:36.240243 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5569487b4-2rc76" podStartSLOduration=3.870691594 podStartE2EDuration="8.240223896s" podCreationTimestamp="2025-11-23 07:00:28 +0000 UTC" 
firstStartedPulling="2025-11-23 07:00:30.227709462 +0000 UTC m=+967.297218699" lastFinishedPulling="2025-11-23 07:00:34.597241764 +0000 UTC m=+971.666751001" observedRunningTime="2025-11-23 07:00:36.141189871 +0000 UTC m=+973.210699107" watchObservedRunningTime="2025-11-23 07:00:36.240223896 +0000 UTC m=+973.309733133" Nov 23 07:00:36 crc kubenswrapper[4681]: I1123 07:00:36.244649 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-58c458694f-hz9v7"] Nov 23 07:00:37 crc kubenswrapper[4681]: I1123 07:00:37.132719 4681 generic.go:334] "Generic (PLEG): container finished" podID="e13a2ce8-368c-4e82-a354-dcc661a48644" containerID="8919c142faa724ce2f51bbf505301dd6ff8997b2e04b788f5ac53d51d06fa083" exitCode=1 Nov 23 07:00:37 crc kubenswrapper[4681]: I1123 07:00:37.133023 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-76bdd6c54d-pgs2k" event={"ID":"e13a2ce8-368c-4e82-a354-dcc661a48644","Type":"ContainerDied","Data":"8919c142faa724ce2f51bbf505301dd6ff8997b2e04b788f5ac53d51d06fa083"} Nov 23 07:00:37 crc kubenswrapper[4681]: I1123 07:00:37.133054 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-76bdd6c54d-pgs2k" event={"ID":"e13a2ce8-368c-4e82-a354-dcc661a48644","Type":"ContainerStarted","Data":"3bf4db9702d0bf1f889db2fa3f302839df8fe34efe44264eb5d27bc2a1f3d279"} Nov 23 07:00:37 crc kubenswrapper[4681]: I1123 07:00:37.133425 4681 scope.go:117] "RemoveContainer" containerID="8919c142faa724ce2f51bbf505301dd6ff8997b2e04b788f5ac53d51d06fa083" Nov 23 07:00:37 crc kubenswrapper[4681]: I1123 07:00:37.135958 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b820fbbf-6154-401d-b84d-b02c0f9a5050","Type":"ContainerStarted","Data":"2d4dd8a7f19784571c7ae8e30497c24d599f77921f355d4dc8ff6df8058aac17"} Nov 23 07:00:37 crc kubenswrapper[4681]: I1123 07:00:37.137957 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6ddc7dd66-g2jvq" event={"ID":"131a345a-32a8-424e-a2ad-efbe5e9395e4","Type":"ContainerStarted","Data":"17bbcbf077f36a50ff30e10bffef7a7a25a8e0921877684b377c5ad18afcb397"} Nov 23 07:00:37 crc kubenswrapper[4681]: I1123 07:00:37.138352 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6ddc7dd66-g2jvq" Nov 23 07:00:37 crc kubenswrapper[4681]: I1123 07:00:37.146790 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2034dfe3-3cd9-4870-9005-bbcec7957ef8","Type":"ContainerStarted","Data":"565b3a9135beaf641f4c84bd13cbf06e2edd242619056ddcd83633b682a8c100"} Nov 23 07:00:37 crc kubenswrapper[4681]: I1123 07:00:37.167294 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974","Type":"ContainerStarted","Data":"322e3fa46553aa49810d4b86bd72a237d6473c61f3e51ea5b1cfdca52364ec0c"} Nov 23 07:00:37 crc kubenswrapper[4681]: I1123 07:00:37.201519 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6ddc7dd66-g2jvq" podStartSLOduration=3.201498293 podStartE2EDuration="3.201498293s" podCreationTimestamp="2025-11-23 07:00:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:00:37.196853176 +0000 UTC m=+974.266362413" watchObservedRunningTime="2025-11-23 07:00:37.201498293 +0000 UTC m=+974.271007529" Nov 23 07:00:37 crc 
kubenswrapper[4681]: I1123 07:00:37.213909 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-58c458694f-hz9v7" event={"ID":"e37fed9f-f942-4518-857d-86c5b10f1bb5","Type":"ContainerStarted","Data":"e3424734d5944eb9de4acbf60bbe54603a873ea22f8cbbc86234cc5e8c6cbf44"} Nov 23 07:00:37 crc kubenswrapper[4681]: I1123 07:00:37.213948 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-58c458694f-hz9v7" Nov 23 07:00:37 crc kubenswrapper[4681]: I1123 07:00:37.213959 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-58c458694f-hz9v7" event={"ID":"e37fed9f-f942-4518-857d-86c5b10f1bb5","Type":"ContainerStarted","Data":"e898a676a3726b10aba5f50b4256a88fe21b33885efa6d960ab110eb3806f46a"} Nov 23 07:00:37 crc kubenswrapper[4681]: I1123 07:00:37.241228 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.241212765 podStartE2EDuration="6.241212765s" podCreationTimestamp="2025-11-23 07:00:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:00:37.235426725 +0000 UTC m=+974.304935962" watchObservedRunningTime="2025-11-23 07:00:37.241212765 +0000 UTC m=+974.310722002" Nov 23 07:00:37 crc kubenswrapper[4681]: I1123 07:00:37.278624 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-58c458694f-hz9v7" podStartSLOduration=3.278599664 podStartE2EDuration="3.278599664s" podCreationTimestamp="2025-11-23 07:00:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:00:37.263343141 +0000 UTC m=+974.332852379" watchObservedRunningTime="2025-11-23 07:00:37.278599664 +0000 UTC m=+974.348108921" Nov 23 07:00:38 crc kubenswrapper[4681]: I1123 07:00:38.224419 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2034dfe3-3cd9-4870-9005-bbcec7957ef8","Type":"ContainerStarted","Data":"ba97c75280d3672888ac3fdd872aabf7e49aa4c96d6a53bd41709b2d0b067f2f"} Nov 23 07:00:38 crc kubenswrapper[4681]: I1123 07:00:38.224964 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 23 07:00:38 crc kubenswrapper[4681]: I1123 07:00:38.227169 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cfdcb1a4-9fb6-4fb8-bbb5-a76f07e00974","Type":"ContainerStarted","Data":"247dc0dc1718b3a69fc0131545064cd29b5f644b700198ff669a8d968f691d62"} Nov 23 07:00:38 crc kubenswrapper[4681]: I1123 07:00:38.229143 4681 generic.go:334] "Generic (PLEG): container finished" podID="e37fed9f-f942-4518-857d-86c5b10f1bb5" containerID="e3424734d5944eb9de4acbf60bbe54603a873ea22f8cbbc86234cc5e8c6cbf44" exitCode=1 Nov 23 07:00:38 crc kubenswrapper[4681]: I1123 07:00:38.229182 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-58c458694f-hz9v7" event={"ID":"e37fed9f-f942-4518-857d-86c5b10f1bb5","Type":"ContainerDied","Data":"e3424734d5944eb9de4acbf60bbe54603a873ea22f8cbbc86234cc5e8c6cbf44"} Nov 23 07:00:38 crc kubenswrapper[4681]: I1123 07:00:38.229569 4681 scope.go:117] "RemoveContainer" containerID="e3424734d5944eb9de4acbf60bbe54603a873ea22f8cbbc86234cc5e8c6cbf44" Nov 23 07:00:38 crc kubenswrapper[4681]: I1123 07:00:38.231933 4681 generic.go:334] "Generic (PLEG): 
container finished" podID="e13a2ce8-368c-4e82-a354-dcc661a48644" containerID="c34ec97b25863b10fdbc49880f1c3a211b4d0f568ac3d8811ec1a4e5c6db8faf" exitCode=1 Nov 23 07:00:38 crc kubenswrapper[4681]: I1123 07:00:38.232068 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-76bdd6c54d-pgs2k" event={"ID":"e13a2ce8-368c-4e82-a354-dcc661a48644","Type":"ContainerDied","Data":"c34ec97b25863b10fdbc49880f1c3a211b4d0f568ac3d8811ec1a4e5c6db8faf"} Nov 23 07:00:38 crc kubenswrapper[4681]: I1123 07:00:38.232115 4681 scope.go:117] "RemoveContainer" containerID="8919c142faa724ce2f51bbf505301dd6ff8997b2e04b788f5ac53d51d06fa083" Nov 23 07:00:38 crc kubenswrapper[4681]: I1123 07:00:38.232905 4681 scope.go:117] "RemoveContainer" containerID="c34ec97b25863b10fdbc49880f1c3a211b4d0f568ac3d8811ec1a4e5c6db8faf" Nov 23 07:00:38 crc kubenswrapper[4681]: E1123 07:00:38.233193 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-76bdd6c54d-pgs2k_openstack(e13a2ce8-368c-4e82-a354-dcc661a48644)\"" pod="openstack/heat-api-76bdd6c54d-pgs2k" podUID="e13a2ce8-368c-4e82-a354-dcc661a48644" Nov 23 07:00:38 crc kubenswrapper[4681]: I1123 07:00:38.266447 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.689908731 podStartE2EDuration="8.266433755s" podCreationTimestamp="2025-11-23 07:00:30 +0000 UTC" firstStartedPulling="2025-11-23 07:00:31.981667677 +0000 UTC m=+969.051176914" lastFinishedPulling="2025-11-23 07:00:37.558192701 +0000 UTC m=+974.627701938" observedRunningTime="2025-11-23 07:00:38.258933496 +0000 UTC m=+975.328442722" watchObservedRunningTime="2025-11-23 07:00:38.266433755 +0000 UTC m=+975.335942992" Nov 23 07:00:38 crc kubenswrapper[4681]: I1123 07:00:38.300044 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.300027096 podStartE2EDuration="5.300027096s" podCreationTimestamp="2025-11-23 07:00:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:00:38.293267402 +0000 UTC m=+975.362776639" watchObservedRunningTime="2025-11-23 07:00:38.300027096 +0000 UTC m=+975.369536334" Nov 23 07:00:38 crc kubenswrapper[4681]: I1123 07:00:38.513747 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cfb689747-vscpn" Nov 23 07:00:38 crc kubenswrapper[4681]: I1123 07:00:38.589994 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64cc7f6975-jn6mr"] Nov 23 07:00:38 crc kubenswrapper[4681]: I1123 07:00:38.590487 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-64cc7f6975-jn6mr" podUID="529f52d4-35e7-4121-899e-0e94d628f72c" containerName="dnsmasq-dns" containerID="cri-o://71e11d33c28ac955113b26e7578c58e25494627cc10819703a1770c3464f7273" gracePeriod=10 Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.104151 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fcdb4576d-g8stp" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.104198 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-fcdb4576d-g8stp" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.119105 4681 prober.go:107] 
"Probe failed" probeType="Startup" pod="openstack/horizon-fcdb4576d-g8stp" podUID="bdfa433c-2b77-4373-877f-5c92a2b39fb8" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.156:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.156:8443: connect: connection refused" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.257239 4681 generic.go:334] "Generic (PLEG): container finished" podID="529f52d4-35e7-4121-899e-0e94d628f72c" containerID="71e11d33c28ac955113b26e7578c58e25494627cc10819703a1770c3464f7273" exitCode=0 Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.261900 4681 scope.go:117] "RemoveContainer" containerID="c34ec97b25863b10fdbc49880f1c3a211b4d0f568ac3d8811ec1a4e5c6db8faf" Nov 23 07:00:39 crc kubenswrapper[4681]: E1123 07:00:39.262110 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-76bdd6c54d-pgs2k_openstack(e13a2ce8-368c-4e82-a354-dcc661a48644)\"" pod="openstack/heat-api-76bdd6c54d-pgs2k" podUID="e13a2ce8-368c-4e82-a354-dcc661a48644" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.279667 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64cc7f6975-jn6mr" event={"ID":"529f52d4-35e7-4121-899e-0e94d628f72c","Type":"ContainerDied","Data":"71e11d33c28ac955113b26e7578c58e25494627cc10819703a1770c3464f7273"} Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.322952 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64cc7f6975-jn6mr" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.384110 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/529f52d4-35e7-4121-899e-0e94d628f72c-dns-swift-storage-0\") pod \"529f52d4-35e7-4121-899e-0e94d628f72c\" (UID: \"529f52d4-35e7-4121-899e-0e94d628f72c\") " Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.384168 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/529f52d4-35e7-4121-899e-0e94d628f72c-config\") pod \"529f52d4-35e7-4121-899e-0e94d628f72c\" (UID: \"529f52d4-35e7-4121-899e-0e94d628f72c\") " Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.384333 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/529f52d4-35e7-4121-899e-0e94d628f72c-dns-svc\") pod \"529f52d4-35e7-4121-899e-0e94d628f72c\" (UID: \"529f52d4-35e7-4121-899e-0e94d628f72c\") " Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.384475 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/529f52d4-35e7-4121-899e-0e94d628f72c-ovsdbserver-sb\") pod \"529f52d4-35e7-4121-899e-0e94d628f72c\" (UID: \"529f52d4-35e7-4121-899e-0e94d628f72c\") " Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.384520 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfdsf\" (UniqueName: \"kubernetes.io/projected/529f52d4-35e7-4121-899e-0e94d628f72c-kube-api-access-rfdsf\") pod \"529f52d4-35e7-4121-899e-0e94d628f72c\" (UID: \"529f52d4-35e7-4121-899e-0e94d628f72c\") " Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.384561 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/529f52d4-35e7-4121-899e-0e94d628f72c-ovsdbserver-nb\") pod \"529f52d4-35e7-4121-899e-0e94d628f72c\" (UID: \"529f52d4-35e7-4121-899e-0e94d628f72c\") " Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.448669 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/529f52d4-35e7-4121-899e-0e94d628f72c-kube-api-access-rfdsf" (OuterVolumeSpecName: "kube-api-access-rfdsf") pod "529f52d4-35e7-4121-899e-0e94d628f72c" (UID: "529f52d4-35e7-4121-899e-0e94d628f72c"). InnerVolumeSpecName "kube-api-access-rfdsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.450131 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5569487b4-2rc76"] Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.450366 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-5569487b4-2rc76" podUID="2475c700-0817-4d27-9e05-0b04cf845474" containerName="heat-api" containerID="cri-o://9cc46d1a8a577bfb83e78f34125faf6345deadc695e31ca0d8d557ebd79d1824" gracePeriod=60 Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.478453 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-8569478495-vj5pz"] Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.478768 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-8569478495-vj5pz" podUID="87f9cbe6-025e-4880-9c22-f3f0c8373284" containerName="heat-cfnapi" containerID="cri-o://03be870403114e263c2e29bfea67cc114f39a4fd1dca5a93ce919267c08be15f" gracePeriod=60 Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.487272 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfdsf\" (UniqueName: \"kubernetes.io/projected/529f52d4-35e7-4121-899e-0e94d628f72c-kube-api-access-rfdsf\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.528802 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/529f52d4-35e7-4121-899e-0e94d628f72c-config" (OuterVolumeSpecName: "config") pod "529f52d4-35e7-4121-899e-0e94d628f72c" (UID: "529f52d4-35e7-4121-899e-0e94d628f72c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.533530 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-854ccc9f67-s9fwn"] Nov 23 07:00:39 crc kubenswrapper[4681]: E1123 07:00:39.534057 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="529f52d4-35e7-4121-899e-0e94d628f72c" containerName="dnsmasq-dns" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.534077 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="529f52d4-35e7-4121-899e-0e94d628f72c" containerName="dnsmasq-dns" Nov 23 07:00:39 crc kubenswrapper[4681]: E1123 07:00:39.534105 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="529f52d4-35e7-4121-899e-0e94d628f72c" containerName="init" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.534111 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="529f52d4-35e7-4121-899e-0e94d628f72c" containerName="init" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.534304 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="529f52d4-35e7-4121-899e-0e94d628f72c" containerName="dnsmasq-dns" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.534995 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-854ccc9f67-s9fwn" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.539645 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.539880 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.578529 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-79cf84dc47-t6rxl"] Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.588746 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-79cf84dc47-t6rxl" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.591355 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee50f839-23d6-474a-be91-f16fb836a2af-config-data\") pod \"heat-cfnapi-854ccc9f67-s9fwn\" (UID: \"ee50f839-23d6-474a-be91-f16fb836a2af\") " pod="openstack/heat-cfnapi-854ccc9f67-s9fwn" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.591405 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee50f839-23d6-474a-be91-f16fb836a2af-config-data-custom\") pod \"heat-cfnapi-854ccc9f67-s9fwn\" (UID: \"ee50f839-23d6-474a-be91-f16fb836a2af\") " pod="openstack/heat-cfnapi-854ccc9f67-s9fwn" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.591432 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee50f839-23d6-474a-be91-f16fb836a2af-public-tls-certs\") pod \"heat-cfnapi-854ccc9f67-s9fwn\" (UID: \"ee50f839-23d6-474a-be91-f16fb836a2af\") " pod="openstack/heat-cfnapi-854ccc9f67-s9fwn" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.591522 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee50f839-23d6-474a-be91-f16fb836a2af-internal-tls-certs\") pod \"heat-cfnapi-854ccc9f67-s9fwn\" (UID: \"ee50f839-23d6-474a-be91-f16fb836a2af\") " pod="openstack/heat-cfnapi-854ccc9f67-s9fwn" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.591545 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn8fk\" (UniqueName: \"kubernetes.io/projected/ee50f839-23d6-474a-be91-f16fb836a2af-kube-api-access-gn8fk\") pod \"heat-cfnapi-854ccc9f67-s9fwn\" (UID: \"ee50f839-23d6-474a-be91-f16fb836a2af\") " pod="openstack/heat-cfnapi-854ccc9f67-s9fwn" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.591609 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee50f839-23d6-474a-be91-f16fb836a2af-combined-ca-bundle\") pod \"heat-cfnapi-854ccc9f67-s9fwn\" (UID: \"ee50f839-23d6-474a-be91-f16fb836a2af\") " pod="openstack/heat-cfnapi-854ccc9f67-s9fwn" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.591673 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/529f52d4-35e7-4121-899e-0e94d628f72c-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.593375 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.593425 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/529f52d4-35e7-4121-899e-0e94d628f72c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "529f52d4-35e7-4121-899e-0e94d628f72c" (UID: "529f52d4-35e7-4121-899e-0e94d628f72c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.593579 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.610888 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-79cf84dc47-t6rxl"] Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.612443 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/529f52d4-35e7-4121-899e-0e94d628f72c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "529f52d4-35e7-4121-899e-0e94d628f72c" (UID: "529f52d4-35e7-4121-899e-0e94d628f72c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.622887 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/529f52d4-35e7-4121-899e-0e94d628f72c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "529f52d4-35e7-4121-899e-0e94d628f72c" (UID: "529f52d4-35e7-4121-899e-0e94d628f72c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.641242 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/529f52d4-35e7-4121-899e-0e94d628f72c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "529f52d4-35e7-4121-899e-0e94d628f72c" (UID: "529f52d4-35e7-4121-899e-0e94d628f72c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.641398 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-854ccc9f67-s9fwn"] Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.703929 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c24b6c5-af87-4308-ab4a-d79b901a69b1-combined-ca-bundle\") pod \"heat-api-79cf84dc47-t6rxl\" (UID: \"8c24b6c5-af87-4308-ab4a-d79b901a69b1\") " pod="openstack/heat-api-79cf84dc47-t6rxl" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.704152 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee50f839-23d6-474a-be91-f16fb836a2af-internal-tls-certs\") pod \"heat-cfnapi-854ccc9f67-s9fwn\" (UID: \"ee50f839-23d6-474a-be91-f16fb836a2af\") " pod="openstack/heat-cfnapi-854ccc9f67-s9fwn" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.704190 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn8fk\" (UniqueName: \"kubernetes.io/projected/ee50f839-23d6-474a-be91-f16fb836a2af-kube-api-access-gn8fk\") pod \"heat-cfnapi-854ccc9f67-s9fwn\" (UID: \"ee50f839-23d6-474a-be91-f16fb836a2af\") " pod="openstack/heat-cfnapi-854ccc9f67-s9fwn" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.704223 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee50f839-23d6-474a-be91-f16fb836a2af-combined-ca-bundle\") pod \"heat-cfnapi-854ccc9f67-s9fwn\" (UID: \"ee50f839-23d6-474a-be91-f16fb836a2af\") " pod="openstack/heat-cfnapi-854ccc9f67-s9fwn" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.704261 
4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c24b6c5-af87-4308-ab4a-d79b901a69b1-config-data-custom\") pod \"heat-api-79cf84dc47-t6rxl\" (UID: \"8c24b6c5-af87-4308-ab4a-d79b901a69b1\") " pod="openstack/heat-api-79cf84dc47-t6rxl" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.704290 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c24b6c5-af87-4308-ab4a-d79b901a69b1-config-data\") pod \"heat-api-79cf84dc47-t6rxl\" (UID: \"8c24b6c5-af87-4308-ab4a-d79b901a69b1\") " pod="openstack/heat-api-79cf84dc47-t6rxl" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.704349 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c24b6c5-af87-4308-ab4a-d79b901a69b1-internal-tls-certs\") pod \"heat-api-79cf84dc47-t6rxl\" (UID: \"8c24b6c5-af87-4308-ab4a-d79b901a69b1\") " pod="openstack/heat-api-79cf84dc47-t6rxl" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.704402 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c24b6c5-af87-4308-ab4a-d79b901a69b1-public-tls-certs\") pod \"heat-api-79cf84dc47-t6rxl\" (UID: \"8c24b6c5-af87-4308-ab4a-d79b901a69b1\") " pod="openstack/heat-api-79cf84dc47-t6rxl" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.704442 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47ms8\" (UniqueName: \"kubernetes.io/projected/8c24b6c5-af87-4308-ab4a-d79b901a69b1-kube-api-access-47ms8\") pod \"heat-api-79cf84dc47-t6rxl\" (UID: \"8c24b6c5-af87-4308-ab4a-d79b901a69b1\") " pod="openstack/heat-api-79cf84dc47-t6rxl" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.704544 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee50f839-23d6-474a-be91-f16fb836a2af-config-data\") pod \"heat-cfnapi-854ccc9f67-s9fwn\" (UID: \"ee50f839-23d6-474a-be91-f16fb836a2af\") " pod="openstack/heat-cfnapi-854ccc9f67-s9fwn" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.705480 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee50f839-23d6-474a-be91-f16fb836a2af-config-data-custom\") pod \"heat-cfnapi-854ccc9f67-s9fwn\" (UID: \"ee50f839-23d6-474a-be91-f16fb836a2af\") " pod="openstack/heat-cfnapi-854ccc9f67-s9fwn" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.705533 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee50f839-23d6-474a-be91-f16fb836a2af-public-tls-certs\") pod \"heat-cfnapi-854ccc9f67-s9fwn\" (UID: \"ee50f839-23d6-474a-be91-f16fb836a2af\") " pod="openstack/heat-cfnapi-854ccc9f67-s9fwn" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.724984 4681 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/529f52d4-35e7-4121-899e-0e94d628f72c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.725007 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/529f52d4-35e7-4121-899e-0e94d628f72c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.725023 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/529f52d4-35e7-4121-899e-0e94d628f72c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.725035 4681 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/529f52d4-35e7-4121-899e-0e94d628f72c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.732622 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee50f839-23d6-474a-be91-f16fb836a2af-config-data-custom\") pod \"heat-cfnapi-854ccc9f67-s9fwn\" (UID: \"ee50f839-23d6-474a-be91-f16fb836a2af\") " pod="openstack/heat-cfnapi-854ccc9f67-s9fwn" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.736388 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee50f839-23d6-474a-be91-f16fb836a2af-internal-tls-certs\") pod \"heat-cfnapi-854ccc9f67-s9fwn\" (UID: \"ee50f839-23d6-474a-be91-f16fb836a2af\") " pod="openstack/heat-cfnapi-854ccc9f67-s9fwn" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.736483 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee50f839-23d6-474a-be91-f16fb836a2af-config-data\") pod \"heat-cfnapi-854ccc9f67-s9fwn\" (UID: \"ee50f839-23d6-474a-be91-f16fb836a2af\") " pod="openstack/heat-cfnapi-854ccc9f67-s9fwn" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.737823 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee50f839-23d6-474a-be91-f16fb836a2af-combined-ca-bundle\") pod \"heat-cfnapi-854ccc9f67-s9fwn\" (UID: \"ee50f839-23d6-474a-be91-f16fb836a2af\") " pod="openstack/heat-cfnapi-854ccc9f67-s9fwn" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.738790 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee50f839-23d6-474a-be91-f16fb836a2af-public-tls-certs\") pod \"heat-cfnapi-854ccc9f67-s9fwn\" (UID: \"ee50f839-23d6-474a-be91-f16fb836a2af\") " pod="openstack/heat-cfnapi-854ccc9f67-s9fwn" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.759657 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn8fk\" (UniqueName: \"kubernetes.io/projected/ee50f839-23d6-474a-be91-f16fb836a2af-kube-api-access-gn8fk\") pod \"heat-cfnapi-854ccc9f67-s9fwn\" (UID: \"ee50f839-23d6-474a-be91-f16fb836a2af\") " pod="openstack/heat-cfnapi-854ccc9f67-s9fwn" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.827299 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c24b6c5-af87-4308-ab4a-d79b901a69b1-public-tls-certs\") pod \"heat-api-79cf84dc47-t6rxl\" (UID: \"8c24b6c5-af87-4308-ab4a-d79b901a69b1\") " pod="openstack/heat-api-79cf84dc47-t6rxl" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.827360 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47ms8\" (UniqueName: 
\"kubernetes.io/projected/8c24b6c5-af87-4308-ab4a-d79b901a69b1-kube-api-access-47ms8\") pod \"heat-api-79cf84dc47-t6rxl\" (UID: \"8c24b6c5-af87-4308-ab4a-d79b901a69b1\") " pod="openstack/heat-api-79cf84dc47-t6rxl" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.827484 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c24b6c5-af87-4308-ab4a-d79b901a69b1-combined-ca-bundle\") pod \"heat-api-79cf84dc47-t6rxl\" (UID: \"8c24b6c5-af87-4308-ab4a-d79b901a69b1\") " pod="openstack/heat-api-79cf84dc47-t6rxl" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.827575 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c24b6c5-af87-4308-ab4a-d79b901a69b1-config-data-custom\") pod \"heat-api-79cf84dc47-t6rxl\" (UID: \"8c24b6c5-af87-4308-ab4a-d79b901a69b1\") " pod="openstack/heat-api-79cf84dc47-t6rxl" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.827608 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c24b6c5-af87-4308-ab4a-d79b901a69b1-config-data\") pod \"heat-api-79cf84dc47-t6rxl\" (UID: \"8c24b6c5-af87-4308-ab4a-d79b901a69b1\") " pod="openstack/heat-api-79cf84dc47-t6rxl" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.827641 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c24b6c5-af87-4308-ab4a-d79b901a69b1-internal-tls-certs\") pod \"heat-api-79cf84dc47-t6rxl\" (UID: \"8c24b6c5-af87-4308-ab4a-d79b901a69b1\") " pod="openstack/heat-api-79cf84dc47-t6rxl" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.832591 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c24b6c5-af87-4308-ab4a-d79b901a69b1-combined-ca-bundle\") pod \"heat-api-79cf84dc47-t6rxl\" (UID: \"8c24b6c5-af87-4308-ab4a-d79b901a69b1\") " pod="openstack/heat-api-79cf84dc47-t6rxl" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.832982 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c24b6c5-af87-4308-ab4a-d79b901a69b1-config-data-custom\") pod \"heat-api-79cf84dc47-t6rxl\" (UID: \"8c24b6c5-af87-4308-ab4a-d79b901a69b1\") " pod="openstack/heat-api-79cf84dc47-t6rxl" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.836170 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c24b6c5-af87-4308-ab4a-d79b901a69b1-public-tls-certs\") pod \"heat-api-79cf84dc47-t6rxl\" (UID: \"8c24b6c5-af87-4308-ab4a-d79b901a69b1\") " pod="openstack/heat-api-79cf84dc47-t6rxl" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.836684 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c24b6c5-af87-4308-ab4a-d79b901a69b1-config-data\") pod \"heat-api-79cf84dc47-t6rxl\" (UID: \"8c24b6c5-af87-4308-ab4a-d79b901a69b1\") " pod="openstack/heat-api-79cf84dc47-t6rxl" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.836820 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c24b6c5-af87-4308-ab4a-d79b901a69b1-internal-tls-certs\") pod \"heat-api-79cf84dc47-t6rxl\" (UID: 
\"8c24b6c5-af87-4308-ab4a-d79b901a69b1\") " pod="openstack/heat-api-79cf84dc47-t6rxl" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.847054 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47ms8\" (UniqueName: \"kubernetes.io/projected/8c24b6c5-af87-4308-ab4a-d79b901a69b1-kube-api-access-47ms8\") pod \"heat-api-79cf84dc47-t6rxl\" (UID: \"8c24b6c5-af87-4308-ab4a-d79b901a69b1\") " pod="openstack/heat-api-79cf84dc47-t6rxl" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.880227 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-854ccc9f67-s9fwn" Nov 23 07:00:39 crc kubenswrapper[4681]: I1123 07:00:39.913917 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-79cf84dc47-t6rxl" Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.239162 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-5569487b4-2rc76" podUID="2475c700-0817-4d27-9e05-0b04cf845474" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.181:8004/healthcheck\": read tcp 10.217.0.2:52118->10.217.0.181:8004: read: connection reset by peer" Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.259976 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-8569478495-vj5pz" Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.293980 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64cc7f6975-jn6mr" Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.293450 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64cc7f6975-jn6mr" event={"ID":"529f52d4-35e7-4121-899e-0e94d628f72c","Type":"ContainerDied","Data":"8dd6d6bdad85e78075df5cf7f32b878bc7c52aa93c714e6f50a549567c918a99"} Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.297689 4681 scope.go:117] "RemoveContainer" containerID="71e11d33c28ac955113b26e7578c58e25494627cc10819703a1770c3464f7273" Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.308955 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-76bdd6c54d-pgs2k" Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.309013 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-76bdd6c54d-pgs2k" Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.327878 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-58c458694f-hz9v7" Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.329721 4681 generic.go:334] "Generic (PLEG): container finished" podID="87f9cbe6-025e-4880-9c22-f3f0c8373284" containerID="03be870403114e263c2e29bfea67cc114f39a4fd1dca5a93ce919267c08be15f" exitCode=0 Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.329798 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-8569478495-vj5pz" event={"ID":"87f9cbe6-025e-4880-9c22-f3f0c8373284","Type":"ContainerDied","Data":"03be870403114e263c2e29bfea67cc114f39a4fd1dca5a93ce919267c08be15f"} Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.329830 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-8569478495-vj5pz" event={"ID":"87f9cbe6-025e-4880-9c22-f3f0c8373284","Type":"ContainerDied","Data":"244986082db3e8e83dd7652c715cb1177edc81b07a0ef0d76e2101a9128e78f6"} Nov 23 07:00:40 crc 
kubenswrapper[4681]: I1123 07:00:40.329863 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-8569478495-vj5pz" Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.341048 4681 generic.go:334] "Generic (PLEG): container finished" podID="e37fed9f-f942-4518-857d-86c5b10f1bb5" containerID="86749a8ef77b13356a38742661658ba86ea4df7ae380034de256760f04bbf0d6" exitCode=1 Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.341334 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-58c458694f-hz9v7" event={"ID":"e37fed9f-f942-4518-857d-86c5b10f1bb5","Type":"ContainerDied","Data":"86749a8ef77b13356a38742661658ba86ea4df7ae380034de256760f04bbf0d6"} Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.346659 4681 scope.go:117] "RemoveContainer" containerID="86749a8ef77b13356a38742661658ba86ea4df7ae380034de256760f04bbf0d6" Nov 23 07:00:40 crc kubenswrapper[4681]: E1123 07:00:40.346899 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-58c458694f-hz9v7_openstack(e37fed9f-f942-4518-857d-86c5b10f1bb5)\"" pod="openstack/heat-cfnapi-58c458694f-hz9v7" podUID="e37fed9f-f942-4518-857d-86c5b10f1bb5" Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.349986 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87f9cbe6-025e-4880-9c22-f3f0c8373284-config-data-custom\") pod \"87f9cbe6-025e-4880-9c22-f3f0c8373284\" (UID: \"87f9cbe6-025e-4880-9c22-f3f0c8373284\") " Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.350119 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ct66\" (UniqueName: \"kubernetes.io/projected/87f9cbe6-025e-4880-9c22-f3f0c8373284-kube-api-access-7ct66\") pod \"87f9cbe6-025e-4880-9c22-f3f0c8373284\" (UID: \"87f9cbe6-025e-4880-9c22-f3f0c8373284\") " Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.350166 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87f9cbe6-025e-4880-9c22-f3f0c8373284-combined-ca-bundle\") pod \"87f9cbe6-025e-4880-9c22-f3f0c8373284\" (UID: \"87f9cbe6-025e-4880-9c22-f3f0c8373284\") " Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.350190 4681 scope.go:117] "RemoveContainer" containerID="c34ec97b25863b10fdbc49880f1c3a211b4d0f568ac3d8811ec1a4e5c6db8faf" Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.350225 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87f9cbe6-025e-4880-9c22-f3f0c8373284-config-data\") pod \"87f9cbe6-025e-4880-9c22-f3f0c8373284\" (UID: \"87f9cbe6-025e-4880-9c22-f3f0c8373284\") " Nov 23 07:00:40 crc kubenswrapper[4681]: E1123 07:00:40.350363 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-76bdd6c54d-pgs2k_openstack(e13a2ce8-368c-4e82-a354-dcc661a48644)\"" pod="openstack/heat-api-76bdd6c54d-pgs2k" podUID="e13a2ce8-368c-4e82-a354-dcc661a48644" Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.356741 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/87f9cbe6-025e-4880-9c22-f3f0c8373284-kube-api-access-7ct66" (OuterVolumeSpecName: "kube-api-access-7ct66") pod "87f9cbe6-025e-4880-9c22-f3f0c8373284" (UID: "87f9cbe6-025e-4880-9c22-f3f0c8373284"). InnerVolumeSpecName "kube-api-access-7ct66". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.360901 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87f9cbe6-025e-4880-9c22-f3f0c8373284-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "87f9cbe6-025e-4880-9c22-f3f0c8373284" (UID: "87f9cbe6-025e-4880-9c22-f3f0c8373284"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.373600 4681 scope.go:117] "RemoveContainer" containerID="b93055c7a819e65f485bc23a27ee6221af485e31686dca1307fc79f19046ed48" Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.377744 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64cc7f6975-jn6mr"] Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.411145 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-64cc7f6975-jn6mr"] Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.433956 4681 scope.go:117] "RemoveContainer" containerID="03be870403114e263c2e29bfea67cc114f39a4fd1dca5a93ce919267c08be15f" Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.434863 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87f9cbe6-025e-4880-9c22-f3f0c8373284-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87f9cbe6-025e-4880-9c22-f3f0c8373284" (UID: "87f9cbe6-025e-4880-9c22-f3f0c8373284"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.438260 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87f9cbe6-025e-4880-9c22-f3f0c8373284-config-data" (OuterVolumeSpecName: "config-data") pod "87f9cbe6-025e-4880-9c22-f3f0c8373284" (UID: "87f9cbe6-025e-4880-9c22-f3f0c8373284"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.455106 4681 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87f9cbe6-025e-4880-9c22-f3f0c8373284-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.455199 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ct66\" (UniqueName: \"kubernetes.io/projected/87f9cbe6-025e-4880-9c22-f3f0c8373284-kube-api-access-7ct66\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.455262 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87f9cbe6-025e-4880-9c22-f3f0c8373284-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.455312 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87f9cbe6-025e-4880-9c22-f3f0c8373284-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.483123 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-79cf84dc47-t6rxl"] Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.513697 4681 scope.go:117] "RemoveContainer" containerID="03be870403114e263c2e29bfea67cc114f39a4fd1dca5a93ce919267c08be15f" Nov 23 07:00:40 crc kubenswrapper[4681]: E1123 07:00:40.514373 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03be870403114e263c2e29bfea67cc114f39a4fd1dca5a93ce919267c08be15f\": container with ID starting with 03be870403114e263c2e29bfea67cc114f39a4fd1dca5a93ce919267c08be15f not found: ID does not exist" containerID="03be870403114e263c2e29bfea67cc114f39a4fd1dca5a93ce919267c08be15f" Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.514416 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03be870403114e263c2e29bfea67cc114f39a4fd1dca5a93ce919267c08be15f"} err="failed to get container status \"03be870403114e263c2e29bfea67cc114f39a4fd1dca5a93ce919267c08be15f\": rpc error: code = NotFound desc = could not find container \"03be870403114e263c2e29bfea67cc114f39a4fd1dca5a93ce919267c08be15f\": container with ID starting with 03be870403114e263c2e29bfea67cc114f39a4fd1dca5a93ce919267c08be15f not found: ID does not exist" Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.514445 4681 scope.go:117] "RemoveContainer" containerID="e3424734d5944eb9de4acbf60bbe54603a873ea22f8cbbc86234cc5e8c6cbf44" Nov 23 07:00:40 crc kubenswrapper[4681]: W1123 07:00:40.524950 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c24b6c5_af87_4308_ab4a_d79b901a69b1.slice/crio-d0862f713716ead98ca310f81d33874c538f218e770050bbbfdbb39ba13262c1 WatchSource:0}: Error finding container d0862f713716ead98ca310f81d33874c538f218e770050bbbfdbb39ba13262c1: Status 404 returned error can't find the container with id d0862f713716ead98ca310f81d33874c538f218e770050bbbfdbb39ba13262c1 Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.614472 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-854ccc9f67-s9fwn"] Nov 23 07:00:40 crc kubenswrapper[4681]: W1123 07:00:40.629991 4681 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee50f839_23d6_474a_be91_f16fb836a2af.slice/crio-7505a9fb4f117ed7e0fa61a6a5c684fa6cdb835e27685bba9ac66fa00bb5d2b7 WatchSource:0}: Error finding container 7505a9fb4f117ed7e0fa61a6a5c684fa6cdb835e27685bba9ac66fa00bb5d2b7: Status 404 returned error can't find the container with id 7505a9fb4f117ed7e0fa61a6a5c684fa6cdb835e27685bba9ac66fa00bb5d2b7 Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.693789 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-8569478495-vj5pz"] Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.712763 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-8569478495-vj5pz"] Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.719521 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5569487b4-2rc76" Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.763791 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2475c700-0817-4d27-9e05-0b04cf845474-config-data\") pod \"2475c700-0817-4d27-9e05-0b04cf845474\" (UID: \"2475c700-0817-4d27-9e05-0b04cf845474\") " Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.763861 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2475c700-0817-4d27-9e05-0b04cf845474-config-data-custom\") pod \"2475c700-0817-4d27-9e05-0b04cf845474\" (UID: \"2475c700-0817-4d27-9e05-0b04cf845474\") " Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.763897 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pg9p2\" (UniqueName: \"kubernetes.io/projected/2475c700-0817-4d27-9e05-0b04cf845474-kube-api-access-pg9p2\") pod \"2475c700-0817-4d27-9e05-0b04cf845474\" (UID: \"2475c700-0817-4d27-9e05-0b04cf845474\") " Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.764048 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2475c700-0817-4d27-9e05-0b04cf845474-combined-ca-bundle\") pod \"2475c700-0817-4d27-9e05-0b04cf845474\" (UID: \"2475c700-0817-4d27-9e05-0b04cf845474\") " Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.779655 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2475c700-0817-4d27-9e05-0b04cf845474-kube-api-access-pg9p2" (OuterVolumeSpecName: "kube-api-access-pg9p2") pod "2475c700-0817-4d27-9e05-0b04cf845474" (UID: "2475c700-0817-4d27-9e05-0b04cf845474"). InnerVolumeSpecName "kube-api-access-pg9p2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.779760 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2475c700-0817-4d27-9e05-0b04cf845474-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2475c700-0817-4d27-9e05-0b04cf845474" (UID: "2475c700-0817-4d27-9e05-0b04cf845474"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.809430 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2475c700-0817-4d27-9e05-0b04cf845474-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2475c700-0817-4d27-9e05-0b04cf845474" (UID: "2475c700-0817-4d27-9e05-0b04cf845474"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.859063 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2475c700-0817-4d27-9e05-0b04cf845474-config-data" (OuterVolumeSpecName: "config-data") pod "2475c700-0817-4d27-9e05-0b04cf845474" (UID: "2475c700-0817-4d27-9e05-0b04cf845474"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.870871 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2475c700-0817-4d27-9e05-0b04cf845474-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.870907 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2475c700-0817-4d27-9e05-0b04cf845474-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.870921 4681 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2475c700-0817-4d27-9e05-0b04cf845474-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:40 crc kubenswrapper[4681]: I1123 07:00:40.870930 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pg9p2\" (UniqueName: \"kubernetes.io/projected/2475c700-0817-4d27-9e05-0b04cf845474-kube-api-access-pg9p2\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:41 crc kubenswrapper[4681]: I1123 07:00:41.265784 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="529f52d4-35e7-4121-899e-0e94d628f72c" path="/var/lib/kubelet/pods/529f52d4-35e7-4121-899e-0e94d628f72c/volumes" Nov 23 07:00:41 crc kubenswrapper[4681]: I1123 07:00:41.266364 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87f9cbe6-025e-4880-9c22-f3f0c8373284" path="/var/lib/kubelet/pods/87f9cbe6-025e-4880-9c22-f3f0c8373284/volumes" Nov 23 07:00:41 crc kubenswrapper[4681]: I1123 07:00:41.350599 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-854ccc9f67-s9fwn" event={"ID":"ee50f839-23d6-474a-be91-f16fb836a2af","Type":"ContainerStarted","Data":"e6dd644ff9d5be7c92120287ee9e5770b8d0ba93aa4ee42c37b66747a23fcb2f"} Nov 23 07:00:41 crc kubenswrapper[4681]: I1123 07:00:41.350650 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-854ccc9f67-s9fwn" event={"ID":"ee50f839-23d6-474a-be91-f16fb836a2af","Type":"ContainerStarted","Data":"7505a9fb4f117ed7e0fa61a6a5c684fa6cdb835e27685bba9ac66fa00bb5d2b7"} Nov 23 07:00:41 crc kubenswrapper[4681]: I1123 07:00:41.351745 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-854ccc9f67-s9fwn" Nov 23 07:00:41 crc kubenswrapper[4681]: I1123 07:00:41.353548 4681 generic.go:334] "Generic (PLEG): container finished" podID="2475c700-0817-4d27-9e05-0b04cf845474" 
containerID="9cc46d1a8a577bfb83e78f34125faf6345deadc695e31ca0d8d557ebd79d1824" exitCode=0 Nov 23 07:00:41 crc kubenswrapper[4681]: I1123 07:00:41.353593 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5569487b4-2rc76" event={"ID":"2475c700-0817-4d27-9e05-0b04cf845474","Type":"ContainerDied","Data":"9cc46d1a8a577bfb83e78f34125faf6345deadc695e31ca0d8d557ebd79d1824"} Nov 23 07:00:41 crc kubenswrapper[4681]: I1123 07:00:41.353610 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5569487b4-2rc76" event={"ID":"2475c700-0817-4d27-9e05-0b04cf845474","Type":"ContainerDied","Data":"c70d7518df1359068670dbfcb0cde4bbc0f3a9ac63f6f11fb292fb89df948072"} Nov 23 07:00:41 crc kubenswrapper[4681]: I1123 07:00:41.353627 4681 scope.go:117] "RemoveContainer" containerID="9cc46d1a8a577bfb83e78f34125faf6345deadc695e31ca0d8d557ebd79d1824" Nov 23 07:00:41 crc kubenswrapper[4681]: I1123 07:00:41.353685 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5569487b4-2rc76" Nov 23 07:00:41 crc kubenswrapper[4681]: I1123 07:00:41.358450 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-79cf84dc47-t6rxl" event={"ID":"8c24b6c5-af87-4308-ab4a-d79b901a69b1","Type":"ContainerStarted","Data":"6ac5846e9ee786da7cc22a56565eb0f03d21525accabcd098c08185c7ca54caf"} Nov 23 07:00:41 crc kubenswrapper[4681]: I1123 07:00:41.358488 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-79cf84dc47-t6rxl" event={"ID":"8c24b6c5-af87-4308-ab4a-d79b901a69b1","Type":"ContainerStarted","Data":"d0862f713716ead98ca310f81d33874c538f218e770050bbbfdbb39ba13262c1"} Nov 23 07:00:41 crc kubenswrapper[4681]: I1123 07:00:41.358881 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-79cf84dc47-t6rxl" Nov 23 07:00:41 crc kubenswrapper[4681]: I1123 07:00:41.361071 4681 scope.go:117] "RemoveContainer" containerID="c34ec97b25863b10fdbc49880f1c3a211b4d0f568ac3d8811ec1a4e5c6db8faf" Nov 23 07:00:41 crc kubenswrapper[4681]: E1123 07:00:41.361238 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-76bdd6c54d-pgs2k_openstack(e13a2ce8-368c-4e82-a354-dcc661a48644)\"" pod="openstack/heat-api-76bdd6c54d-pgs2k" podUID="e13a2ce8-368c-4e82-a354-dcc661a48644" Nov 23 07:00:41 crc kubenswrapper[4681]: I1123 07:00:41.361719 4681 scope.go:117] "RemoveContainer" containerID="86749a8ef77b13356a38742661658ba86ea4df7ae380034de256760f04bbf0d6" Nov 23 07:00:41 crc kubenswrapper[4681]: E1123 07:00:41.361872 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-58c458694f-hz9v7_openstack(e37fed9f-f942-4518-857d-86c5b10f1bb5)\"" pod="openstack/heat-cfnapi-58c458694f-hz9v7" podUID="e37fed9f-f942-4518-857d-86c5b10f1bb5" Nov 23 07:00:41 crc kubenswrapper[4681]: I1123 07:00:41.373074 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-854ccc9f67-s9fwn" podStartSLOduration=2.373064995 podStartE2EDuration="2.373064995s" podCreationTimestamp="2025-11-23 07:00:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:00:41.365111881 +0000 UTC m=+978.434621118" 
watchObservedRunningTime="2025-11-23 07:00:41.373064995 +0000 UTC m=+978.442574232" Nov 23 07:00:41 crc kubenswrapper[4681]: I1123 07:00:41.393499 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5569487b4-2rc76"] Nov 23 07:00:41 crc kubenswrapper[4681]: I1123 07:00:41.397839 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-5569487b4-2rc76"] Nov 23 07:00:41 crc kubenswrapper[4681]: I1123 07:00:41.398882 4681 scope.go:117] "RemoveContainer" containerID="9cc46d1a8a577bfb83e78f34125faf6345deadc695e31ca0d8d557ebd79d1824" Nov 23 07:00:41 crc kubenswrapper[4681]: E1123 07:00:41.399383 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cc46d1a8a577bfb83e78f34125faf6345deadc695e31ca0d8d557ebd79d1824\": container with ID starting with 9cc46d1a8a577bfb83e78f34125faf6345deadc695e31ca0d8d557ebd79d1824 not found: ID does not exist" containerID="9cc46d1a8a577bfb83e78f34125faf6345deadc695e31ca0d8d557ebd79d1824" Nov 23 07:00:41 crc kubenswrapper[4681]: I1123 07:00:41.399424 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cc46d1a8a577bfb83e78f34125faf6345deadc695e31ca0d8d557ebd79d1824"} err="failed to get container status \"9cc46d1a8a577bfb83e78f34125faf6345deadc695e31ca0d8d557ebd79d1824\": rpc error: code = NotFound desc = could not find container \"9cc46d1a8a577bfb83e78f34125faf6345deadc695e31ca0d8d557ebd79d1824\": container with ID starting with 9cc46d1a8a577bfb83e78f34125faf6345deadc695e31ca0d8d557ebd79d1824 not found: ID does not exist" Nov 23 07:00:41 crc kubenswrapper[4681]: I1123 07:00:41.408038 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-79cf84dc47-t6rxl" podStartSLOduration=2.408029881 podStartE2EDuration="2.408029881s" podCreationTimestamp="2025-11-23 07:00:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:00:41.405999955 +0000 UTC m=+978.475509181" watchObservedRunningTime="2025-11-23 07:00:41.408029881 +0000 UTC m=+978.477539119" Nov 23 07:00:42 crc kubenswrapper[4681]: I1123 07:00:42.255541 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 23 07:00:42 crc kubenswrapper[4681]: I1123 07:00:42.255952 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 23 07:00:42 crc kubenswrapper[4681]: I1123 07:00:42.295822 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:00:42 crc kubenswrapper[4681]: I1123 07:00:42.295877 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:00:42 crc kubenswrapper[4681]: I1123 07:00:42.295921 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" Nov 23 07:00:42 crc 
kubenswrapper[4681]: I1123 07:00:42.296355 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"caed7cef552031860d421f500f9694e60cb9adcf543f62d9378ea4360e6a8866"} pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 07:00:42 crc kubenswrapper[4681]: I1123 07:00:42.296408 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" containerID="cri-o://caed7cef552031860d421f500f9694e60cb9adcf543f62d9378ea4360e6a8866" gracePeriod=600 Nov 23 07:00:42 crc kubenswrapper[4681]: I1123 07:00:42.299145 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 23 07:00:42 crc kubenswrapper[4681]: I1123 07:00:42.309991 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 23 07:00:42 crc kubenswrapper[4681]: I1123 07:00:42.417105 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 23 07:00:42 crc kubenswrapper[4681]: I1123 07:00:42.417154 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 23 07:00:43 crc kubenswrapper[4681]: I1123 07:00:43.270988 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2475c700-0817-4d27-9e05-0b04cf845474" path="/var/lib/kubelet/pods/2475c700-0817-4d27-9e05-0b04cf845474/volumes" Nov 23 07:00:43 crc kubenswrapper[4681]: I1123 07:00:43.426042 4681 generic.go:334] "Generic (PLEG): container finished" podID="539dc58c-e752-43c8-bdef-af87528b76f3" containerID="caed7cef552031860d421f500f9694e60cb9adcf543f62d9378ea4360e6a8866" exitCode=0 Nov 23 07:00:43 crc kubenswrapper[4681]: I1123 07:00:43.426240 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerDied","Data":"caed7cef552031860d421f500f9694e60cb9adcf543f62d9378ea4360e6a8866"} Nov 23 07:00:43 crc kubenswrapper[4681]: I1123 07:00:43.426453 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerStarted","Data":"6b6565a11ae3d1b82169df41e725361a82bf48f3c4a16c6cf3c1e136bf571ba8"} Nov 23 07:00:43 crc kubenswrapper[4681]: I1123 07:00:43.426514 4681 scope.go:117] "RemoveContainer" containerID="2a5abade0c31450ea18cad45860310cd823c68e49534b39a64b21095b8821bf8" Nov 23 07:00:43 crc kubenswrapper[4681]: I1123 07:00:43.595606 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 23 07:00:43 crc kubenswrapper[4681]: I1123 07:00:43.595682 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 23 07:00:43 crc kubenswrapper[4681]: I1123 07:00:43.641152 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 23 07:00:43 crc kubenswrapper[4681]: I1123 07:00:43.654854 4681 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 23 07:00:44 crc kubenswrapper[4681]: I1123 07:00:44.442154 4681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 07:00:44 crc kubenswrapper[4681]: I1123 07:00:44.442533 4681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 07:00:44 crc kubenswrapper[4681]: I1123 07:00:44.443361 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 23 07:00:44 crc kubenswrapper[4681]: I1123 07:00:44.443401 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 23 07:00:45 crc kubenswrapper[4681]: I1123 07:00:45.326615 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-58c458694f-hz9v7" Nov 23 07:00:45 crc kubenswrapper[4681]: I1123 07:00:45.326669 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-58c458694f-hz9v7" Nov 23 07:00:45 crc kubenswrapper[4681]: I1123 07:00:45.327546 4681 scope.go:117] "RemoveContainer" containerID="86749a8ef77b13356a38742661658ba86ea4df7ae380034de256760f04bbf0d6" Nov 23 07:00:45 crc kubenswrapper[4681]: E1123 07:00:45.327930 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-58c458694f-hz9v7_openstack(e37fed9f-f942-4518-857d-86c5b10f1bb5)\"" pod="openstack/heat-cfnapi-58c458694f-hz9v7" podUID="e37fed9f-f942-4518-857d-86c5b10f1bb5" Nov 23 07:00:45 crc kubenswrapper[4681]: I1123 07:00:45.381527 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 23 07:00:45 crc kubenswrapper[4681]: I1123 07:00:45.451185 4681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 07:00:45 crc kubenswrapper[4681]: I1123 07:00:45.561817 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 23 07:00:47 crc kubenswrapper[4681]: I1123 07:00:47.404965 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 23 07:00:47 crc kubenswrapper[4681]: I1123 07:00:47.405624 4681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 07:00:47 crc kubenswrapper[4681]: I1123 07:00:47.531869 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 23 07:00:48 crc kubenswrapper[4681]: I1123 07:00:48.227596 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6bc44c9bc7-bkrp7" Nov 23 07:00:48 crc kubenswrapper[4681]: I1123 07:00:48.646906 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:00:48 crc kubenswrapper[4681]: I1123 07:00:48.648191 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2034dfe3-3cd9-4870-9005-bbcec7957ef8" containerName="ceilometer-central-agent" containerID="cri-o://c663dd08954c2bbbbcc5887d524ce391910566e33b9ac8ee49c512ab7235d784" gracePeriod=30 Nov 23 07:00:48 crc kubenswrapper[4681]: I1123 07:00:48.648259 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="2034dfe3-3cd9-4870-9005-bbcec7957ef8" containerName="proxy-httpd" containerID="cri-o://ba97c75280d3672888ac3fdd872aabf7e49aa4c96d6a53bd41709b2d0b067f2f" gracePeriod=30 Nov 23 07:00:48 crc kubenswrapper[4681]: I1123 07:00:48.648335 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2034dfe3-3cd9-4870-9005-bbcec7957ef8" containerName="ceilometer-notification-agent" containerID="cri-o://1340eeefd38b376ad08ebf009ced54116637bac7b3a23dab674361baa88367dc" gracePeriod=30 Nov 23 07:00:48 crc kubenswrapper[4681]: I1123 07:00:48.648279 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2034dfe3-3cd9-4870-9005-bbcec7957ef8" containerName="sg-core" containerID="cri-o://565b3a9135beaf641f4c84bd13cbf06e2edd242619056ddcd83633b682a8c100" gracePeriod=30 Nov 23 07:00:48 crc kubenswrapper[4681]: I1123 07:00:48.662419 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="2034dfe3-3cd9-4870-9005-bbcec7957ef8" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.182:3000/\": EOF" Nov 23 07:00:49 crc kubenswrapper[4681]: I1123 07:00:49.103524 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-fcdb4576d-g8stp" podUID="bdfa433c-2b77-4373-877f-5c92a2b39fb8" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.156:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.156:8443: connect: connection refused" Nov 23 07:00:49 crc kubenswrapper[4681]: I1123 07:00:49.495648 4681 generic.go:334] "Generic (PLEG): container finished" podID="2034dfe3-3cd9-4870-9005-bbcec7957ef8" containerID="ba97c75280d3672888ac3fdd872aabf7e49aa4c96d6a53bd41709b2d0b067f2f" exitCode=0 Nov 23 07:00:49 crc kubenswrapper[4681]: I1123 07:00:49.495960 4681 generic.go:334] "Generic (PLEG): container finished" podID="2034dfe3-3cd9-4870-9005-bbcec7957ef8" containerID="565b3a9135beaf641f4c84bd13cbf06e2edd242619056ddcd83633b682a8c100" exitCode=2 Nov 23 07:00:49 crc kubenswrapper[4681]: I1123 07:00:49.495970 4681 generic.go:334] "Generic (PLEG): container finished" podID="2034dfe3-3cd9-4870-9005-bbcec7957ef8" containerID="c663dd08954c2bbbbcc5887d524ce391910566e33b9ac8ee49c512ab7235d784" exitCode=0 Nov 23 07:00:49 crc kubenswrapper[4681]: I1123 07:00:49.495715 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2034dfe3-3cd9-4870-9005-bbcec7957ef8","Type":"ContainerDied","Data":"ba97c75280d3672888ac3fdd872aabf7e49aa4c96d6a53bd41709b2d0b067f2f"} Nov 23 07:00:49 crc kubenswrapper[4681]: I1123 07:00:49.496016 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2034dfe3-3cd9-4870-9005-bbcec7957ef8","Type":"ContainerDied","Data":"565b3a9135beaf641f4c84bd13cbf06e2edd242619056ddcd83633b682a8c100"} Nov 23 07:00:49 crc kubenswrapper[4681]: I1123 07:00:49.496035 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2034dfe3-3cd9-4870-9005-bbcec7957ef8","Type":"ContainerDied","Data":"c663dd08954c2bbbbcc5887d524ce391910566e33b9ac8ee49c512ab7235d784"} Nov 23 07:00:50 crc kubenswrapper[4681]: I1123 07:00:50.512961 4681 generic.go:334] "Generic (PLEG): container finished" podID="2034dfe3-3cd9-4870-9005-bbcec7957ef8" containerID="1340eeefd38b376ad08ebf009ced54116637bac7b3a23dab674361baa88367dc" exitCode=0 Nov 23 07:00:50 crc kubenswrapper[4681]: I1123 07:00:50.513129 4681 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2034dfe3-3cd9-4870-9005-bbcec7957ef8","Type":"ContainerDied","Data":"1340eeefd38b376ad08ebf009ced54116637bac7b3a23dab674361baa88367dc"} Nov 23 07:00:50 crc kubenswrapper[4681]: I1123 07:00:50.745453 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:00:50 crc kubenswrapper[4681]: I1123 07:00:50.825224 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2034dfe3-3cd9-4870-9005-bbcec7957ef8-log-httpd\") pod \"2034dfe3-3cd9-4870-9005-bbcec7957ef8\" (UID: \"2034dfe3-3cd9-4870-9005-bbcec7957ef8\") " Nov 23 07:00:50 crc kubenswrapper[4681]: I1123 07:00:50.825370 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2034dfe3-3cd9-4870-9005-bbcec7957ef8-run-httpd\") pod \"2034dfe3-3cd9-4870-9005-bbcec7957ef8\" (UID: \"2034dfe3-3cd9-4870-9005-bbcec7957ef8\") " Nov 23 07:00:50 crc kubenswrapper[4681]: I1123 07:00:50.825429 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqg6v\" (UniqueName: \"kubernetes.io/projected/2034dfe3-3cd9-4870-9005-bbcec7957ef8-kube-api-access-mqg6v\") pod \"2034dfe3-3cd9-4870-9005-bbcec7957ef8\" (UID: \"2034dfe3-3cd9-4870-9005-bbcec7957ef8\") " Nov 23 07:00:50 crc kubenswrapper[4681]: I1123 07:00:50.825511 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2034dfe3-3cd9-4870-9005-bbcec7957ef8-scripts\") pod \"2034dfe3-3cd9-4870-9005-bbcec7957ef8\" (UID: \"2034dfe3-3cd9-4870-9005-bbcec7957ef8\") " Nov 23 07:00:50 crc kubenswrapper[4681]: I1123 07:00:50.825623 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2034dfe3-3cd9-4870-9005-bbcec7957ef8-sg-core-conf-yaml\") pod \"2034dfe3-3cd9-4870-9005-bbcec7957ef8\" (UID: \"2034dfe3-3cd9-4870-9005-bbcec7957ef8\") " Nov 23 07:00:50 crc kubenswrapper[4681]: I1123 07:00:50.825651 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2034dfe3-3cd9-4870-9005-bbcec7957ef8-combined-ca-bundle\") pod \"2034dfe3-3cd9-4870-9005-bbcec7957ef8\" (UID: \"2034dfe3-3cd9-4870-9005-bbcec7957ef8\") " Nov 23 07:00:50 crc kubenswrapper[4681]: I1123 07:00:50.825706 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2034dfe3-3cd9-4870-9005-bbcec7957ef8-config-data\") pod \"2034dfe3-3cd9-4870-9005-bbcec7957ef8\" (UID: \"2034dfe3-3cd9-4870-9005-bbcec7957ef8\") " Nov 23 07:00:50 crc kubenswrapper[4681]: I1123 07:00:50.826059 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2034dfe3-3cd9-4870-9005-bbcec7957ef8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2034dfe3-3cd9-4870-9005-bbcec7957ef8" (UID: "2034dfe3-3cd9-4870-9005-bbcec7957ef8"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:00:50 crc kubenswrapper[4681]: I1123 07:00:50.826073 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2034dfe3-3cd9-4870-9005-bbcec7957ef8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2034dfe3-3cd9-4870-9005-bbcec7957ef8" (UID: "2034dfe3-3cd9-4870-9005-bbcec7957ef8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:00:50 crc kubenswrapper[4681]: I1123 07:00:50.826771 4681 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2034dfe3-3cd9-4870-9005-bbcec7957ef8-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:50 crc kubenswrapper[4681]: I1123 07:00:50.826807 4681 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2034dfe3-3cd9-4870-9005-bbcec7957ef8-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:50 crc kubenswrapper[4681]: I1123 07:00:50.854758 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2034dfe3-3cd9-4870-9005-bbcec7957ef8-kube-api-access-mqg6v" (OuterVolumeSpecName: "kube-api-access-mqg6v") pod "2034dfe3-3cd9-4870-9005-bbcec7957ef8" (UID: "2034dfe3-3cd9-4870-9005-bbcec7957ef8"). InnerVolumeSpecName "kube-api-access-mqg6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:00:50 crc kubenswrapper[4681]: I1123 07:00:50.872054 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2034dfe3-3cd9-4870-9005-bbcec7957ef8-scripts" (OuterVolumeSpecName: "scripts") pod "2034dfe3-3cd9-4870-9005-bbcec7957ef8" (UID: "2034dfe3-3cd9-4870-9005-bbcec7957ef8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:50 crc kubenswrapper[4681]: I1123 07:00:50.886562 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2034dfe3-3cd9-4870-9005-bbcec7957ef8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2034dfe3-3cd9-4870-9005-bbcec7957ef8" (UID: "2034dfe3-3cd9-4870-9005-bbcec7957ef8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:50 crc kubenswrapper[4681]: I1123 07:00:50.896228 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2034dfe3-3cd9-4870-9005-bbcec7957ef8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2034dfe3-3cd9-4870-9005-bbcec7957ef8" (UID: "2034dfe3-3cd9-4870-9005-bbcec7957ef8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:50 crc kubenswrapper[4681]: I1123 07:00:50.929973 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqg6v\" (UniqueName: \"kubernetes.io/projected/2034dfe3-3cd9-4870-9005-bbcec7957ef8-kube-api-access-mqg6v\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:50 crc kubenswrapper[4681]: I1123 07:00:50.930010 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2034dfe3-3cd9-4870-9005-bbcec7957ef8-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:50 crc kubenswrapper[4681]: I1123 07:00:50.930021 4681 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2034dfe3-3cd9-4870-9005-bbcec7957ef8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:50 crc kubenswrapper[4681]: I1123 07:00:50.930032 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2034dfe3-3cd9-4870-9005-bbcec7957ef8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:50 crc kubenswrapper[4681]: I1123 07:00:50.950202 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2034dfe3-3cd9-4870-9005-bbcec7957ef8-config-data" (OuterVolumeSpecName: "config-data") pod "2034dfe3-3cd9-4870-9005-bbcec7957ef8" (UID: "2034dfe3-3cd9-4870-9005-bbcec7957ef8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.033046 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2034dfe3-3cd9-4870-9005-bbcec7957ef8-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.523329 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2034dfe3-3cd9-4870-9005-bbcec7957ef8","Type":"ContainerDied","Data":"82f42dec0f83bf4bc1c73944dda32acd381592f2245b120be2bbf0ff595f4c9f"} Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.523386 4681 scope.go:117] "RemoveContainer" containerID="ba97c75280d3672888ac3fdd872aabf7e49aa4c96d6a53bd41709b2d0b067f2f" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.524565 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.566926 4681 scope.go:117] "RemoveContainer" containerID="565b3a9135beaf641f4c84bd13cbf06e2edd242619056ddcd83633b682a8c100" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.631633 4681 scope.go:117] "RemoveContainer" containerID="1340eeefd38b376ad08ebf009ced54116637bac7b3a23dab674361baa88367dc" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.663383 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.706493 4681 scope.go:117] "RemoveContainer" containerID="c663dd08954c2bbbbcc5887d524ce391910566e33b9ac8ee49c512ab7235d784" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.736728 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.748357 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:00:51 crc kubenswrapper[4681]: E1123 07:00:51.748850 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2475c700-0817-4d27-9e05-0b04cf845474" containerName="heat-api" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.748867 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="2475c700-0817-4d27-9e05-0b04cf845474" containerName="heat-api" Nov 23 07:00:51 crc kubenswrapper[4681]: E1123 07:00:51.748881 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2034dfe3-3cd9-4870-9005-bbcec7957ef8" containerName="ceilometer-central-agent" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.748887 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="2034dfe3-3cd9-4870-9005-bbcec7957ef8" containerName="ceilometer-central-agent" Nov 23 07:00:51 crc kubenswrapper[4681]: E1123 07:00:51.748904 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f9cbe6-025e-4880-9c22-f3f0c8373284" containerName="heat-cfnapi" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.748911 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f9cbe6-025e-4880-9c22-f3f0c8373284" containerName="heat-cfnapi" Nov 23 07:00:51 crc kubenswrapper[4681]: E1123 07:00:51.748942 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2034dfe3-3cd9-4870-9005-bbcec7957ef8" containerName="sg-core" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.748947 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="2034dfe3-3cd9-4870-9005-bbcec7957ef8" containerName="sg-core" Nov 23 07:00:51 crc kubenswrapper[4681]: E1123 07:00:51.748956 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2034dfe3-3cd9-4870-9005-bbcec7957ef8" containerName="proxy-httpd" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.748961 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="2034dfe3-3cd9-4870-9005-bbcec7957ef8" containerName="proxy-httpd" Nov 23 07:00:51 crc kubenswrapper[4681]: E1123 07:00:51.748973 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2034dfe3-3cd9-4870-9005-bbcec7957ef8" containerName="ceilometer-notification-agent" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.748978 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="2034dfe3-3cd9-4870-9005-bbcec7957ef8" containerName="ceilometer-notification-agent" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.749173 4681 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2034dfe3-3cd9-4870-9005-bbcec7957ef8" containerName="proxy-httpd" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.749187 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="2475c700-0817-4d27-9e05-0b04cf845474" containerName="heat-api" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.749199 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="2034dfe3-3cd9-4870-9005-bbcec7957ef8" containerName="sg-core" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.749206 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f9cbe6-025e-4880-9c22-f3f0c8373284" containerName="heat-cfnapi" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.749212 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="2034dfe3-3cd9-4870-9005-bbcec7957ef8" containerName="ceilometer-notification-agent" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.749222 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="2034dfe3-3cd9-4870-9005-bbcec7957ef8" containerName="ceilometer-central-agent" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.751710 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.756542 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.756738 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.764707 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.774071 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-2lvqj"] Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.775734 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2lvqj" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.779797 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-2lvqj"] Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.835506 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-rnsgf"] Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.836960 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-rnsgf" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.858680 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-rnsgf"] Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.869382 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10a8d789-78e1-40d8-ae1a-af64558b8dfc-scripts\") pod \"ceilometer-0\" (UID: \"10a8d789-78e1-40d8-ae1a-af64558b8dfc\") " pod="openstack/ceilometer-0" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.869434 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/10a8d789-78e1-40d8-ae1a-af64558b8dfc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"10a8d789-78e1-40d8-ae1a-af64558b8dfc\") " pod="openstack/ceilometer-0" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.869587 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10a8d789-78e1-40d8-ae1a-af64558b8dfc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"10a8d789-78e1-40d8-ae1a-af64558b8dfc\") " pod="openstack/ceilometer-0" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.869894 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh8f2\" (UniqueName: \"kubernetes.io/projected/af391774-4ff4-48c7-a0ec-e11a85d772d5-kube-api-access-zh8f2\") pod \"nova-api-db-create-2lvqj\" (UID: \"af391774-4ff4-48c7-a0ec-e11a85d772d5\") " pod="openstack/nova-api-db-create-2lvqj" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.869936 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llgvl\" (UniqueName: \"kubernetes.io/projected/10a8d789-78e1-40d8-ae1a-af64558b8dfc-kube-api-access-llgvl\") pod \"ceilometer-0\" (UID: \"10a8d789-78e1-40d8-ae1a-af64558b8dfc\") " pod="openstack/ceilometer-0" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.869968 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af391774-4ff4-48c7-a0ec-e11a85d772d5-operator-scripts\") pod \"nova-api-db-create-2lvqj\" (UID: \"af391774-4ff4-48c7-a0ec-e11a85d772d5\") " pod="openstack/nova-api-db-create-2lvqj" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.869993 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10a8d789-78e1-40d8-ae1a-af64558b8dfc-log-httpd\") pod \"ceilometer-0\" (UID: \"10a8d789-78e1-40d8-ae1a-af64558b8dfc\") " pod="openstack/ceilometer-0" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.870079 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10a8d789-78e1-40d8-ae1a-af64558b8dfc-run-httpd\") pod \"ceilometer-0\" (UID: \"10a8d789-78e1-40d8-ae1a-af64558b8dfc\") " pod="openstack/ceilometer-0" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.870113 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/10a8d789-78e1-40d8-ae1a-af64558b8dfc-config-data\") pod \"ceilometer-0\" (UID: \"10a8d789-78e1-40d8-ae1a-af64558b8dfc\") " pod="openstack/ceilometer-0" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.881991 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-79cf84dc47-t6rxl" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.940719 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-a6d6-account-create-x9xqf"] Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.942282 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a6d6-account-create-x9xqf" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.944892 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.967440 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a6d6-account-create-x9xqf"] Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.974717 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10a8d789-78e1-40d8-ae1a-af64558b8dfc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"10a8d789-78e1-40d8-ae1a-af64558b8dfc\") " pod="openstack/ceilometer-0" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.974765 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pvkb\" (UniqueName: \"kubernetes.io/projected/c71fa3e0-58d0-4f10-8bd9-53048c7dbe4a-kube-api-access-5pvkb\") pod \"nova-cell0-db-create-rnsgf\" (UID: \"c71fa3e0-58d0-4f10-8bd9-53048c7dbe4a\") " pod="openstack/nova-cell0-db-create-rnsgf" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.974886 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh8f2\" (UniqueName: \"kubernetes.io/projected/af391774-4ff4-48c7-a0ec-e11a85d772d5-kube-api-access-zh8f2\") pod \"nova-api-db-create-2lvqj\" (UID: \"af391774-4ff4-48c7-a0ec-e11a85d772d5\") " pod="openstack/nova-api-db-create-2lvqj" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.974918 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llgvl\" (UniqueName: \"kubernetes.io/projected/10a8d789-78e1-40d8-ae1a-af64558b8dfc-kube-api-access-llgvl\") pod \"ceilometer-0\" (UID: \"10a8d789-78e1-40d8-ae1a-af64558b8dfc\") " pod="openstack/ceilometer-0" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.974954 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c71fa3e0-58d0-4f10-8bd9-53048c7dbe4a-operator-scripts\") pod \"nova-cell0-db-create-rnsgf\" (UID: \"c71fa3e0-58d0-4f10-8bd9-53048c7dbe4a\") " pod="openstack/nova-cell0-db-create-rnsgf" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.974978 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af391774-4ff4-48c7-a0ec-e11a85d772d5-operator-scripts\") pod \"nova-api-db-create-2lvqj\" (UID: \"af391774-4ff4-48c7-a0ec-e11a85d772d5\") " pod="openstack/nova-api-db-create-2lvqj" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.975004 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/10a8d789-78e1-40d8-ae1a-af64558b8dfc-log-httpd\") pod \"ceilometer-0\" (UID: \"10a8d789-78e1-40d8-ae1a-af64558b8dfc\") " pod="openstack/ceilometer-0" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.975081 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10a8d789-78e1-40d8-ae1a-af64558b8dfc-run-httpd\") pod \"ceilometer-0\" (UID: \"10a8d789-78e1-40d8-ae1a-af64558b8dfc\") " pod="openstack/ceilometer-0" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.975113 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10a8d789-78e1-40d8-ae1a-af64558b8dfc-config-data\") pod \"ceilometer-0\" (UID: \"10a8d789-78e1-40d8-ae1a-af64558b8dfc\") " pod="openstack/ceilometer-0" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.975153 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10a8d789-78e1-40d8-ae1a-af64558b8dfc-scripts\") pod \"ceilometer-0\" (UID: \"10a8d789-78e1-40d8-ae1a-af64558b8dfc\") " pod="openstack/ceilometer-0" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.975176 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/10a8d789-78e1-40d8-ae1a-af64558b8dfc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"10a8d789-78e1-40d8-ae1a-af64558b8dfc\") " pod="openstack/ceilometer-0" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.976371 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10a8d789-78e1-40d8-ae1a-af64558b8dfc-log-httpd\") pod \"ceilometer-0\" (UID: \"10a8d789-78e1-40d8-ae1a-af64558b8dfc\") " pod="openstack/ceilometer-0" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.977147 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-76bdd6c54d-pgs2k"] Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.977729 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10a8d789-78e1-40d8-ae1a-af64558b8dfc-run-httpd\") pod \"ceilometer-0\" (UID: \"10a8d789-78e1-40d8-ae1a-af64558b8dfc\") " pod="openstack/ceilometer-0" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.979008 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af391774-4ff4-48c7-a0ec-e11a85d772d5-operator-scripts\") pod \"nova-api-db-create-2lvqj\" (UID: \"af391774-4ff4-48c7-a0ec-e11a85d772d5\") " pod="openstack/nova-api-db-create-2lvqj" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.987784 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10a8d789-78e1-40d8-ae1a-af64558b8dfc-scripts\") pod \"ceilometer-0\" (UID: \"10a8d789-78e1-40d8-ae1a-af64558b8dfc\") " pod="openstack/ceilometer-0" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.988385 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/10a8d789-78e1-40d8-ae1a-af64558b8dfc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"10a8d789-78e1-40d8-ae1a-af64558b8dfc\") " pod="openstack/ceilometer-0" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.990890 4681 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10a8d789-78e1-40d8-ae1a-af64558b8dfc-config-data\") pod \"ceilometer-0\" (UID: \"10a8d789-78e1-40d8-ae1a-af64558b8dfc\") " pod="openstack/ceilometer-0" Nov 23 07:00:51 crc kubenswrapper[4681]: I1123 07:00:51.994014 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10a8d789-78e1-40d8-ae1a-af64558b8dfc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"10a8d789-78e1-40d8-ae1a-af64558b8dfc\") " pod="openstack/ceilometer-0" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.031482 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llgvl\" (UniqueName: \"kubernetes.io/projected/10a8d789-78e1-40d8-ae1a-af64558b8dfc-kube-api-access-llgvl\") pod \"ceilometer-0\" (UID: \"10a8d789-78e1-40d8-ae1a-af64558b8dfc\") " pod="openstack/ceilometer-0" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.037612 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh8f2\" (UniqueName: \"kubernetes.io/projected/af391774-4ff4-48c7-a0ec-e11a85d772d5-kube-api-access-zh8f2\") pod \"nova-api-db-create-2lvqj\" (UID: \"af391774-4ff4-48c7-a0ec-e11a85d772d5\") " pod="openstack/nova-api-db-create-2lvqj" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.070141 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.080025 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a1408c9-082d-4560-b82d-4d6b1124d6a5-operator-scripts\") pod \"nova-api-a6d6-account-create-x9xqf\" (UID: \"7a1408c9-082d-4560-b82d-4d6b1124d6a5\") " pod="openstack/nova-api-a6d6-account-create-x9xqf" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.080125 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c71fa3e0-58d0-4f10-8bd9-53048c7dbe4a-operator-scripts\") pod \"nova-cell0-db-create-rnsgf\" (UID: \"c71fa3e0-58d0-4f10-8bd9-53048c7dbe4a\") " pod="openstack/nova-cell0-db-create-rnsgf" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.080337 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pvkb\" (UniqueName: \"kubernetes.io/projected/c71fa3e0-58d0-4f10-8bd9-53048c7dbe4a-kube-api-access-5pvkb\") pod \"nova-cell0-db-create-rnsgf\" (UID: \"c71fa3e0-58d0-4f10-8bd9-53048c7dbe4a\") " pod="openstack/nova-cell0-db-create-rnsgf" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.080405 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frrjr\" (UniqueName: \"kubernetes.io/projected/7a1408c9-082d-4560-b82d-4d6b1124d6a5-kube-api-access-frrjr\") pod \"nova-api-a6d6-account-create-x9xqf\" (UID: \"7a1408c9-082d-4560-b82d-4d6b1124d6a5\") " pod="openstack/nova-api-a6d6-account-create-x9xqf" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.081452 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c71fa3e0-58d0-4f10-8bd9-53048c7dbe4a-operator-scripts\") pod \"nova-cell0-db-create-rnsgf\" (UID: \"c71fa3e0-58d0-4f10-8bd9-53048c7dbe4a\") " 
pod="openstack/nova-cell0-db-create-rnsgf" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.088522 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-7hksl"] Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.090036 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7hksl" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.091017 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2lvqj" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.109535 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7hksl"] Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.121809 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pvkb\" (UniqueName: \"kubernetes.io/projected/c71fa3e0-58d0-4f10-8bd9-53048c7dbe4a-kube-api-access-5pvkb\") pod \"nova-cell0-db-create-rnsgf\" (UID: \"c71fa3e0-58d0-4f10-8bd9-53048c7dbe4a\") " pod="openstack/nova-cell0-db-create-rnsgf" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.159853 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rnsgf" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.186234 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frrjr\" (UniqueName: \"kubernetes.io/projected/7a1408c9-082d-4560-b82d-4d6b1124d6a5-kube-api-access-frrjr\") pod \"nova-api-a6d6-account-create-x9xqf\" (UID: \"7a1408c9-082d-4560-b82d-4d6b1124d6a5\") " pod="openstack/nova-api-a6d6-account-create-x9xqf" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.186283 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a1408c9-082d-4560-b82d-4d6b1124d6a5-operator-scripts\") pod \"nova-api-a6d6-account-create-x9xqf\" (UID: \"7a1408c9-082d-4560-b82d-4d6b1124d6a5\") " pod="openstack/nova-api-a6d6-account-create-x9xqf" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.186322 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z97s6\" (UniqueName: \"kubernetes.io/projected/585ba06a-f87a-4133-a144-72545525b9a7-kube-api-access-z97s6\") pod \"nova-cell1-db-create-7hksl\" (UID: \"585ba06a-f87a-4133-a144-72545525b9a7\") " pod="openstack/nova-cell1-db-create-7hksl" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.186363 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/585ba06a-f87a-4133-a144-72545525b9a7-operator-scripts\") pod \"nova-cell1-db-create-7hksl\" (UID: \"585ba06a-f87a-4133-a144-72545525b9a7\") " pod="openstack/nova-cell1-db-create-7hksl" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.187296 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a1408c9-082d-4560-b82d-4d6b1124d6a5-operator-scripts\") pod \"nova-api-a6d6-account-create-x9xqf\" (UID: \"7a1408c9-082d-4560-b82d-4d6b1124d6a5\") " pod="openstack/nova-api-a6d6-account-create-x9xqf" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.211751 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frrjr\" (UniqueName: 
\"kubernetes.io/projected/7a1408c9-082d-4560-b82d-4d6b1124d6a5-kube-api-access-frrjr\") pod \"nova-api-a6d6-account-create-x9xqf\" (UID: \"7a1408c9-082d-4560-b82d-4d6b1124d6a5\") " pod="openstack/nova-api-a6d6-account-create-x9xqf" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.246711 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-679e-account-create-dr6pd"] Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.248042 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-679e-account-create-dr6pd" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.254988 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.267983 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a6d6-account-create-x9xqf" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.268097 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-679e-account-create-dr6pd"] Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.297420 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z97s6\" (UniqueName: \"kubernetes.io/projected/585ba06a-f87a-4133-a144-72545525b9a7-kube-api-access-z97s6\") pod \"nova-cell1-db-create-7hksl\" (UID: \"585ba06a-f87a-4133-a144-72545525b9a7\") " pod="openstack/nova-cell1-db-create-7hksl" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.297749 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/585ba06a-f87a-4133-a144-72545525b9a7-operator-scripts\") pod \"nova-cell1-db-create-7hksl\" (UID: \"585ba06a-f87a-4133-a144-72545525b9a7\") " pod="openstack/nova-cell1-db-create-7hksl" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.298432 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/585ba06a-f87a-4133-a144-72545525b9a7-operator-scripts\") pod \"nova-cell1-db-create-7hksl\" (UID: \"585ba06a-f87a-4133-a144-72545525b9a7\") " pod="openstack/nova-cell1-db-create-7hksl" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.330859 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z97s6\" (UniqueName: \"kubernetes.io/projected/585ba06a-f87a-4133-a144-72545525b9a7-kube-api-access-z97s6\") pod \"nova-cell1-db-create-7hksl\" (UID: \"585ba06a-f87a-4133-a144-72545525b9a7\") " pod="openstack/nova-cell1-db-create-7hksl" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.359500 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-11a1-account-create-9jrm5"] Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.360809 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-11a1-account-create-9jrm5" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.363278 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.381319 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-11a1-account-create-9jrm5"] Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.401885 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3811c7f9-b5f1-4f7c-a839-4c01f37baaf2-operator-scripts\") pod \"nova-cell0-679e-account-create-dr6pd\" (UID: \"3811c7f9-b5f1-4f7c-a839-4c01f37baaf2\") " pod="openstack/nova-cell0-679e-account-create-dr6pd" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.401983 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8shf\" (UniqueName: \"kubernetes.io/projected/3811c7f9-b5f1-4f7c-a839-4c01f37baaf2-kube-api-access-r8shf\") pod \"nova-cell0-679e-account-create-dr6pd\" (UID: \"3811c7f9-b5f1-4f7c-a839-4c01f37baaf2\") " pod="openstack/nova-cell0-679e-account-create-dr6pd" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.482720 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7hksl" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.509249 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3811c7f9-b5f1-4f7c-a839-4c01f37baaf2-operator-scripts\") pod \"nova-cell0-679e-account-create-dr6pd\" (UID: \"3811c7f9-b5f1-4f7c-a839-4c01f37baaf2\") " pod="openstack/nova-cell0-679e-account-create-dr6pd" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.509549 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q625h\" (UniqueName: \"kubernetes.io/projected/e3318c1e-062e-4748-b6e3-8db9ef610c97-kube-api-access-q625h\") pod \"nova-cell1-11a1-account-create-9jrm5\" (UID: \"e3318c1e-062e-4748-b6e3-8db9ef610c97\") " pod="openstack/nova-cell1-11a1-account-create-9jrm5" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.509577 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8shf\" (UniqueName: \"kubernetes.io/projected/3811c7f9-b5f1-4f7c-a839-4c01f37baaf2-kube-api-access-r8shf\") pod \"nova-cell0-679e-account-create-dr6pd\" (UID: \"3811c7f9-b5f1-4f7c-a839-4c01f37baaf2\") " pod="openstack/nova-cell0-679e-account-create-dr6pd" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.509608 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3318c1e-062e-4748-b6e3-8db9ef610c97-operator-scripts\") pod \"nova-cell1-11a1-account-create-9jrm5\" (UID: \"e3318c1e-062e-4748-b6e3-8db9ef610c97\") " pod="openstack/nova-cell1-11a1-account-create-9jrm5" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.510328 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3811c7f9-b5f1-4f7c-a839-4c01f37baaf2-operator-scripts\") pod \"nova-cell0-679e-account-create-dr6pd\" (UID: \"3811c7f9-b5f1-4f7c-a839-4c01f37baaf2\") " 
pod="openstack/nova-cell0-679e-account-create-dr6pd" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.539496 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-854ccc9f67-s9fwn" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.557767 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8shf\" (UniqueName: \"kubernetes.io/projected/3811c7f9-b5f1-4f7c-a839-4c01f37baaf2-kube-api-access-r8shf\") pod \"nova-cell0-679e-account-create-dr6pd\" (UID: \"3811c7f9-b5f1-4f7c-a839-4c01f37baaf2\") " pod="openstack/nova-cell0-679e-account-create-dr6pd" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.611637 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q625h\" (UniqueName: \"kubernetes.io/projected/e3318c1e-062e-4748-b6e3-8db9ef610c97-kube-api-access-q625h\") pod \"nova-cell1-11a1-account-create-9jrm5\" (UID: \"e3318c1e-062e-4748-b6e3-8db9ef610c97\") " pod="openstack/nova-cell1-11a1-account-create-9jrm5" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.611722 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3318c1e-062e-4748-b6e3-8db9ef610c97-operator-scripts\") pod \"nova-cell1-11a1-account-create-9jrm5\" (UID: \"e3318c1e-062e-4748-b6e3-8db9ef610c97\") " pod="openstack/nova-cell1-11a1-account-create-9jrm5" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.613036 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3318c1e-062e-4748-b6e3-8db9ef610c97-operator-scripts\") pod \"nova-cell1-11a1-account-create-9jrm5\" (UID: \"e3318c1e-062e-4748-b6e3-8db9ef610c97\") " pod="openstack/nova-cell1-11a1-account-create-9jrm5" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.613656 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-58c458694f-hz9v7"] Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.646430 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q625h\" (UniqueName: \"kubernetes.io/projected/e3318c1e-062e-4748-b6e3-8db9ef610c97-kube-api-access-q625h\") pod \"nova-cell1-11a1-account-create-9jrm5\" (UID: \"e3318c1e-062e-4748-b6e3-8db9ef610c97\") " pod="openstack/nova-cell1-11a1-account-create-9jrm5" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.728305 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-679e-account-create-dr6pd" Nov 23 07:00:52 crc kubenswrapper[4681]: I1123 07:00:52.736651 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-11a1-account-create-9jrm5" Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.045711 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-76bdd6c54d-pgs2k" Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.176691 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e13a2ce8-368c-4e82-a354-dcc661a48644-combined-ca-bundle\") pod \"e13a2ce8-368c-4e82-a354-dcc661a48644\" (UID: \"e13a2ce8-368c-4e82-a354-dcc661a48644\") " Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.176831 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztxbs\" (UniqueName: \"kubernetes.io/projected/e13a2ce8-368c-4e82-a354-dcc661a48644-kube-api-access-ztxbs\") pod \"e13a2ce8-368c-4e82-a354-dcc661a48644\" (UID: \"e13a2ce8-368c-4e82-a354-dcc661a48644\") " Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.176965 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e13a2ce8-368c-4e82-a354-dcc661a48644-config-data-custom\") pod \"e13a2ce8-368c-4e82-a354-dcc661a48644\" (UID: \"e13a2ce8-368c-4e82-a354-dcc661a48644\") " Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.177108 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e13a2ce8-368c-4e82-a354-dcc661a48644-config-data\") pod \"e13a2ce8-368c-4e82-a354-dcc661a48644\" (UID: \"e13a2ce8-368c-4e82-a354-dcc661a48644\") " Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.184071 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e13a2ce8-368c-4e82-a354-dcc661a48644-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e13a2ce8-368c-4e82-a354-dcc661a48644" (UID: "e13a2ce8-368c-4e82-a354-dcc661a48644"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.185741 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e13a2ce8-368c-4e82-a354-dcc661a48644-kube-api-access-ztxbs" (OuterVolumeSpecName: "kube-api-access-ztxbs") pod "e13a2ce8-368c-4e82-a354-dcc661a48644" (UID: "e13a2ce8-368c-4e82-a354-dcc661a48644"). InnerVolumeSpecName "kube-api-access-ztxbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.191679 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-58c458694f-hz9v7" Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.242005 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e13a2ce8-368c-4e82-a354-dcc661a48644-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e13a2ce8-368c-4e82-a354-dcc661a48644" (UID: "e13a2ce8-368c-4e82-a354-dcc661a48644"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.261278 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e13a2ce8-368c-4e82-a354-dcc661a48644-config-data" (OuterVolumeSpecName: "config-data") pod "e13a2ce8-368c-4e82-a354-dcc661a48644" (UID: "e13a2ce8-368c-4e82-a354-dcc661a48644"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.268165 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2034dfe3-3cd9-4870-9005-bbcec7957ef8" path="/var/lib/kubelet/pods/2034dfe3-3cd9-4870-9005-bbcec7957ef8/volumes" Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.279942 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e37fed9f-f942-4518-857d-86c5b10f1bb5-config-data-custom\") pod \"e37fed9f-f942-4518-857d-86c5b10f1bb5\" (UID: \"e37fed9f-f942-4518-857d-86c5b10f1bb5\") " Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.280100 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e37fed9f-f942-4518-857d-86c5b10f1bb5-config-data\") pod \"e37fed9f-f942-4518-857d-86c5b10f1bb5\" (UID: \"e37fed9f-f942-4518-857d-86c5b10f1bb5\") " Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.280239 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e37fed9f-f942-4518-857d-86c5b10f1bb5-combined-ca-bundle\") pod \"e37fed9f-f942-4518-857d-86c5b10f1bb5\" (UID: \"e37fed9f-f942-4518-857d-86c5b10f1bb5\") " Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.280265 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fj7nd\" (UniqueName: \"kubernetes.io/projected/e37fed9f-f942-4518-857d-86c5b10f1bb5-kube-api-access-fj7nd\") pod \"e37fed9f-f942-4518-857d-86c5b10f1bb5\" (UID: \"e37fed9f-f942-4518-857d-86c5b10f1bb5\") " Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.280793 4681 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e13a2ce8-368c-4e82-a354-dcc661a48644-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.280806 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e13a2ce8-368c-4e82-a354-dcc661a48644-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.280817 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e13a2ce8-368c-4e82-a354-dcc661a48644-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.280826 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztxbs\" (UniqueName: \"kubernetes.io/projected/e13a2ce8-368c-4e82-a354-dcc661a48644-kube-api-access-ztxbs\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.285068 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e37fed9f-f942-4518-857d-86c5b10f1bb5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e37fed9f-f942-4518-857d-86c5b10f1bb5" (UID: "e37fed9f-f942-4518-857d-86c5b10f1bb5"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.287820 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e37fed9f-f942-4518-857d-86c5b10f1bb5-kube-api-access-fj7nd" (OuterVolumeSpecName: "kube-api-access-fj7nd") pod "e37fed9f-f942-4518-857d-86c5b10f1bb5" (UID: "e37fed9f-f942-4518-857d-86c5b10f1bb5"). InnerVolumeSpecName "kube-api-access-fj7nd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.345230 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e37fed9f-f942-4518-857d-86c5b10f1bb5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e37fed9f-f942-4518-857d-86c5b10f1bb5" (UID: "e37fed9f-f942-4518-857d-86c5b10f1bb5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.383058 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e37fed9f-f942-4518-857d-86c5b10f1bb5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.383109 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fj7nd\" (UniqueName: \"kubernetes.io/projected/e37fed9f-f942-4518-857d-86c5b10f1bb5-kube-api-access-fj7nd\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.383121 4681 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e37fed9f-f942-4518-857d-86c5b10f1bb5-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.397631 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e37fed9f-f942-4518-857d-86c5b10f1bb5-config-data" (OuterVolumeSpecName: "config-data") pod "e37fed9f-f942-4518-857d-86c5b10f1bb5" (UID: "e37fed9f-f942-4518-857d-86c5b10f1bb5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.449919 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-rnsgf"] Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.449971 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.483869 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e37fed9f-f942-4518-857d-86c5b10f1bb5-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.600763 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-11a1-account-create-9jrm5"] Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.617967 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-76bdd6c54d-pgs2k" Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.618514 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-76bdd6c54d-pgs2k" event={"ID":"e13a2ce8-368c-4e82-a354-dcc661a48644","Type":"ContainerDied","Data":"3bf4db9702d0bf1f889db2fa3f302839df8fe34efe44264eb5d27bc2a1f3d279"} Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.618596 4681 scope.go:117] "RemoveContainer" containerID="c34ec97b25863b10fdbc49880f1c3a211b4d0f568ac3d8811ec1a4e5c6db8faf" Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.627168 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a6d6-account-create-x9xqf"] Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.637581 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rnsgf" event={"ID":"c71fa3e0-58d0-4f10-8bd9-53048c7dbe4a","Type":"ContainerStarted","Data":"4aa5d9f187537748e09310e6b8a577328c9bbcd9a5d12cebaca849b4df68534a"} Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.637623 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rnsgf" event={"ID":"c71fa3e0-58d0-4f10-8bd9-53048c7dbe4a","Type":"ContainerStarted","Data":"b8a836facdf3efc745820a3d2043be3df9be02ced71e1eb29541421e9860add3"} Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.656887 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-2lvqj"] Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.673854 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-679e-account-create-dr6pd"] Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.686279 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-rnsgf" podStartSLOduration=2.6862651189999998 podStartE2EDuration="2.686265119s" podCreationTimestamp="2025-11-23 07:00:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:00:53.66578066 +0000 UTC m=+990.735289897" watchObservedRunningTime="2025-11-23 07:00:53.686265119 +0000 UTC m=+990.755774356" Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.691718 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-58c458694f-hz9v7" event={"ID":"e37fed9f-f942-4518-857d-86c5b10f1bb5","Type":"ContainerDied","Data":"e898a676a3726b10aba5f50b4256a88fe21b33885efa6d960ab110eb3806f46a"} Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.691780 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-58c458694f-hz9v7" Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.702676 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10a8d789-78e1-40d8-ae1a-af64558b8dfc","Type":"ContainerStarted","Data":"6dd16837f3cd7d2aa5edb32f673943b6aa2776fd17a2b7566c3c525969125865"} Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.707589 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-76bdd6c54d-pgs2k"] Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.722133 4681 scope.go:117] "RemoveContainer" containerID="86749a8ef77b13356a38742661658ba86ea4df7ae380034de256760f04bbf0d6" Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.727431 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-76bdd6c54d-pgs2k"] Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.744724 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-58c458694f-hz9v7"] Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.757630 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-58c458694f-hz9v7"] Nov 23 07:00:53 crc kubenswrapper[4681]: I1123 07:00:53.760920 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7hksl"] Nov 23 07:00:54 crc kubenswrapper[4681]: I1123 07:00:54.712621 4681 generic.go:334] "Generic (PLEG): container finished" podID="7a1408c9-082d-4560-b82d-4d6b1124d6a5" containerID="5dc819c39d1aa87cd07aba4c5b2322043ce97f4b208163f9a901b77977345c5f" exitCode=0 Nov 23 07:00:54 crc kubenswrapper[4681]: I1123 07:00:54.712718 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a6d6-account-create-x9xqf" event={"ID":"7a1408c9-082d-4560-b82d-4d6b1124d6a5","Type":"ContainerDied","Data":"5dc819c39d1aa87cd07aba4c5b2322043ce97f4b208163f9a901b77977345c5f"} Nov 23 07:00:54 crc kubenswrapper[4681]: I1123 07:00:54.713944 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a6d6-account-create-x9xqf" event={"ID":"7a1408c9-082d-4560-b82d-4d6b1124d6a5","Type":"ContainerStarted","Data":"cfd99d04b7fa132ccccf3982bd85f10d094b2582c8d9ae6d338345cf7a01e552"} Nov 23 07:00:54 crc kubenswrapper[4681]: I1123 07:00:54.715123 4681 generic.go:334] "Generic (PLEG): container finished" podID="e3318c1e-062e-4748-b6e3-8db9ef610c97" containerID="5aae89a237e02aadb5f3ea4821ce5addfda55baa3f3081b1c4594f9048b8cc51" exitCode=0 Nov 23 07:00:54 crc kubenswrapper[4681]: I1123 07:00:54.715157 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-11a1-account-create-9jrm5" event={"ID":"e3318c1e-062e-4748-b6e3-8db9ef610c97","Type":"ContainerDied","Data":"5aae89a237e02aadb5f3ea4821ce5addfda55baa3f3081b1c4594f9048b8cc51"} Nov 23 07:00:54 crc kubenswrapper[4681]: I1123 07:00:54.715198 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-11a1-account-create-9jrm5" event={"ID":"e3318c1e-062e-4748-b6e3-8db9ef610c97","Type":"ContainerStarted","Data":"59b921e053a2c35291f8b7e229f6678c58eedea230f34314fdeeef131b80fafb"} Nov 23 07:00:54 crc kubenswrapper[4681]: I1123 07:00:54.717331 4681 generic.go:334] "Generic (PLEG): container finished" podID="af391774-4ff4-48c7-a0ec-e11a85d772d5" containerID="991f745a0105a1be16fe82e34bbf424b0402e409ba06479813cce39477a68c43" exitCode=0 Nov 23 07:00:54 crc kubenswrapper[4681]: I1123 07:00:54.717411 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-db-create-2lvqj" event={"ID":"af391774-4ff4-48c7-a0ec-e11a85d772d5","Type":"ContainerDied","Data":"991f745a0105a1be16fe82e34bbf424b0402e409ba06479813cce39477a68c43"} Nov 23 07:00:54 crc kubenswrapper[4681]: I1123 07:00:54.717448 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2lvqj" event={"ID":"af391774-4ff4-48c7-a0ec-e11a85d772d5","Type":"ContainerStarted","Data":"7aa4f099865aaf00462a09fbb46cb946844976197a687accdbcac228ca37a608"} Nov 23 07:00:54 crc kubenswrapper[4681]: I1123 07:00:54.718757 4681 generic.go:334] "Generic (PLEG): container finished" podID="585ba06a-f87a-4133-a144-72545525b9a7" containerID="3a04dbefcd352a6b87bc8695b79e3b41abbf89680e4f1896c073b52fbb80c9e2" exitCode=0 Nov 23 07:00:54 crc kubenswrapper[4681]: I1123 07:00:54.718826 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7hksl" event={"ID":"585ba06a-f87a-4133-a144-72545525b9a7","Type":"ContainerDied","Data":"3a04dbefcd352a6b87bc8695b79e3b41abbf89680e4f1896c073b52fbb80c9e2"} Nov 23 07:00:54 crc kubenswrapper[4681]: I1123 07:00:54.719005 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7hksl" event={"ID":"585ba06a-f87a-4133-a144-72545525b9a7","Type":"ContainerStarted","Data":"cb27196e1eb9f1821f92f840a3db349e5037e8e24936b42e0e89a267a8d4c408"} Nov 23 07:00:54 crc kubenswrapper[4681]: I1123 07:00:54.720941 4681 generic.go:334] "Generic (PLEG): container finished" podID="3811c7f9-b5f1-4f7c-a839-4c01f37baaf2" containerID="cbd1f08432329486fd56f087fa8782900772a396a9361c013495b0b9048ec87d" exitCode=0 Nov 23 07:00:54 crc kubenswrapper[4681]: I1123 07:00:54.721059 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-679e-account-create-dr6pd" event={"ID":"3811c7f9-b5f1-4f7c-a839-4c01f37baaf2","Type":"ContainerDied","Data":"cbd1f08432329486fd56f087fa8782900772a396a9361c013495b0b9048ec87d"} Nov 23 07:00:54 crc kubenswrapper[4681]: I1123 07:00:54.721093 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-679e-account-create-dr6pd" event={"ID":"3811c7f9-b5f1-4f7c-a839-4c01f37baaf2","Type":"ContainerStarted","Data":"a9d927a8cddeeced124da4e08d76d1108a7165715854e4aff60f8414fca8f0fd"} Nov 23 07:00:54 crc kubenswrapper[4681]: I1123 07:00:54.723035 4681 generic.go:334] "Generic (PLEG): container finished" podID="c71fa3e0-58d0-4f10-8bd9-53048c7dbe4a" containerID="4aa5d9f187537748e09310e6b8a577328c9bbcd9a5d12cebaca849b4df68534a" exitCode=0 Nov 23 07:00:54 crc kubenswrapper[4681]: I1123 07:00:54.723074 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rnsgf" event={"ID":"c71fa3e0-58d0-4f10-8bd9-53048c7dbe4a","Type":"ContainerDied","Data":"4aa5d9f187537748e09310e6b8a577328c9bbcd9a5d12cebaca849b4df68534a"} Nov 23 07:00:54 crc kubenswrapper[4681]: I1123 07:00:54.726073 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10a8d789-78e1-40d8-ae1a-af64558b8dfc","Type":"ContainerStarted","Data":"47e60d65b81a38d7f6ab34690971679d8ab3e27fcd913b9ef7423550e41e54ba"} Nov 23 07:00:55 crc kubenswrapper[4681]: I1123 07:00:55.268039 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e13a2ce8-368c-4e82-a354-dcc661a48644" path="/var/lib/kubelet/pods/e13a2ce8-368c-4e82-a354-dcc661a48644/volumes" Nov 23 07:00:55 crc kubenswrapper[4681]: I1123 07:00:55.268932 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="e37fed9f-f942-4518-857d-86c5b10f1bb5" path="/var/lib/kubelet/pods/e37fed9f-f942-4518-857d-86c5b10f1bb5/volumes" Nov 23 07:00:55 crc kubenswrapper[4681]: I1123 07:00:55.303226 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6ddc7dd66-g2jvq" Nov 23 07:00:55 crc kubenswrapper[4681]: I1123 07:00:55.372290 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6bc44c9bc7-bkrp7"] Nov 23 07:00:55 crc kubenswrapper[4681]: I1123 07:00:55.372725 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-6bc44c9bc7-bkrp7" podUID="0b6259f0-ca09-4fc2-bada-7d505bf1b5a1" containerName="heat-engine" containerID="cri-o://01fa57dd402c8eee8ac14e1d90dc47f67437bbd118411fb55a41875e2a702055" gracePeriod=60 Nov 23 07:00:55 crc kubenswrapper[4681]: I1123 07:00:55.739749 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10a8d789-78e1-40d8-ae1a-af64558b8dfc","Type":"ContainerStarted","Data":"a1450b2faf397f3c4160252a54561e328262c9bf23111cedeee446379f78d2f4"} Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.221667 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2lvqj" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.359918 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af391774-4ff4-48c7-a0ec-e11a85d772d5-operator-scripts\") pod \"af391774-4ff4-48c7-a0ec-e11a85d772d5\" (UID: \"af391774-4ff4-48c7-a0ec-e11a85d772d5\") " Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.360047 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh8f2\" (UniqueName: \"kubernetes.io/projected/af391774-4ff4-48c7-a0ec-e11a85d772d5-kube-api-access-zh8f2\") pod \"af391774-4ff4-48c7-a0ec-e11a85d772d5\" (UID: \"af391774-4ff4-48c7-a0ec-e11a85d772d5\") " Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.362377 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af391774-4ff4-48c7-a0ec-e11a85d772d5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "af391774-4ff4-48c7-a0ec-e11a85d772d5" (UID: "af391774-4ff4-48c7-a0ec-e11a85d772d5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.385185 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af391774-4ff4-48c7-a0ec-e11a85d772d5-kube-api-access-zh8f2" (OuterVolumeSpecName: "kube-api-access-zh8f2") pod "af391774-4ff4-48c7-a0ec-e11a85d772d5" (UID: "af391774-4ff4-48c7-a0ec-e11a85d772d5"). InnerVolumeSpecName "kube-api-access-zh8f2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.465598 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-11a1-account-create-9jrm5" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.467186 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af391774-4ff4-48c7-a0ec-e11a85d772d5-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.467217 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zh8f2\" (UniqueName: \"kubernetes.io/projected/af391774-4ff4-48c7-a0ec-e11a85d772d5-kube-api-access-zh8f2\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.553616 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7hksl" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.571123 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q625h\" (UniqueName: \"kubernetes.io/projected/e3318c1e-062e-4748-b6e3-8db9ef610c97-kube-api-access-q625h\") pod \"e3318c1e-062e-4748-b6e3-8db9ef610c97\" (UID: \"e3318c1e-062e-4748-b6e3-8db9ef610c97\") " Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.571229 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3318c1e-062e-4748-b6e3-8db9ef610c97-operator-scripts\") pod \"e3318c1e-062e-4748-b6e3-8db9ef610c97\" (UID: \"e3318c1e-062e-4748-b6e3-8db9ef610c97\") " Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.574307 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3318c1e-062e-4748-b6e3-8db9ef610c97-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e3318c1e-062e-4748-b6e3-8db9ef610c97" (UID: "e3318c1e-062e-4748-b6e3-8db9ef610c97"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.574916 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a6d6-account-create-x9xqf" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.579913 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3318c1e-062e-4748-b6e3-8db9ef610c97-kube-api-access-q625h" (OuterVolumeSpecName: "kube-api-access-q625h") pod "e3318c1e-062e-4748-b6e3-8db9ef610c97" (UID: "e3318c1e-062e-4748-b6e3-8db9ef610c97"). InnerVolumeSpecName "kube-api-access-q625h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.598378 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q625h\" (UniqueName: \"kubernetes.io/projected/e3318c1e-062e-4748-b6e3-8db9ef610c97-kube-api-access-q625h\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.598436 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3318c1e-062e-4748-b6e3-8db9ef610c97-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.601129 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rnsgf" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.626640 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-679e-account-create-dr6pd" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.699636 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/585ba06a-f87a-4133-a144-72545525b9a7-operator-scripts\") pod \"585ba06a-f87a-4133-a144-72545525b9a7\" (UID: \"585ba06a-f87a-4133-a144-72545525b9a7\") " Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.699815 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z97s6\" (UniqueName: \"kubernetes.io/projected/585ba06a-f87a-4133-a144-72545525b9a7-kube-api-access-z97s6\") pod \"585ba06a-f87a-4133-a144-72545525b9a7\" (UID: \"585ba06a-f87a-4133-a144-72545525b9a7\") " Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.699978 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a1408c9-082d-4560-b82d-4d6b1124d6a5-operator-scripts\") pod \"7a1408c9-082d-4560-b82d-4d6b1124d6a5\" (UID: \"7a1408c9-082d-4560-b82d-4d6b1124d6a5\") " Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.700183 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frrjr\" (UniqueName: \"kubernetes.io/projected/7a1408c9-082d-4560-b82d-4d6b1124d6a5-kube-api-access-frrjr\") pod \"7a1408c9-082d-4560-b82d-4d6b1124d6a5\" (UID: \"7a1408c9-082d-4560-b82d-4d6b1124d6a5\") " Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.702530 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a1408c9-082d-4560-b82d-4d6b1124d6a5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7a1408c9-082d-4560-b82d-4d6b1124d6a5" (UID: "7a1408c9-082d-4560-b82d-4d6b1124d6a5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.702602 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/585ba06a-f87a-4133-a144-72545525b9a7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "585ba06a-f87a-4133-a144-72545525b9a7" (UID: "585ba06a-f87a-4133-a144-72545525b9a7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.709620 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a1408c9-082d-4560-b82d-4d6b1124d6a5-kube-api-access-frrjr" (OuterVolumeSpecName: "kube-api-access-frrjr") pod "7a1408c9-082d-4560-b82d-4d6b1124d6a5" (UID: "7a1408c9-082d-4560-b82d-4d6b1124d6a5"). InnerVolumeSpecName "kube-api-access-frrjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.712793 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/585ba06a-f87a-4133-a144-72545525b9a7-kube-api-access-z97s6" (OuterVolumeSpecName: "kube-api-access-z97s6") pod "585ba06a-f87a-4133-a144-72545525b9a7" (UID: "585ba06a-f87a-4133-a144-72545525b9a7"). InnerVolumeSpecName "kube-api-access-z97s6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.753678 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-11a1-account-create-9jrm5" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.755767 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-11a1-account-create-9jrm5" event={"ID":"e3318c1e-062e-4748-b6e3-8db9ef610c97","Type":"ContainerDied","Data":"59b921e053a2c35291f8b7e229f6678c58eedea230f34314fdeeef131b80fafb"} Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.755826 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59b921e053a2c35291f8b7e229f6678c58eedea230f34314fdeeef131b80fafb" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.765304 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a6d6-account-create-x9xqf" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.769600 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a6d6-account-create-x9xqf" event={"ID":"7a1408c9-082d-4560-b82d-4d6b1124d6a5","Type":"ContainerDied","Data":"cfd99d04b7fa132ccccf3982bd85f10d094b2582c8d9ae6d338345cf7a01e552"} Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.769661 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfd99d04b7fa132ccccf3982bd85f10d094b2582c8d9ae6d338345cf7a01e552" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.771121 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2lvqj" event={"ID":"af391774-4ff4-48c7-a0ec-e11a85d772d5","Type":"ContainerDied","Data":"7aa4f099865aaf00462a09fbb46cb946844976197a687accdbcac228ca37a608"} Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.771182 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7aa4f099865aaf00462a09fbb46cb946844976197a687accdbcac228ca37a608" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.771272 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2lvqj" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.782602 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7hksl" event={"ID":"585ba06a-f87a-4133-a144-72545525b9a7","Type":"ContainerDied","Data":"cb27196e1eb9f1821f92f840a3db349e5037e8e24936b42e0e89a267a8d4c408"} Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.782630 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb27196e1eb9f1821f92f840a3db349e5037e8e24936b42e0e89a267a8d4c408" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.782683 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7hksl" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.796350 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-679e-account-create-dr6pd" event={"ID":"3811c7f9-b5f1-4f7c-a839-4c01f37baaf2","Type":"ContainerDied","Data":"a9d927a8cddeeced124da4e08d76d1108a7165715854e4aff60f8414fca8f0fd"} Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.796505 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9d927a8cddeeced124da4e08d76d1108a7165715854e4aff60f8414fca8f0fd" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.796581 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-679e-account-create-dr6pd" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.798441 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rnsgf" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.798505 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rnsgf" event={"ID":"c71fa3e0-58d0-4f10-8bd9-53048c7dbe4a","Type":"ContainerDied","Data":"b8a836facdf3efc745820a3d2043be3df9be02ced71e1eb29541421e9860add3"} Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.798870 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8a836facdf3efc745820a3d2043be3df9be02ced71e1eb29541421e9860add3" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.801036 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10a8d789-78e1-40d8-ae1a-af64558b8dfc","Type":"ContainerStarted","Data":"d059bdb4e71a823807a22de31a783381b23147be075fe8925a3997c9b7773690"} Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.807890 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c71fa3e0-58d0-4f10-8bd9-53048c7dbe4a-operator-scripts\") pod \"c71fa3e0-58d0-4f10-8bd9-53048c7dbe4a\" (UID: \"c71fa3e0-58d0-4f10-8bd9-53048c7dbe4a\") " Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.807928 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8shf\" (UniqueName: \"kubernetes.io/projected/3811c7f9-b5f1-4f7c-a839-4c01f37baaf2-kube-api-access-r8shf\") pod \"3811c7f9-b5f1-4f7c-a839-4c01f37baaf2\" (UID: \"3811c7f9-b5f1-4f7c-a839-4c01f37baaf2\") " Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.807965 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pvkb\" (UniqueName: \"kubernetes.io/projected/c71fa3e0-58d0-4f10-8bd9-53048c7dbe4a-kube-api-access-5pvkb\") pod \"c71fa3e0-58d0-4f10-8bd9-53048c7dbe4a\" (UID: \"c71fa3e0-58d0-4f10-8bd9-53048c7dbe4a\") " Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.808012 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3811c7f9-b5f1-4f7c-a839-4c01f37baaf2-operator-scripts\") pod \"3811c7f9-b5f1-4f7c-a839-4c01f37baaf2\" (UID: \"3811c7f9-b5f1-4f7c-a839-4c01f37baaf2\") " Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.808978 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3811c7f9-b5f1-4f7c-a839-4c01f37baaf2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3811c7f9-b5f1-4f7c-a839-4c01f37baaf2" (UID: "3811c7f9-b5f1-4f7c-a839-4c01f37baaf2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.809254 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c71fa3e0-58d0-4f10-8bd9-53048c7dbe4a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c71fa3e0-58d0-4f10-8bd9-53048c7dbe4a" (UID: "c71fa3e0-58d0-4f10-8bd9-53048c7dbe4a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.809275 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a1408c9-082d-4560-b82d-4d6b1124d6a5-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.809295 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frrjr\" (UniqueName: \"kubernetes.io/projected/7a1408c9-082d-4560-b82d-4d6b1124d6a5-kube-api-access-frrjr\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.809308 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/585ba06a-f87a-4133-a144-72545525b9a7-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.809318 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z97s6\" (UniqueName: \"kubernetes.io/projected/585ba06a-f87a-4133-a144-72545525b9a7-kube-api-access-z97s6\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.814501 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3811c7f9-b5f1-4f7c-a839-4c01f37baaf2-kube-api-access-r8shf" (OuterVolumeSpecName: "kube-api-access-r8shf") pod "3811c7f9-b5f1-4f7c-a839-4c01f37baaf2" (UID: "3811c7f9-b5f1-4f7c-a839-4c01f37baaf2"). InnerVolumeSpecName "kube-api-access-r8shf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.819933 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c71fa3e0-58d0-4f10-8bd9-53048c7dbe4a-kube-api-access-5pvkb" (OuterVolumeSpecName: "kube-api-access-5pvkb") pod "c71fa3e0-58d0-4f10-8bd9-53048c7dbe4a" (UID: "c71fa3e0-58d0-4f10-8bd9-53048c7dbe4a"). InnerVolumeSpecName "kube-api-access-5pvkb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.910736 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c71fa3e0-58d0-4f10-8bd9-53048c7dbe4a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.910772 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8shf\" (UniqueName: \"kubernetes.io/projected/3811c7f9-b5f1-4f7c-a839-4c01f37baaf2-kube-api-access-r8shf\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.910782 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pvkb\" (UniqueName: \"kubernetes.io/projected/c71fa3e0-58d0-4f10-8bd9-53048c7dbe4a-kube-api-access-5pvkb\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:56 crc kubenswrapper[4681]: I1123 07:00:56.910793 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3811c7f9-b5f1-4f7c-a839-4c01f37baaf2-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:58 crc kubenswrapper[4681]: E1123 07:00:58.187194 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="01fa57dd402c8eee8ac14e1d90dc47f67437bbd118411fb55a41875e2a702055" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 23 07:00:58 crc kubenswrapper[4681]: E1123 07:00:58.189642 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="01fa57dd402c8eee8ac14e1d90dc47f67437bbd118411fb55a41875e2a702055" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 23 07:00:58 crc kubenswrapper[4681]: E1123 07:00:58.198689 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="01fa57dd402c8eee8ac14e1d90dc47f67437bbd118411fb55a41875e2a702055" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 23 07:00:58 crc kubenswrapper[4681]: E1123 07:00:58.198744 4681 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-6bc44c9bc7-bkrp7" podUID="0b6259f0-ca09-4fc2-bada-7d505bf1b5a1" containerName="heat-engine" Nov 23 07:00:58 crc kubenswrapper[4681]: I1123 07:00:58.822612 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10a8d789-78e1-40d8-ae1a-af64558b8dfc","Type":"ContainerStarted","Data":"5ef725544e3de5ae5a0c31dcaf6fd7c8fbfbd61c3b26f533bd3cdb21b072bcd4"} Nov 23 07:00:58 crc kubenswrapper[4681]: I1123 07:00:58.822830 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 23 07:00:58 crc kubenswrapper[4681]: I1123 07:00:58.846762 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.589835844 podStartE2EDuration="7.846740784s" podCreationTimestamp="2025-11-23 07:00:51 +0000 UTC" firstStartedPulling="2025-11-23 07:00:53.355403365 +0000 UTC m=+990.424912592" 
lastFinishedPulling="2025-11-23 07:00:57.612308296 +0000 UTC m=+994.681817532" observedRunningTime="2025-11-23 07:00:58.844114773 +0000 UTC m=+995.913624010" watchObservedRunningTime="2025-11-23 07:00:58.846740784 +0000 UTC m=+995.916250022" Nov 23 07:01:00 crc kubenswrapper[4681]: I1123 07:01:00.147858 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29398021-rj9x7"] Nov 23 07:01:00 crc kubenswrapper[4681]: E1123 07:01:00.153071 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3811c7f9-b5f1-4f7c-a839-4c01f37baaf2" containerName="mariadb-account-create" Nov 23 07:01:00 crc kubenswrapper[4681]: I1123 07:01:00.153108 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="3811c7f9-b5f1-4f7c-a839-4c01f37baaf2" containerName="mariadb-account-create" Nov 23 07:01:00 crc kubenswrapper[4681]: E1123 07:01:00.153126 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a1408c9-082d-4560-b82d-4d6b1124d6a5" containerName="mariadb-account-create" Nov 23 07:01:00 crc kubenswrapper[4681]: I1123 07:01:00.153134 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a1408c9-082d-4560-b82d-4d6b1124d6a5" containerName="mariadb-account-create" Nov 23 07:01:00 crc kubenswrapper[4681]: E1123 07:01:00.153152 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e37fed9f-f942-4518-857d-86c5b10f1bb5" containerName="heat-cfnapi" Nov 23 07:01:00 crc kubenswrapper[4681]: I1123 07:01:00.153158 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="e37fed9f-f942-4518-857d-86c5b10f1bb5" containerName="heat-cfnapi" Nov 23 07:01:00 crc kubenswrapper[4681]: E1123 07:01:00.153167 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="585ba06a-f87a-4133-a144-72545525b9a7" containerName="mariadb-database-create" Nov 23 07:01:00 crc kubenswrapper[4681]: I1123 07:01:00.153173 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="585ba06a-f87a-4133-a144-72545525b9a7" containerName="mariadb-database-create" Nov 23 07:01:00 crc kubenswrapper[4681]: E1123 07:01:00.153181 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e13a2ce8-368c-4e82-a354-dcc661a48644" containerName="heat-api" Nov 23 07:01:00 crc kubenswrapper[4681]: I1123 07:01:00.153187 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="e13a2ce8-368c-4e82-a354-dcc661a48644" containerName="heat-api" Nov 23 07:01:00 crc kubenswrapper[4681]: E1123 07:01:00.153201 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e37fed9f-f942-4518-857d-86c5b10f1bb5" containerName="heat-cfnapi" Nov 23 07:01:00 crc kubenswrapper[4681]: I1123 07:01:00.153207 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="e37fed9f-f942-4518-857d-86c5b10f1bb5" containerName="heat-cfnapi" Nov 23 07:01:00 crc kubenswrapper[4681]: E1123 07:01:00.153226 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c71fa3e0-58d0-4f10-8bd9-53048c7dbe4a" containerName="mariadb-database-create" Nov 23 07:01:00 crc kubenswrapper[4681]: I1123 07:01:00.153232 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="c71fa3e0-58d0-4f10-8bd9-53048c7dbe4a" containerName="mariadb-database-create" Nov 23 07:01:00 crc kubenswrapper[4681]: E1123 07:01:00.153241 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af391774-4ff4-48c7-a0ec-e11a85d772d5" containerName="mariadb-database-create" Nov 23 07:01:00 crc kubenswrapper[4681]: I1123 07:01:00.153246 4681 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="af391774-4ff4-48c7-a0ec-e11a85d772d5" containerName="mariadb-database-create" Nov 23 07:01:00 crc kubenswrapper[4681]: E1123 07:01:00.153270 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3318c1e-062e-4748-b6e3-8db9ef610c97" containerName="mariadb-account-create" Nov 23 07:01:00 crc kubenswrapper[4681]: I1123 07:01:00.153277 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3318c1e-062e-4748-b6e3-8db9ef610c97" containerName="mariadb-account-create" Nov 23 07:01:00 crc kubenswrapper[4681]: I1123 07:01:00.153529 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="af391774-4ff4-48c7-a0ec-e11a85d772d5" containerName="mariadb-database-create" Nov 23 07:01:00 crc kubenswrapper[4681]: I1123 07:01:00.153556 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="3811c7f9-b5f1-4f7c-a839-4c01f37baaf2" containerName="mariadb-account-create" Nov 23 07:01:00 crc kubenswrapper[4681]: I1123 07:01:00.153567 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="e13a2ce8-368c-4e82-a354-dcc661a48644" containerName="heat-api" Nov 23 07:01:00 crc kubenswrapper[4681]: I1123 07:01:00.153576 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="585ba06a-f87a-4133-a144-72545525b9a7" containerName="mariadb-database-create" Nov 23 07:01:00 crc kubenswrapper[4681]: I1123 07:01:00.153589 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="e13a2ce8-368c-4e82-a354-dcc661a48644" containerName="heat-api" Nov 23 07:01:00 crc kubenswrapper[4681]: I1123 07:01:00.153595 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="c71fa3e0-58d0-4f10-8bd9-53048c7dbe4a" containerName="mariadb-database-create" Nov 23 07:01:00 crc kubenswrapper[4681]: I1123 07:01:00.153609 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3318c1e-062e-4748-b6e3-8db9ef610c97" containerName="mariadb-account-create" Nov 23 07:01:00 crc kubenswrapper[4681]: I1123 07:01:00.153622 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="e37fed9f-f942-4518-857d-86c5b10f1bb5" containerName="heat-cfnapi" Nov 23 07:01:00 crc kubenswrapper[4681]: I1123 07:01:00.153633 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="e37fed9f-f942-4518-857d-86c5b10f1bb5" containerName="heat-cfnapi" Nov 23 07:01:00 crc kubenswrapper[4681]: I1123 07:01:00.153644 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a1408c9-082d-4560-b82d-4d6b1124d6a5" containerName="mariadb-account-create" Nov 23 07:01:00 crc kubenswrapper[4681]: I1123 07:01:00.154411 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29398021-rj9x7" Nov 23 07:01:00 crc kubenswrapper[4681]: I1123 07:01:00.175588 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29398021-rj9x7"] Nov 23 07:01:00 crc kubenswrapper[4681]: I1123 07:01:00.288020 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d5636bab-9428-473b-8dce-fcbc0c416f44-fernet-keys\") pod \"keystone-cron-29398021-rj9x7\" (UID: \"d5636bab-9428-473b-8dce-fcbc0c416f44\") " pod="openstack/keystone-cron-29398021-rj9x7" Nov 23 07:01:00 crc kubenswrapper[4681]: I1123 07:01:00.288093 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hdrr\" (UniqueName: \"kubernetes.io/projected/d5636bab-9428-473b-8dce-fcbc0c416f44-kube-api-access-2hdrr\") pod \"keystone-cron-29398021-rj9x7\" (UID: \"d5636bab-9428-473b-8dce-fcbc0c416f44\") " pod="openstack/keystone-cron-29398021-rj9x7" Nov 23 07:01:00 crc kubenswrapper[4681]: I1123 07:01:00.288130 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5636bab-9428-473b-8dce-fcbc0c416f44-combined-ca-bundle\") pod \"keystone-cron-29398021-rj9x7\" (UID: \"d5636bab-9428-473b-8dce-fcbc0c416f44\") " pod="openstack/keystone-cron-29398021-rj9x7" Nov 23 07:01:00 crc kubenswrapper[4681]: I1123 07:01:00.288239 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5636bab-9428-473b-8dce-fcbc0c416f44-config-data\") pod \"keystone-cron-29398021-rj9x7\" (UID: \"d5636bab-9428-473b-8dce-fcbc0c416f44\") " pod="openstack/keystone-cron-29398021-rj9x7" Nov 23 07:01:00 crc kubenswrapper[4681]: I1123 07:01:00.390938 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d5636bab-9428-473b-8dce-fcbc0c416f44-fernet-keys\") pod \"keystone-cron-29398021-rj9x7\" (UID: \"d5636bab-9428-473b-8dce-fcbc0c416f44\") " pod="openstack/keystone-cron-29398021-rj9x7" Nov 23 07:01:00 crc kubenswrapper[4681]: I1123 07:01:00.391024 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hdrr\" (UniqueName: \"kubernetes.io/projected/d5636bab-9428-473b-8dce-fcbc0c416f44-kube-api-access-2hdrr\") pod \"keystone-cron-29398021-rj9x7\" (UID: \"d5636bab-9428-473b-8dce-fcbc0c416f44\") " pod="openstack/keystone-cron-29398021-rj9x7" Nov 23 07:01:00 crc kubenswrapper[4681]: I1123 07:01:00.391047 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5636bab-9428-473b-8dce-fcbc0c416f44-combined-ca-bundle\") pod \"keystone-cron-29398021-rj9x7\" (UID: \"d5636bab-9428-473b-8dce-fcbc0c416f44\") " pod="openstack/keystone-cron-29398021-rj9x7" Nov 23 07:01:00 crc kubenswrapper[4681]: I1123 07:01:00.392398 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5636bab-9428-473b-8dce-fcbc0c416f44-config-data\") pod \"keystone-cron-29398021-rj9x7\" (UID: \"d5636bab-9428-473b-8dce-fcbc0c416f44\") " pod="openstack/keystone-cron-29398021-rj9x7" Nov 23 07:01:00 crc kubenswrapper[4681]: I1123 07:01:00.402270 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5636bab-9428-473b-8dce-fcbc0c416f44-combined-ca-bundle\") pod \"keystone-cron-29398021-rj9x7\" (UID: \"d5636bab-9428-473b-8dce-fcbc0c416f44\") " pod="openstack/keystone-cron-29398021-rj9x7" Nov 23 07:01:00 crc kubenswrapper[4681]: I1123 07:01:00.403059 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5636bab-9428-473b-8dce-fcbc0c416f44-config-data\") pod \"keystone-cron-29398021-rj9x7\" (UID: \"d5636bab-9428-473b-8dce-fcbc0c416f44\") " pod="openstack/keystone-cron-29398021-rj9x7" Nov 23 07:01:00 crc kubenswrapper[4681]: I1123 07:01:00.415081 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d5636bab-9428-473b-8dce-fcbc0c416f44-fernet-keys\") pod \"keystone-cron-29398021-rj9x7\" (UID: \"d5636bab-9428-473b-8dce-fcbc0c416f44\") " pod="openstack/keystone-cron-29398021-rj9x7" Nov 23 07:01:00 crc kubenswrapper[4681]: I1123 07:01:00.420289 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hdrr\" (UniqueName: \"kubernetes.io/projected/d5636bab-9428-473b-8dce-fcbc0c416f44-kube-api-access-2hdrr\") pod \"keystone-cron-29398021-rj9x7\" (UID: \"d5636bab-9428-473b-8dce-fcbc0c416f44\") " pod="openstack/keystone-cron-29398021-rj9x7" Nov 23 07:01:00 crc kubenswrapper[4681]: I1123 07:01:00.474876 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29398021-rj9x7" Nov 23 07:01:01 crc kubenswrapper[4681]: I1123 07:01:01.062720 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29398021-rj9x7"] Nov 23 07:01:01 crc kubenswrapper[4681]: I1123 07:01:01.859639 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29398021-rj9x7" event={"ID":"d5636bab-9428-473b-8dce-fcbc0c416f44","Type":"ContainerStarted","Data":"877e869d368c335212d8905f30d456ab4e3636971cecd2d3d382f7ed8a0bd495"} Nov 23 07:01:01 crc kubenswrapper[4681]: I1123 07:01:01.861247 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29398021-rj9x7" event={"ID":"d5636bab-9428-473b-8dce-fcbc0c416f44","Type":"ContainerStarted","Data":"5e80ec0b91791bc5478c9a20e895dc0817eaa9a2766c9b150d222995505d586a"} Nov 23 07:01:01 crc kubenswrapper[4681]: I1123 07:01:01.893703 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-fcdb4576d-g8stp" Nov 23 07:01:01 crc kubenswrapper[4681]: I1123 07:01:01.913329 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29398021-rj9x7" podStartSLOduration=1.9133090259999999 podStartE2EDuration="1.913309026s" podCreationTimestamp="2025-11-23 07:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:01:01.876825565 +0000 UTC m=+998.946334802" watchObservedRunningTime="2025-11-23 07:01:01.913309026 +0000 UTC m=+998.982818253" Nov 23 07:01:02 crc kubenswrapper[4681]: I1123 07:01:02.656625 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qbt7w"] Nov 23 07:01:02 crc kubenswrapper[4681]: E1123 07:01:02.657048 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e13a2ce8-368c-4e82-a354-dcc661a48644" containerName="heat-api" Nov 23 07:01:02 crc kubenswrapper[4681]: I1123 07:01:02.657066 
4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="e13a2ce8-368c-4e82-a354-dcc661a48644" containerName="heat-api" Nov 23 07:01:02 crc kubenswrapper[4681]: I1123 07:01:02.657932 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qbt7w" Nov 23 07:01:02 crc kubenswrapper[4681]: W1123 07:01:02.661023 4681 reflector.go:561] object-"openstack"/"nova-cell0-conductor-scripts": failed to list *v1.Secret: secrets "nova-cell0-conductor-scripts" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Nov 23 07:01:02 crc kubenswrapper[4681]: E1123 07:01:02.661072 4681 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-cell0-conductor-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"nova-cell0-conductor-scripts\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 23 07:01:02 crc kubenswrapper[4681]: W1123 07:01:02.661158 4681 reflector.go:561] object-"openstack"/"nova-nova-dockercfg-hscqd": failed to list *v1.Secret: secrets "nova-nova-dockercfg-hscqd" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Nov 23 07:01:02 crc kubenswrapper[4681]: E1123 07:01:02.661171 4681 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-nova-dockercfg-hscqd\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"nova-nova-dockercfg-hscqd\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 23 07:01:02 crc kubenswrapper[4681]: W1123 07:01:02.661201 4681 reflector.go:561] object-"openstack"/"nova-cell0-conductor-config-data": failed to list *v1.Secret: secrets "nova-cell0-conductor-config-data" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Nov 23 07:01:02 crc kubenswrapper[4681]: E1123 07:01:02.661212 4681 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-cell0-conductor-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"nova-cell0-conductor-config-data\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 23 07:01:02 crc kubenswrapper[4681]: I1123 07:01:02.673384 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qbt7w"] Nov 23 07:01:02 crc kubenswrapper[4681]: I1123 07:01:02.792286 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8667103d-4a0c-4396-a403-d4be07f276cf-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qbt7w\" (UID: \"8667103d-4a0c-4396-a403-d4be07f276cf\") " pod="openstack/nova-cell0-conductor-db-sync-qbt7w" Nov 23 07:01:02 crc kubenswrapper[4681]: I1123 07:01:02.792706 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8667103d-4a0c-4396-a403-d4be07f276cf-scripts\") pod \"nova-cell0-conductor-db-sync-qbt7w\" (UID: \"8667103d-4a0c-4396-a403-d4be07f276cf\") " pod="openstack/nova-cell0-conductor-db-sync-qbt7w" Nov 23 07:01:02 crc kubenswrapper[4681]: I1123 07:01:02.792797 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8667103d-4a0c-4396-a403-d4be07f276cf-config-data\") pod \"nova-cell0-conductor-db-sync-qbt7w\" (UID: \"8667103d-4a0c-4396-a403-d4be07f276cf\") " pod="openstack/nova-cell0-conductor-db-sync-qbt7w" Nov 23 07:01:02 crc kubenswrapper[4681]: I1123 07:01:02.792881 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99862\" (UniqueName: \"kubernetes.io/projected/8667103d-4a0c-4396-a403-d4be07f276cf-kube-api-access-99862\") pod \"nova-cell0-conductor-db-sync-qbt7w\" (UID: \"8667103d-4a0c-4396-a403-d4be07f276cf\") " pod="openstack/nova-cell0-conductor-db-sync-qbt7w" Nov 23 07:01:02 crc kubenswrapper[4681]: I1123 07:01:02.894822 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8667103d-4a0c-4396-a403-d4be07f276cf-config-data\") pod \"nova-cell0-conductor-db-sync-qbt7w\" (UID: \"8667103d-4a0c-4396-a403-d4be07f276cf\") " pod="openstack/nova-cell0-conductor-db-sync-qbt7w" Nov 23 07:01:02 crc kubenswrapper[4681]: I1123 07:01:02.894913 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99862\" (UniqueName: \"kubernetes.io/projected/8667103d-4a0c-4396-a403-d4be07f276cf-kube-api-access-99862\") pod \"nova-cell0-conductor-db-sync-qbt7w\" (UID: \"8667103d-4a0c-4396-a403-d4be07f276cf\") " pod="openstack/nova-cell0-conductor-db-sync-qbt7w" Nov 23 07:01:02 crc kubenswrapper[4681]: I1123 07:01:02.894994 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8667103d-4a0c-4396-a403-d4be07f276cf-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qbt7w\" (UID: \"8667103d-4a0c-4396-a403-d4be07f276cf\") " pod="openstack/nova-cell0-conductor-db-sync-qbt7w" Nov 23 07:01:02 crc kubenswrapper[4681]: I1123 07:01:02.895048 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8667103d-4a0c-4396-a403-d4be07f276cf-scripts\") pod \"nova-cell0-conductor-db-sync-qbt7w\" (UID: \"8667103d-4a0c-4396-a403-d4be07f276cf\") " pod="openstack/nova-cell0-conductor-db-sync-qbt7w" Nov 23 07:01:02 crc kubenswrapper[4681]: I1123 07:01:02.921491 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8667103d-4a0c-4396-a403-d4be07f276cf-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qbt7w\" (UID: \"8667103d-4a0c-4396-a403-d4be07f276cf\") " pod="openstack/nova-cell0-conductor-db-sync-qbt7w" Nov 23 07:01:02 crc kubenswrapper[4681]: I1123 07:01:02.925783 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99862\" (UniqueName: \"kubernetes.io/projected/8667103d-4a0c-4396-a403-d4be07f276cf-kube-api-access-99862\") pod \"nova-cell0-conductor-db-sync-qbt7w\" (UID: \"8667103d-4a0c-4396-a403-d4be07f276cf\") " 
pod="openstack/nova-cell0-conductor-db-sync-qbt7w" Nov 23 07:01:03 crc kubenswrapper[4681]: I1123 07:01:03.480598 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-hscqd" Nov 23 07:01:03 crc kubenswrapper[4681]: I1123 07:01:03.633918 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 23 07:01:03 crc kubenswrapper[4681]: I1123 07:01:03.650220 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8667103d-4a0c-4396-a403-d4be07f276cf-config-data\") pod \"nova-cell0-conductor-db-sync-qbt7w\" (UID: \"8667103d-4a0c-4396-a403-d4be07f276cf\") " pod="openstack/nova-cell0-conductor-db-sync-qbt7w" Nov 23 07:01:03 crc kubenswrapper[4681]: I1123 07:01:03.743087 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 23 07:01:03 crc kubenswrapper[4681]: I1123 07:01:03.751934 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8667103d-4a0c-4396-a403-d4be07f276cf-scripts\") pod \"nova-cell0-conductor-db-sync-qbt7w\" (UID: \"8667103d-4a0c-4396-a403-d4be07f276cf\") " pod="openstack/nova-cell0-conductor-db-sync-qbt7w" Nov 23 07:01:03 crc kubenswrapper[4681]: I1123 07:01:03.893585 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qbt7w" Nov 23 07:01:04 crc kubenswrapper[4681]: I1123 07:01:04.414386 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qbt7w"] Nov 23 07:01:04 crc kubenswrapper[4681]: I1123 07:01:04.669872 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-fcdb4576d-g8stp" Nov 23 07:01:04 crc kubenswrapper[4681]: I1123 07:01:04.746673 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7c48d564b8-5tf9h"] Nov 23 07:01:04 crc kubenswrapper[4681]: I1123 07:01:04.747186 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7c48d564b8-5tf9h" podUID="21819725-3a3a-448c-8bda-e78701b78360" containerName="horizon-log" containerID="cri-o://31c36592291e4d69d502aece2f0eb1b359b46e5ebc3744ea86b0b18dcdc77903" gracePeriod=30 Nov 23 07:01:04 crc kubenswrapper[4681]: I1123 07:01:04.747257 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7c48d564b8-5tf9h" podUID="21819725-3a3a-448c-8bda-e78701b78360" containerName="horizon" containerID="cri-o://f3d5a2229e581dacb0c110eea06b591475ee0f36e81c8e0364256d3b3c1f60ad" gracePeriod=30 Nov 23 07:01:04 crc kubenswrapper[4681]: I1123 07:01:04.941622 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qbt7w" event={"ID":"8667103d-4a0c-4396-a403-d4be07f276cf","Type":"ContainerStarted","Data":"2ffb226bec0a521ea430891f9f72fa193c622d83a9550670aa1aa2a7fb7a0c8f"} Nov 23 07:01:05 crc kubenswrapper[4681]: I1123 07:01:05.960790 4681 generic.go:334] "Generic (PLEG): container finished" podID="d5636bab-9428-473b-8dce-fcbc0c416f44" containerID="877e869d368c335212d8905f30d456ab4e3636971cecd2d3d382f7ed8a0bd495" exitCode=0 Nov 23 07:01:05 crc kubenswrapper[4681]: I1123 07:01:05.960847 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29398021-rj9x7" 
event={"ID":"d5636bab-9428-473b-8dce-fcbc0c416f44","Type":"ContainerDied","Data":"877e869d368c335212d8905f30d456ab4e3636971cecd2d3d382f7ed8a0bd495"} Nov 23 07:01:07 crc kubenswrapper[4681]: I1123 07:01:07.317446 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29398021-rj9x7" Nov 23 07:01:07 crc kubenswrapper[4681]: I1123 07:01:07.412244 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d5636bab-9428-473b-8dce-fcbc0c416f44-fernet-keys\") pod \"d5636bab-9428-473b-8dce-fcbc0c416f44\" (UID: \"d5636bab-9428-473b-8dce-fcbc0c416f44\") " Nov 23 07:01:07 crc kubenswrapper[4681]: I1123 07:01:07.412371 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hdrr\" (UniqueName: \"kubernetes.io/projected/d5636bab-9428-473b-8dce-fcbc0c416f44-kube-api-access-2hdrr\") pod \"d5636bab-9428-473b-8dce-fcbc0c416f44\" (UID: \"d5636bab-9428-473b-8dce-fcbc0c416f44\") " Nov 23 07:01:07 crc kubenswrapper[4681]: I1123 07:01:07.412434 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5636bab-9428-473b-8dce-fcbc0c416f44-combined-ca-bundle\") pod \"d5636bab-9428-473b-8dce-fcbc0c416f44\" (UID: \"d5636bab-9428-473b-8dce-fcbc0c416f44\") " Nov 23 07:01:07 crc kubenswrapper[4681]: I1123 07:01:07.412747 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5636bab-9428-473b-8dce-fcbc0c416f44-config-data\") pod \"d5636bab-9428-473b-8dce-fcbc0c416f44\" (UID: \"d5636bab-9428-473b-8dce-fcbc0c416f44\") " Nov 23 07:01:07 crc kubenswrapper[4681]: I1123 07:01:07.418202 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5636bab-9428-473b-8dce-fcbc0c416f44-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d5636bab-9428-473b-8dce-fcbc0c416f44" (UID: "d5636bab-9428-473b-8dce-fcbc0c416f44"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:01:07 crc kubenswrapper[4681]: I1123 07:01:07.436514 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5636bab-9428-473b-8dce-fcbc0c416f44-kube-api-access-2hdrr" (OuterVolumeSpecName: "kube-api-access-2hdrr") pod "d5636bab-9428-473b-8dce-fcbc0c416f44" (UID: "d5636bab-9428-473b-8dce-fcbc0c416f44"). InnerVolumeSpecName "kube-api-access-2hdrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:01:07 crc kubenswrapper[4681]: I1123 07:01:07.445628 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5636bab-9428-473b-8dce-fcbc0c416f44-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5636bab-9428-473b-8dce-fcbc0c416f44" (UID: "d5636bab-9428-473b-8dce-fcbc0c416f44"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:01:07 crc kubenswrapper[4681]: I1123 07:01:07.489592 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5636bab-9428-473b-8dce-fcbc0c416f44-config-data" (OuterVolumeSpecName: "config-data") pod "d5636bab-9428-473b-8dce-fcbc0c416f44" (UID: "d5636bab-9428-473b-8dce-fcbc0c416f44"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:01:07 crc kubenswrapper[4681]: I1123 07:01:07.515695 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hdrr\" (UniqueName: \"kubernetes.io/projected/d5636bab-9428-473b-8dce-fcbc0c416f44-kube-api-access-2hdrr\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:07 crc kubenswrapper[4681]: I1123 07:01:07.515727 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5636bab-9428-473b-8dce-fcbc0c416f44-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:07 crc kubenswrapper[4681]: I1123 07:01:07.515739 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5636bab-9428-473b-8dce-fcbc0c416f44-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:07 crc kubenswrapper[4681]: I1123 07:01:07.515748 4681 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d5636bab-9428-473b-8dce-fcbc0c416f44-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:08 crc kubenswrapper[4681]: I1123 07:01:08.004547 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29398021-rj9x7" event={"ID":"d5636bab-9428-473b-8dce-fcbc0c416f44","Type":"ContainerDied","Data":"5e80ec0b91791bc5478c9a20e895dc0817eaa9a2766c9b150d222995505d586a"} Nov 23 07:01:08 crc kubenswrapper[4681]: I1123 07:01:08.004609 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e80ec0b91791bc5478c9a20e895dc0817eaa9a2766c9b150d222995505d586a" Nov 23 07:01:08 crc kubenswrapper[4681]: I1123 07:01:08.004701 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29398021-rj9x7" Nov 23 07:01:08 crc kubenswrapper[4681]: E1123 07:01:08.191912 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="01fa57dd402c8eee8ac14e1d90dc47f67437bbd118411fb55a41875e2a702055" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 23 07:01:08 crc kubenswrapper[4681]: E1123 07:01:08.194926 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="01fa57dd402c8eee8ac14e1d90dc47f67437bbd118411fb55a41875e2a702055" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 23 07:01:08 crc kubenswrapper[4681]: E1123 07:01:08.197217 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="01fa57dd402c8eee8ac14e1d90dc47f67437bbd118411fb55a41875e2a702055" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 23 07:01:08 crc kubenswrapper[4681]: E1123 07:01:08.197275 4681 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-6bc44c9bc7-bkrp7" podUID="0b6259f0-ca09-4fc2-bada-7d505bf1b5a1" containerName="heat-engine" Nov 23 07:01:08 crc kubenswrapper[4681]: I1123 07:01:08.749941 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 
07:01:08 crc kubenswrapper[4681]: I1123 07:01:08.750239 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="10a8d789-78e1-40d8-ae1a-af64558b8dfc" containerName="ceilometer-central-agent" containerID="cri-o://47e60d65b81a38d7f6ab34690971679d8ab3e27fcd913b9ef7423550e41e54ba" gracePeriod=30 Nov 23 07:01:08 crc kubenswrapper[4681]: I1123 07:01:08.751010 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="10a8d789-78e1-40d8-ae1a-af64558b8dfc" containerName="proxy-httpd" containerID="cri-o://5ef725544e3de5ae5a0c31dcaf6fd7c8fbfbd61c3b26f533bd3cdb21b072bcd4" gracePeriod=30 Nov 23 07:01:08 crc kubenswrapper[4681]: I1123 07:01:08.751082 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="10a8d789-78e1-40d8-ae1a-af64558b8dfc" containerName="sg-core" containerID="cri-o://d059bdb4e71a823807a22de31a783381b23147be075fe8925a3997c9b7773690" gracePeriod=30 Nov 23 07:01:08 crc kubenswrapper[4681]: I1123 07:01:08.751076 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="10a8d789-78e1-40d8-ae1a-af64558b8dfc" containerName="ceilometer-notification-agent" containerID="cri-o://a1450b2faf397f3c4160252a54561e328262c9bf23111cedeee446379f78d2f4" gracePeriod=30 Nov 23 07:01:08 crc kubenswrapper[4681]: I1123 07:01:08.765333 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="10a8d789-78e1-40d8-ae1a-af64558b8dfc" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.190:3000/\": EOF" Nov 23 07:01:09 crc kubenswrapper[4681]: I1123 07:01:09.042483 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7c48d564b8-5tf9h" podUID="21819725-3a3a-448c-8bda-e78701b78360" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Nov 23 07:01:09 crc kubenswrapper[4681]: I1123 07:01:09.044830 4681 generic.go:334] "Generic (PLEG): container finished" podID="21819725-3a3a-448c-8bda-e78701b78360" containerID="f3d5a2229e581dacb0c110eea06b591475ee0f36e81c8e0364256d3b3c1f60ad" exitCode=0 Nov 23 07:01:09 crc kubenswrapper[4681]: I1123 07:01:09.046267 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c48d564b8-5tf9h" event={"ID":"21819725-3a3a-448c-8bda-e78701b78360","Type":"ContainerDied","Data":"f3d5a2229e581dacb0c110eea06b591475ee0f36e81c8e0364256d3b3c1f60ad"} Nov 23 07:01:09 crc kubenswrapper[4681]: I1123 07:01:09.056430 4681 generic.go:334] "Generic (PLEG): container finished" podID="10a8d789-78e1-40d8-ae1a-af64558b8dfc" containerID="5ef725544e3de5ae5a0c31dcaf6fd7c8fbfbd61c3b26f533bd3cdb21b072bcd4" exitCode=0 Nov 23 07:01:09 crc kubenswrapper[4681]: I1123 07:01:09.056537 4681 generic.go:334] "Generic (PLEG): container finished" podID="10a8d789-78e1-40d8-ae1a-af64558b8dfc" containerID="d059bdb4e71a823807a22de31a783381b23147be075fe8925a3997c9b7773690" exitCode=2 Nov 23 07:01:09 crc kubenswrapper[4681]: I1123 07:01:09.056523 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10a8d789-78e1-40d8-ae1a-af64558b8dfc","Type":"ContainerDied","Data":"5ef725544e3de5ae5a0c31dcaf6fd7c8fbfbd61c3b26f533bd3cdb21b072bcd4"} Nov 23 07:01:09 crc kubenswrapper[4681]: I1123 07:01:09.056608 4681 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"10a8d789-78e1-40d8-ae1a-af64558b8dfc","Type":"ContainerDied","Data":"d059bdb4e71a823807a22de31a783381b23147be075fe8925a3997c9b7773690"} Nov 23 07:01:10 crc kubenswrapper[4681]: I1123 07:01:10.068400 4681 generic.go:334] "Generic (PLEG): container finished" podID="10a8d789-78e1-40d8-ae1a-af64558b8dfc" containerID="a1450b2faf397f3c4160252a54561e328262c9bf23111cedeee446379f78d2f4" exitCode=0 Nov 23 07:01:10 crc kubenswrapper[4681]: I1123 07:01:10.068704 4681 generic.go:334] "Generic (PLEG): container finished" podID="10a8d789-78e1-40d8-ae1a-af64558b8dfc" containerID="47e60d65b81a38d7f6ab34690971679d8ab3e27fcd913b9ef7423550e41e54ba" exitCode=0 Nov 23 07:01:10 crc kubenswrapper[4681]: I1123 07:01:10.068727 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10a8d789-78e1-40d8-ae1a-af64558b8dfc","Type":"ContainerDied","Data":"a1450b2faf397f3c4160252a54561e328262c9bf23111cedeee446379f78d2f4"} Nov 23 07:01:10 crc kubenswrapper[4681]: I1123 07:01:10.068755 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10a8d789-78e1-40d8-ae1a-af64558b8dfc","Type":"ContainerDied","Data":"47e60d65b81a38d7f6ab34690971679d8ab3e27fcd913b9ef7423550e41e54ba"} Nov 23 07:01:12 crc kubenswrapper[4681]: I1123 07:01:12.127855 4681 generic.go:334] "Generic (PLEG): container finished" podID="0b6259f0-ca09-4fc2-bada-7d505bf1b5a1" containerID="01fa57dd402c8eee8ac14e1d90dc47f67437bbd118411fb55a41875e2a702055" exitCode=0 Nov 23 07:01:12 crc kubenswrapper[4681]: I1123 07:01:12.127904 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6bc44c9bc7-bkrp7" event={"ID":"0b6259f0-ca09-4fc2-bada-7d505bf1b5a1","Type":"ContainerDied","Data":"01fa57dd402c8eee8ac14e1d90dc47f67437bbd118411fb55a41875e2a702055"} Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.112750 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.174958 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10a8d789-78e1-40d8-ae1a-af64558b8dfc-config-data\") pod \"10a8d789-78e1-40d8-ae1a-af64558b8dfc\" (UID: \"10a8d789-78e1-40d8-ae1a-af64558b8dfc\") " Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.175022 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10a8d789-78e1-40d8-ae1a-af64558b8dfc-scripts\") pod \"10a8d789-78e1-40d8-ae1a-af64558b8dfc\" (UID: \"10a8d789-78e1-40d8-ae1a-af64558b8dfc\") " Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.175061 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/10a8d789-78e1-40d8-ae1a-af64558b8dfc-sg-core-conf-yaml\") pod \"10a8d789-78e1-40d8-ae1a-af64558b8dfc\" (UID: \"10a8d789-78e1-40d8-ae1a-af64558b8dfc\") " Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.175110 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10a8d789-78e1-40d8-ae1a-af64558b8dfc-combined-ca-bundle\") pod \"10a8d789-78e1-40d8-ae1a-af64558b8dfc\" (UID: \"10a8d789-78e1-40d8-ae1a-af64558b8dfc\") " Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.175147 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llgvl\" (UniqueName: \"kubernetes.io/projected/10a8d789-78e1-40d8-ae1a-af64558b8dfc-kube-api-access-llgvl\") pod \"10a8d789-78e1-40d8-ae1a-af64558b8dfc\" (UID: \"10a8d789-78e1-40d8-ae1a-af64558b8dfc\") " Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.175229 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10a8d789-78e1-40d8-ae1a-af64558b8dfc-log-httpd\") pod \"10a8d789-78e1-40d8-ae1a-af64558b8dfc\" (UID: \"10a8d789-78e1-40d8-ae1a-af64558b8dfc\") " Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.175282 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10a8d789-78e1-40d8-ae1a-af64558b8dfc-run-httpd\") pod \"10a8d789-78e1-40d8-ae1a-af64558b8dfc\" (UID: \"10a8d789-78e1-40d8-ae1a-af64558b8dfc\") " Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.176236 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10a8d789-78e1-40d8-ae1a-af64558b8dfc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "10a8d789-78e1-40d8-ae1a-af64558b8dfc" (UID: "10a8d789-78e1-40d8-ae1a-af64558b8dfc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.177558 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10a8d789-78e1-40d8-ae1a-af64558b8dfc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "10a8d789-78e1-40d8-ae1a-af64558b8dfc" (UID: "10a8d789-78e1-40d8-ae1a-af64558b8dfc"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.182835 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10a8d789-78e1-40d8-ae1a-af64558b8dfc-scripts" (OuterVolumeSpecName: "scripts") pod "10a8d789-78e1-40d8-ae1a-af64558b8dfc" (UID: "10a8d789-78e1-40d8-ae1a-af64558b8dfc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.191955 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10a8d789-78e1-40d8-ae1a-af64558b8dfc-kube-api-access-llgvl" (OuterVolumeSpecName: "kube-api-access-llgvl") pod "10a8d789-78e1-40d8-ae1a-af64558b8dfc" (UID: "10a8d789-78e1-40d8-ae1a-af64558b8dfc"). InnerVolumeSpecName "kube-api-access-llgvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.205833 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10a8d789-78e1-40d8-ae1a-af64558b8dfc","Type":"ContainerDied","Data":"6dd16837f3cd7d2aa5edb32f673943b6aa2776fd17a2b7566c3c525969125865"} Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.205900 4681 scope.go:117] "RemoveContainer" containerID="5ef725544e3de5ae5a0c31dcaf6fd7c8fbfbd61c3b26f533bd3cdb21b072bcd4" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.205982 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.269121 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10a8d789-78e1-40d8-ae1a-af64558b8dfc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "10a8d789-78e1-40d8-ae1a-af64558b8dfc" (UID: "10a8d789-78e1-40d8-ae1a-af64558b8dfc"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.285828 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10a8d789-78e1-40d8-ae1a-af64558b8dfc-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.285868 4681 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/10a8d789-78e1-40d8-ae1a-af64558b8dfc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.285885 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llgvl\" (UniqueName: \"kubernetes.io/projected/10a8d789-78e1-40d8-ae1a-af64558b8dfc-kube-api-access-llgvl\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.285900 4681 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10a8d789-78e1-40d8-ae1a-af64558b8dfc-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.285912 4681 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10a8d789-78e1-40d8-ae1a-af64558b8dfc-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.293015 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-6bc44c9bc7-bkrp7" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.303734 4681 scope.go:117] "RemoveContainer" containerID="d059bdb4e71a823807a22de31a783381b23147be075fe8925a3997c9b7773690" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.332316 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10a8d789-78e1-40d8-ae1a-af64558b8dfc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10a8d789-78e1-40d8-ae1a-af64558b8dfc" (UID: "10a8d789-78e1-40d8-ae1a-af64558b8dfc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.334890 4681 scope.go:117] "RemoveContainer" containerID="a1450b2faf397f3c4160252a54561e328262c9bf23111cedeee446379f78d2f4" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.349961 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10a8d789-78e1-40d8-ae1a-af64558b8dfc-config-data" (OuterVolumeSpecName: "config-data") pod "10a8d789-78e1-40d8-ae1a-af64558b8dfc" (UID: "10a8d789-78e1-40d8-ae1a-af64558b8dfc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.386566 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0b6259f0-ca09-4fc2-bada-7d505bf1b5a1-config-data-custom\") pod \"0b6259f0-ca09-4fc2-bada-7d505bf1b5a1\" (UID: \"0b6259f0-ca09-4fc2-bada-7d505bf1b5a1\") " Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.386642 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b6259f0-ca09-4fc2-bada-7d505bf1b5a1-combined-ca-bundle\") pod \"0b6259f0-ca09-4fc2-bada-7d505bf1b5a1\" (UID: \"0b6259f0-ca09-4fc2-bada-7d505bf1b5a1\") " Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.386677 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtmtg\" (UniqueName: \"kubernetes.io/projected/0b6259f0-ca09-4fc2-bada-7d505bf1b5a1-kube-api-access-jtmtg\") pod \"0b6259f0-ca09-4fc2-bada-7d505bf1b5a1\" (UID: \"0b6259f0-ca09-4fc2-bada-7d505bf1b5a1\") " Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.386767 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b6259f0-ca09-4fc2-bada-7d505bf1b5a1-config-data\") pod \"0b6259f0-ca09-4fc2-bada-7d505bf1b5a1\" (UID: \"0b6259f0-ca09-4fc2-bada-7d505bf1b5a1\") " Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.387199 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10a8d789-78e1-40d8-ae1a-af64558b8dfc-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.387218 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10a8d789-78e1-40d8-ae1a-af64558b8dfc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.387905 4681 scope.go:117] "RemoveContainer" containerID="47e60d65b81a38d7f6ab34690971679d8ab3e27fcd913b9ef7423550e41e54ba" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.398574 4681 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b6259f0-ca09-4fc2-bada-7d505bf1b5a1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0b6259f0-ca09-4fc2-bada-7d505bf1b5a1" (UID: "0b6259f0-ca09-4fc2-bada-7d505bf1b5a1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.412791 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b6259f0-ca09-4fc2-bada-7d505bf1b5a1-kube-api-access-jtmtg" (OuterVolumeSpecName: "kube-api-access-jtmtg") pod "0b6259f0-ca09-4fc2-bada-7d505bf1b5a1" (UID: "0b6259f0-ca09-4fc2-bada-7d505bf1b5a1"). InnerVolumeSpecName "kube-api-access-jtmtg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.456216 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b6259f0-ca09-4fc2-bada-7d505bf1b5a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b6259f0-ca09-4fc2-bada-7d505bf1b5a1" (UID: "0b6259f0-ca09-4fc2-bada-7d505bf1b5a1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.474056 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b6259f0-ca09-4fc2-bada-7d505bf1b5a1-config-data" (OuterVolumeSpecName: "config-data") pod "0b6259f0-ca09-4fc2-bada-7d505bf1b5a1" (UID: "0b6259f0-ca09-4fc2-bada-7d505bf1b5a1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.489857 4681 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0b6259f0-ca09-4fc2-bada-7d505bf1b5a1-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.489882 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b6259f0-ca09-4fc2-bada-7d505bf1b5a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.489894 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtmtg\" (UniqueName: \"kubernetes.io/projected/0b6259f0-ca09-4fc2-bada-7d505bf1b5a1-kube-api-access-jtmtg\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.489903 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b6259f0-ca09-4fc2-bada-7d505bf1b5a1-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.534627 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.549419 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.564604 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:01:17 crc kubenswrapper[4681]: E1123 07:01:17.565083 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b6259f0-ca09-4fc2-bada-7d505bf1b5a1" containerName="heat-engine" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.565106 4681 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0b6259f0-ca09-4fc2-bada-7d505bf1b5a1" containerName="heat-engine" Nov 23 07:01:17 crc kubenswrapper[4681]: E1123 07:01:17.565124 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a8d789-78e1-40d8-ae1a-af64558b8dfc" containerName="sg-core" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.565130 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="10a8d789-78e1-40d8-ae1a-af64558b8dfc" containerName="sg-core" Nov 23 07:01:17 crc kubenswrapper[4681]: E1123 07:01:17.565155 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a8d789-78e1-40d8-ae1a-af64558b8dfc" containerName="ceilometer-notification-agent" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.565161 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="10a8d789-78e1-40d8-ae1a-af64558b8dfc" containerName="ceilometer-notification-agent" Nov 23 07:01:17 crc kubenswrapper[4681]: E1123 07:01:17.565169 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5636bab-9428-473b-8dce-fcbc0c416f44" containerName="keystone-cron" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.565177 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5636bab-9428-473b-8dce-fcbc0c416f44" containerName="keystone-cron" Nov 23 07:01:17 crc kubenswrapper[4681]: E1123 07:01:17.565196 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a8d789-78e1-40d8-ae1a-af64558b8dfc" containerName="ceilometer-central-agent" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.565203 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="10a8d789-78e1-40d8-ae1a-af64558b8dfc" containerName="ceilometer-central-agent" Nov 23 07:01:17 crc kubenswrapper[4681]: E1123 07:01:17.565216 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a8d789-78e1-40d8-ae1a-af64558b8dfc" containerName="proxy-httpd" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.565222 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="10a8d789-78e1-40d8-ae1a-af64558b8dfc" containerName="proxy-httpd" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.565390 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="10a8d789-78e1-40d8-ae1a-af64558b8dfc" containerName="proxy-httpd" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.565408 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b6259f0-ca09-4fc2-bada-7d505bf1b5a1" containerName="heat-engine" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.565415 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="10a8d789-78e1-40d8-ae1a-af64558b8dfc" containerName="ceilometer-notification-agent" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.565425 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5636bab-9428-473b-8dce-fcbc0c416f44" containerName="keystone-cron" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.565435 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="10a8d789-78e1-40d8-ae1a-af64558b8dfc" containerName="sg-core" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.565448 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="10a8d789-78e1-40d8-ae1a-af64558b8dfc" containerName="ceilometer-central-agent" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.567070 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.571600 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.572331 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.574540 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.595403 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/462b970b-ce7f-444d-840e-3117d130e01c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"462b970b-ce7f-444d-840e-3117d130e01c\") " pod="openstack/ceilometer-0" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.595445 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjp9h\" (UniqueName: \"kubernetes.io/projected/462b970b-ce7f-444d-840e-3117d130e01c-kube-api-access-tjp9h\") pod \"ceilometer-0\" (UID: \"462b970b-ce7f-444d-840e-3117d130e01c\") " pod="openstack/ceilometer-0" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.595571 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462b970b-ce7f-444d-840e-3117d130e01c-config-data\") pod \"ceilometer-0\" (UID: \"462b970b-ce7f-444d-840e-3117d130e01c\") " pod="openstack/ceilometer-0" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.595616 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/462b970b-ce7f-444d-840e-3117d130e01c-log-httpd\") pod \"ceilometer-0\" (UID: \"462b970b-ce7f-444d-840e-3117d130e01c\") " pod="openstack/ceilometer-0" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.595649 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/462b970b-ce7f-444d-840e-3117d130e01c-run-httpd\") pod \"ceilometer-0\" (UID: \"462b970b-ce7f-444d-840e-3117d130e01c\") " pod="openstack/ceilometer-0" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.595669 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/462b970b-ce7f-444d-840e-3117d130e01c-scripts\") pod \"ceilometer-0\" (UID: \"462b970b-ce7f-444d-840e-3117d130e01c\") " pod="openstack/ceilometer-0" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.595702 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462b970b-ce7f-444d-840e-3117d130e01c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"462b970b-ce7f-444d-840e-3117d130e01c\") " pod="openstack/ceilometer-0" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.697813 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/462b970b-ce7f-444d-840e-3117d130e01c-run-httpd\") pod \"ceilometer-0\" (UID: \"462b970b-ce7f-444d-840e-3117d130e01c\") " pod="openstack/ceilometer-0" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.697862 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/462b970b-ce7f-444d-840e-3117d130e01c-scripts\") pod \"ceilometer-0\" (UID: \"462b970b-ce7f-444d-840e-3117d130e01c\") " pod="openstack/ceilometer-0" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.698381 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/462b970b-ce7f-444d-840e-3117d130e01c-run-httpd\") pod \"ceilometer-0\" (UID: \"462b970b-ce7f-444d-840e-3117d130e01c\") " pod="openstack/ceilometer-0" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.698585 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462b970b-ce7f-444d-840e-3117d130e01c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"462b970b-ce7f-444d-840e-3117d130e01c\") " pod="openstack/ceilometer-0" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.698654 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/462b970b-ce7f-444d-840e-3117d130e01c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"462b970b-ce7f-444d-840e-3117d130e01c\") " pod="openstack/ceilometer-0" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.698713 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjp9h\" (UniqueName: \"kubernetes.io/projected/462b970b-ce7f-444d-840e-3117d130e01c-kube-api-access-tjp9h\") pod \"ceilometer-0\" (UID: \"462b970b-ce7f-444d-840e-3117d130e01c\") " pod="openstack/ceilometer-0" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.698915 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462b970b-ce7f-444d-840e-3117d130e01c-config-data\") pod \"ceilometer-0\" (UID: \"462b970b-ce7f-444d-840e-3117d130e01c\") " pod="openstack/ceilometer-0" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.698968 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/462b970b-ce7f-444d-840e-3117d130e01c-log-httpd\") pod \"ceilometer-0\" (UID: \"462b970b-ce7f-444d-840e-3117d130e01c\") " pod="openstack/ceilometer-0" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.699315 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/462b970b-ce7f-444d-840e-3117d130e01c-log-httpd\") pod \"ceilometer-0\" (UID: \"462b970b-ce7f-444d-840e-3117d130e01c\") " pod="openstack/ceilometer-0" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.702378 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/462b970b-ce7f-444d-840e-3117d130e01c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"462b970b-ce7f-444d-840e-3117d130e01c\") " pod="openstack/ceilometer-0" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.702620 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/462b970b-ce7f-444d-840e-3117d130e01c-scripts\") pod \"ceilometer-0\" (UID: \"462b970b-ce7f-444d-840e-3117d130e01c\") " pod="openstack/ceilometer-0" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.703102 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462b970b-ce7f-444d-840e-3117d130e01c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"462b970b-ce7f-444d-840e-3117d130e01c\") " pod="openstack/ceilometer-0" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.709340 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462b970b-ce7f-444d-840e-3117d130e01c-config-data\") pod \"ceilometer-0\" (UID: \"462b970b-ce7f-444d-840e-3117d130e01c\") " pod="openstack/ceilometer-0" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.722080 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjp9h\" (UniqueName: \"kubernetes.io/projected/462b970b-ce7f-444d-840e-3117d130e01c-kube-api-access-tjp9h\") pod \"ceilometer-0\" (UID: \"462b970b-ce7f-444d-840e-3117d130e01c\") " pod="openstack/ceilometer-0" Nov 23 07:01:17 crc kubenswrapper[4681]: I1123 07:01:17.887897 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:01:18 crc kubenswrapper[4681]: I1123 07:01:18.217048 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qbt7w" event={"ID":"8667103d-4a0c-4396-a403-d4be07f276cf","Type":"ContainerStarted","Data":"79101d9508bf5f5c66972e162c27631ae5850a756be29d2b85a8a3ce7cdf3679"} Nov 23 07:01:18 crc kubenswrapper[4681]: I1123 07:01:18.222096 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6bc44c9bc7-bkrp7" event={"ID":"0b6259f0-ca09-4fc2-bada-7d505bf1b5a1","Type":"ContainerDied","Data":"b409e141d2f359b90eac470540fde2f906ab371e6d0e2cf3a6413d305fbd022e"} Nov 23 07:01:18 crc kubenswrapper[4681]: I1123 07:01:18.222138 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-6bc44c9bc7-bkrp7" Nov 23 07:01:18 crc kubenswrapper[4681]: I1123 07:01:18.222178 4681 scope.go:117] "RemoveContainer" containerID="01fa57dd402c8eee8ac14e1d90dc47f67437bbd118411fb55a41875e2a702055" Nov 23 07:01:18 crc kubenswrapper[4681]: I1123 07:01:18.252400 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-qbt7w" podStartSLOduration=3.682509127 podStartE2EDuration="16.252384877s" podCreationTimestamp="2025-11-23 07:01:02 +0000 UTC" firstStartedPulling="2025-11-23 07:01:04.436336032 +0000 UTC m=+1001.505845269" lastFinishedPulling="2025-11-23 07:01:17.006211783 +0000 UTC m=+1014.075721019" observedRunningTime="2025-11-23 07:01:18.23817696 +0000 UTC m=+1015.307686197" watchObservedRunningTime="2025-11-23 07:01:18.252384877 +0000 UTC m=+1015.321894113" Nov 23 07:01:18 crc kubenswrapper[4681]: I1123 07:01:18.270293 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6bc44c9bc7-bkrp7"] Nov 23 07:01:18 crc kubenswrapper[4681]: I1123 07:01:18.276706 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-6bc44c9bc7-bkrp7"] Nov 23 07:01:18 crc kubenswrapper[4681]: I1123 07:01:18.322669 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:01:18 crc kubenswrapper[4681]: W1123 07:01:18.327890 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod462b970b_ce7f_444d_840e_3117d130e01c.slice/crio-9e72c085acec57334b2bc82f4bbf8675003e7057751a9d140b79dd96ea86ee1d WatchSource:0}: Error finding container 9e72c085acec57334b2bc82f4bbf8675003e7057751a9d140b79dd96ea86ee1d: Status 404 returned error can't find the container with id 9e72c085acec57334b2bc82f4bbf8675003e7057751a9d140b79dd96ea86ee1d Nov 23 07:01:19 crc kubenswrapper[4681]: I1123 07:01:19.042522 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7c48d564b8-5tf9h" podUID="21819725-3a3a-448c-8bda-e78701b78360" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Nov 23 07:01:19 crc kubenswrapper[4681]: I1123 07:01:19.237883 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"462b970b-ce7f-444d-840e-3117d130e01c","Type":"ContainerStarted","Data":"5a9e1569284dda0fbdbc0d013eb966710028eb280106ccb5d76b4fb0801a8dc3"} Nov 23 07:01:19 crc kubenswrapper[4681]: I1123 07:01:19.238259 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"462b970b-ce7f-444d-840e-3117d130e01c","Type":"ContainerStarted","Data":"9e72c085acec57334b2bc82f4bbf8675003e7057751a9d140b79dd96ea86ee1d"} Nov 23 07:01:19 crc kubenswrapper[4681]: I1123 07:01:19.266609 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b6259f0-ca09-4fc2-bada-7d505bf1b5a1" path="/var/lib/kubelet/pods/0b6259f0-ca09-4fc2-bada-7d505bf1b5a1/volumes" Nov 23 07:01:19 crc kubenswrapper[4681]: I1123 07:01:19.267640 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10a8d789-78e1-40d8-ae1a-af64558b8dfc" path="/var/lib/kubelet/pods/10a8d789-78e1-40d8-ae1a-af64558b8dfc/volumes" Nov 23 07:01:20 crc kubenswrapper[4681]: I1123 07:01:20.252337 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"462b970b-ce7f-444d-840e-3117d130e01c","Type":"ContainerStarted","Data":"0817a43867954a5612c0312ca30c8b78f665924487e0460c81fbe994b61a2343"} Nov 23 07:01:21 crc kubenswrapper[4681]: I1123 07:01:21.266485 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"462b970b-ce7f-444d-840e-3117d130e01c","Type":"ContainerStarted","Data":"4a01ec35f737dbbc453219ed7b9dd93ff5d3b52449fd3a95c2954634300f2364"} Nov 23 07:01:22 crc kubenswrapper[4681]: I1123 07:01:22.277483 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"462b970b-ce7f-444d-840e-3117d130e01c","Type":"ContainerStarted","Data":"bc90d3b37f9240320976164e25c12688fb8c523f935d0dc3a797b11103f89285"} Nov 23 07:01:22 crc kubenswrapper[4681]: I1123 07:01:22.277880 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 23 07:01:22 crc kubenswrapper[4681]: I1123 07:01:22.301300 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.6889862949999999 podStartE2EDuration="5.301284457s" podCreationTimestamp="2025-11-23 07:01:17 +0000 UTC" firstStartedPulling="2025-11-23 07:01:18.330401677 +0000 UTC m=+1015.399910903" lastFinishedPulling="2025-11-23 07:01:21.942699828 +0000 UTC m=+1019.012209065" observedRunningTime="2025-11-23 07:01:22.294210981 +0000 UTC m=+1019.363720218" watchObservedRunningTime="2025-11-23 07:01:22.301284457 +0000 UTC m=+1019.370793693" Nov 23 07:01:24 crc kubenswrapper[4681]: I1123 07:01:24.294753 4681 generic.go:334] "Generic (PLEG): container finished" podID="8667103d-4a0c-4396-a403-d4be07f276cf" containerID="79101d9508bf5f5c66972e162c27631ae5850a756be29d2b85a8a3ce7cdf3679" exitCode=0 Nov 23 07:01:24 crc kubenswrapper[4681]: I1123 07:01:24.296044 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qbt7w" event={"ID":"8667103d-4a0c-4396-a403-d4be07f276cf","Type":"ContainerDied","Data":"79101d9508bf5f5c66972e162c27631ae5850a756be29d2b85a8a3ce7cdf3679"} Nov 23 07:01:25 crc kubenswrapper[4681]: I1123 07:01:25.625577 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qbt7w" Nov 23 07:01:25 crc kubenswrapper[4681]: I1123 07:01:25.670855 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8667103d-4a0c-4396-a403-d4be07f276cf-combined-ca-bundle\") pod \"8667103d-4a0c-4396-a403-d4be07f276cf\" (UID: \"8667103d-4a0c-4396-a403-d4be07f276cf\") " Nov 23 07:01:25 crc kubenswrapper[4681]: I1123 07:01:25.670924 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8667103d-4a0c-4396-a403-d4be07f276cf-config-data\") pod \"8667103d-4a0c-4396-a403-d4be07f276cf\" (UID: \"8667103d-4a0c-4396-a403-d4be07f276cf\") " Nov 23 07:01:25 crc kubenswrapper[4681]: I1123 07:01:25.670959 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8667103d-4a0c-4396-a403-d4be07f276cf-scripts\") pod \"8667103d-4a0c-4396-a403-d4be07f276cf\" (UID: \"8667103d-4a0c-4396-a403-d4be07f276cf\") " Nov 23 07:01:25 crc kubenswrapper[4681]: I1123 07:01:25.671190 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99862\" (UniqueName: \"kubernetes.io/projected/8667103d-4a0c-4396-a403-d4be07f276cf-kube-api-access-99862\") pod \"8667103d-4a0c-4396-a403-d4be07f276cf\" (UID: \"8667103d-4a0c-4396-a403-d4be07f276cf\") " Nov 23 07:01:25 crc kubenswrapper[4681]: I1123 07:01:25.681611 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8667103d-4a0c-4396-a403-d4be07f276cf-kube-api-access-99862" (OuterVolumeSpecName: "kube-api-access-99862") pod "8667103d-4a0c-4396-a403-d4be07f276cf" (UID: "8667103d-4a0c-4396-a403-d4be07f276cf"). InnerVolumeSpecName "kube-api-access-99862". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:01:25 crc kubenswrapper[4681]: I1123 07:01:25.684409 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8667103d-4a0c-4396-a403-d4be07f276cf-scripts" (OuterVolumeSpecName: "scripts") pod "8667103d-4a0c-4396-a403-d4be07f276cf" (UID: "8667103d-4a0c-4396-a403-d4be07f276cf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:01:25 crc kubenswrapper[4681]: I1123 07:01:25.718376 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8667103d-4a0c-4396-a403-d4be07f276cf-config-data" (OuterVolumeSpecName: "config-data") pod "8667103d-4a0c-4396-a403-d4be07f276cf" (UID: "8667103d-4a0c-4396-a403-d4be07f276cf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:01:25 crc kubenswrapper[4681]: I1123 07:01:25.743567 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8667103d-4a0c-4396-a403-d4be07f276cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8667103d-4a0c-4396-a403-d4be07f276cf" (UID: "8667103d-4a0c-4396-a403-d4be07f276cf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:01:25 crc kubenswrapper[4681]: I1123 07:01:25.774253 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99862\" (UniqueName: \"kubernetes.io/projected/8667103d-4a0c-4396-a403-d4be07f276cf-kube-api-access-99862\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:25 crc kubenswrapper[4681]: I1123 07:01:25.774283 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8667103d-4a0c-4396-a403-d4be07f276cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:25 crc kubenswrapper[4681]: I1123 07:01:25.774293 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8667103d-4a0c-4396-a403-d4be07f276cf-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:25 crc kubenswrapper[4681]: I1123 07:01:25.774304 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8667103d-4a0c-4396-a403-d4be07f276cf-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:26 crc kubenswrapper[4681]: I1123 07:01:26.322306 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qbt7w" event={"ID":"8667103d-4a0c-4396-a403-d4be07f276cf","Type":"ContainerDied","Data":"2ffb226bec0a521ea430891f9f72fa193c622d83a9550670aa1aa2a7fb7a0c8f"} Nov 23 07:01:26 crc kubenswrapper[4681]: I1123 07:01:26.322351 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ffb226bec0a521ea430891f9f72fa193c622d83a9550670aa1aa2a7fb7a0c8f" Nov 23 07:01:26 crc kubenswrapper[4681]: I1123 07:01:26.322425 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qbt7w" Nov 23 07:01:26 crc kubenswrapper[4681]: I1123 07:01:26.410662 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 07:01:26 crc kubenswrapper[4681]: E1123 07:01:26.411147 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8667103d-4a0c-4396-a403-d4be07f276cf" containerName="nova-cell0-conductor-db-sync" Nov 23 07:01:26 crc kubenswrapper[4681]: I1123 07:01:26.411167 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="8667103d-4a0c-4396-a403-d4be07f276cf" containerName="nova-cell0-conductor-db-sync" Nov 23 07:01:26 crc kubenswrapper[4681]: I1123 07:01:26.411440 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="8667103d-4a0c-4396-a403-d4be07f276cf" containerName="nova-cell0-conductor-db-sync" Nov 23 07:01:26 crc kubenswrapper[4681]: I1123 07:01:26.412262 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 23 07:01:26 crc kubenswrapper[4681]: I1123 07:01:26.414925 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-hscqd" Nov 23 07:01:26 crc kubenswrapper[4681]: I1123 07:01:26.416932 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 23 07:01:26 crc kubenswrapper[4681]: I1123 07:01:26.433210 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 07:01:26 crc kubenswrapper[4681]: I1123 07:01:26.495505 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7178422-9a7a-47aa-b651-113534bebf26-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"a7178422-9a7a-47aa-b651-113534bebf26\") " pod="openstack/nova-cell0-conductor-0" Nov 23 07:01:26 crc kubenswrapper[4681]: I1123 07:01:26.495573 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2746n\" (UniqueName: \"kubernetes.io/projected/a7178422-9a7a-47aa-b651-113534bebf26-kube-api-access-2746n\") pod \"nova-cell0-conductor-0\" (UID: \"a7178422-9a7a-47aa-b651-113534bebf26\") " pod="openstack/nova-cell0-conductor-0" Nov 23 07:01:26 crc kubenswrapper[4681]: I1123 07:01:26.495851 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7178422-9a7a-47aa-b651-113534bebf26-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"a7178422-9a7a-47aa-b651-113534bebf26\") " pod="openstack/nova-cell0-conductor-0" Nov 23 07:01:26 crc kubenswrapper[4681]: I1123 07:01:26.598613 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7178422-9a7a-47aa-b651-113534bebf26-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"a7178422-9a7a-47aa-b651-113534bebf26\") " pod="openstack/nova-cell0-conductor-0" Nov 23 07:01:26 crc kubenswrapper[4681]: I1123 07:01:26.598744 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2746n\" (UniqueName: \"kubernetes.io/projected/a7178422-9a7a-47aa-b651-113534bebf26-kube-api-access-2746n\") pod \"nova-cell0-conductor-0\" (UID: \"a7178422-9a7a-47aa-b651-113534bebf26\") " pod="openstack/nova-cell0-conductor-0" Nov 23 07:01:26 crc kubenswrapper[4681]: I1123 07:01:26.598827 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7178422-9a7a-47aa-b651-113534bebf26-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"a7178422-9a7a-47aa-b651-113534bebf26\") " pod="openstack/nova-cell0-conductor-0" Nov 23 07:01:26 crc kubenswrapper[4681]: I1123 07:01:26.604540 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7178422-9a7a-47aa-b651-113534bebf26-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"a7178422-9a7a-47aa-b651-113534bebf26\") " pod="openstack/nova-cell0-conductor-0" Nov 23 07:01:26 crc kubenswrapper[4681]: I1123 07:01:26.611076 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7178422-9a7a-47aa-b651-113534bebf26-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"a7178422-9a7a-47aa-b651-113534bebf26\") " pod="openstack/nova-cell0-conductor-0" Nov 23 07:01:26 crc kubenswrapper[4681]: I1123 07:01:26.615616 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2746n\" (UniqueName: \"kubernetes.io/projected/a7178422-9a7a-47aa-b651-113534bebf26-kube-api-access-2746n\") pod \"nova-cell0-conductor-0\" (UID: \"a7178422-9a7a-47aa-b651-113534bebf26\") " pod="openstack/nova-cell0-conductor-0" Nov 23 07:01:26 crc kubenswrapper[4681]: I1123 07:01:26.729767 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 23 07:01:27 crc kubenswrapper[4681]: I1123 07:01:27.014395 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:01:27 crc kubenswrapper[4681]: I1123 07:01:27.015536 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="462b970b-ce7f-444d-840e-3117d130e01c" containerName="sg-core" containerID="cri-o://4a01ec35f737dbbc453219ed7b9dd93ff5d3b52449fd3a95c2954634300f2364" gracePeriod=30 Nov 23 07:01:27 crc kubenswrapper[4681]: I1123 07:01:27.015565 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="462b970b-ce7f-444d-840e-3117d130e01c" containerName="ceilometer-notification-agent" containerID="cri-o://0817a43867954a5612c0312ca30c8b78f665924487e0460c81fbe994b61a2343" gracePeriod=30 Nov 23 07:01:27 crc kubenswrapper[4681]: I1123 07:01:27.015556 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="462b970b-ce7f-444d-840e-3117d130e01c" containerName="proxy-httpd" containerID="cri-o://bc90d3b37f9240320976164e25c12688fb8c523f935d0dc3a797b11103f89285" gracePeriod=30 Nov 23 07:01:27 crc kubenswrapper[4681]: I1123 07:01:27.015543 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="462b970b-ce7f-444d-840e-3117d130e01c" containerName="ceilometer-central-agent" containerID="cri-o://5a9e1569284dda0fbdbc0d013eb966710028eb280106ccb5d76b4fb0801a8dc3" gracePeriod=30 Nov 23 07:01:27 crc kubenswrapper[4681]: I1123 07:01:27.204477 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 07:01:27 crc kubenswrapper[4681]: W1123 07:01:27.204854 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7178422_9a7a_47aa_b651_113534bebf26.slice/crio-3095cd3a5ffd6ad3bffbd5c074c1293dbc0acc77a6ed404939eddd1b002f0be0 WatchSource:0}: Error finding container 3095cd3a5ffd6ad3bffbd5c074c1293dbc0acc77a6ed404939eddd1b002f0be0: Status 404 returned error can't find the container with id 3095cd3a5ffd6ad3bffbd5c074c1293dbc0acc77a6ed404939eddd1b002f0be0 Nov 23 07:01:27 crc kubenswrapper[4681]: I1123 07:01:27.334738 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"a7178422-9a7a-47aa-b651-113534bebf26","Type":"ContainerStarted","Data":"3095cd3a5ffd6ad3bffbd5c074c1293dbc0acc77a6ed404939eddd1b002f0be0"} Nov 23 07:01:27 crc kubenswrapper[4681]: I1123 07:01:27.338949 4681 generic.go:334] "Generic (PLEG): container finished" podID="462b970b-ce7f-444d-840e-3117d130e01c" containerID="bc90d3b37f9240320976164e25c12688fb8c523f935d0dc3a797b11103f89285" exitCode=0 Nov 23 07:01:27 crc kubenswrapper[4681]: I1123 07:01:27.338998 4681 generic.go:334] "Generic 
(PLEG): container finished" podID="462b970b-ce7f-444d-840e-3117d130e01c" containerID="4a01ec35f737dbbc453219ed7b9dd93ff5d3b52449fd3a95c2954634300f2364" exitCode=2 Nov 23 07:01:27 crc kubenswrapper[4681]: I1123 07:01:27.339040 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"462b970b-ce7f-444d-840e-3117d130e01c","Type":"ContainerDied","Data":"bc90d3b37f9240320976164e25c12688fb8c523f935d0dc3a797b11103f89285"} Nov 23 07:01:27 crc kubenswrapper[4681]: I1123 07:01:27.339105 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"462b970b-ce7f-444d-840e-3117d130e01c","Type":"ContainerDied","Data":"4a01ec35f737dbbc453219ed7b9dd93ff5d3b52449fd3a95c2954634300f2364"} Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.059747 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.129214 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/462b970b-ce7f-444d-840e-3117d130e01c-scripts\") pod \"462b970b-ce7f-444d-840e-3117d130e01c\" (UID: \"462b970b-ce7f-444d-840e-3117d130e01c\") " Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.129381 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/462b970b-ce7f-444d-840e-3117d130e01c-log-httpd\") pod \"462b970b-ce7f-444d-840e-3117d130e01c\" (UID: \"462b970b-ce7f-444d-840e-3117d130e01c\") " Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.129434 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjp9h\" (UniqueName: \"kubernetes.io/projected/462b970b-ce7f-444d-840e-3117d130e01c-kube-api-access-tjp9h\") pod \"462b970b-ce7f-444d-840e-3117d130e01c\" (UID: \"462b970b-ce7f-444d-840e-3117d130e01c\") " Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.129512 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462b970b-ce7f-444d-840e-3117d130e01c-combined-ca-bundle\") pod \"462b970b-ce7f-444d-840e-3117d130e01c\" (UID: \"462b970b-ce7f-444d-840e-3117d130e01c\") " Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.129636 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/462b970b-ce7f-444d-840e-3117d130e01c-run-httpd\") pod \"462b970b-ce7f-444d-840e-3117d130e01c\" (UID: \"462b970b-ce7f-444d-840e-3117d130e01c\") " Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.129682 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/462b970b-ce7f-444d-840e-3117d130e01c-sg-core-conf-yaml\") pod \"462b970b-ce7f-444d-840e-3117d130e01c\" (UID: \"462b970b-ce7f-444d-840e-3117d130e01c\") " Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.129729 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462b970b-ce7f-444d-840e-3117d130e01c-config-data\") pod \"462b970b-ce7f-444d-840e-3117d130e01c\" (UID: \"462b970b-ce7f-444d-840e-3117d130e01c\") " Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.129959 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/462b970b-ce7f-444d-840e-3117d130e01c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "462b970b-ce7f-444d-840e-3117d130e01c" (UID: "462b970b-ce7f-444d-840e-3117d130e01c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.130623 4681 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/462b970b-ce7f-444d-840e-3117d130e01c-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.130812 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/462b970b-ce7f-444d-840e-3117d130e01c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "462b970b-ce7f-444d-840e-3117d130e01c" (UID: "462b970b-ce7f-444d-840e-3117d130e01c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.144274 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/462b970b-ce7f-444d-840e-3117d130e01c-scripts" (OuterVolumeSpecName: "scripts") pod "462b970b-ce7f-444d-840e-3117d130e01c" (UID: "462b970b-ce7f-444d-840e-3117d130e01c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.144492 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/462b970b-ce7f-444d-840e-3117d130e01c-kube-api-access-tjp9h" (OuterVolumeSpecName: "kube-api-access-tjp9h") pod "462b970b-ce7f-444d-840e-3117d130e01c" (UID: "462b970b-ce7f-444d-840e-3117d130e01c"). InnerVolumeSpecName "kube-api-access-tjp9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.159665 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/462b970b-ce7f-444d-840e-3117d130e01c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "462b970b-ce7f-444d-840e-3117d130e01c" (UID: "462b970b-ce7f-444d-840e-3117d130e01c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.201796 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/462b970b-ce7f-444d-840e-3117d130e01c-config-data" (OuterVolumeSpecName: "config-data") pod "462b970b-ce7f-444d-840e-3117d130e01c" (UID: "462b970b-ce7f-444d-840e-3117d130e01c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.209194 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/462b970b-ce7f-444d-840e-3117d130e01c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "462b970b-ce7f-444d-840e-3117d130e01c" (UID: "462b970b-ce7f-444d-840e-3117d130e01c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.232294 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjp9h\" (UniqueName: \"kubernetes.io/projected/462b970b-ce7f-444d-840e-3117d130e01c-kube-api-access-tjp9h\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.232542 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462b970b-ce7f-444d-840e-3117d130e01c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.232556 4681 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/462b970b-ce7f-444d-840e-3117d130e01c-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.232565 4681 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/462b970b-ce7f-444d-840e-3117d130e01c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.232577 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462b970b-ce7f-444d-840e-3117d130e01c-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.232586 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/462b970b-ce7f-444d-840e-3117d130e01c-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.351527 4681 generic.go:334] "Generic (PLEG): container finished" podID="462b970b-ce7f-444d-840e-3117d130e01c" containerID="0817a43867954a5612c0312ca30c8b78f665924487e0460c81fbe994b61a2343" exitCode=0 Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.351559 4681 generic.go:334] "Generic (PLEG): container finished" podID="462b970b-ce7f-444d-840e-3117d130e01c" containerID="5a9e1569284dda0fbdbc0d013eb966710028eb280106ccb5d76b4fb0801a8dc3" exitCode=0 Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.351598 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"462b970b-ce7f-444d-840e-3117d130e01c","Type":"ContainerDied","Data":"0817a43867954a5612c0312ca30c8b78f665924487e0460c81fbe994b61a2343"} Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.351642 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"462b970b-ce7f-444d-840e-3117d130e01c","Type":"ContainerDied","Data":"5a9e1569284dda0fbdbc0d013eb966710028eb280106ccb5d76b4fb0801a8dc3"} Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.351655 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"462b970b-ce7f-444d-840e-3117d130e01c","Type":"ContainerDied","Data":"9e72c085acec57334b2bc82f4bbf8675003e7057751a9d140b79dd96ea86ee1d"} Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.351673 4681 scope.go:117] "RemoveContainer" containerID="bc90d3b37f9240320976164e25c12688fb8c523f935d0dc3a797b11103f89285" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.351799 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.356177 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"a7178422-9a7a-47aa-b651-113534bebf26","Type":"ContainerStarted","Data":"2a40fcc0372badfaa48be257f9f8999908b98bf4db579245afe5da991ce75352"} Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.356727 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.375470 4681 scope.go:117] "RemoveContainer" containerID="4a01ec35f737dbbc453219ed7b9dd93ff5d3b52449fd3a95c2954634300f2364" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.398572 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.3985537900000002 podStartE2EDuration="2.39855379s" podCreationTimestamp="2025-11-23 07:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:01:28.388032606 +0000 UTC m=+1025.457541833" watchObservedRunningTime="2025-11-23 07:01:28.39855379 +0000 UTC m=+1025.468063027" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.405970 4681 scope.go:117] "RemoveContainer" containerID="0817a43867954a5612c0312ca30c8b78f665924487e0460c81fbe994b61a2343" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.408614 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.415359 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.423412 4681 scope.go:117] "RemoveContainer" containerID="5a9e1569284dda0fbdbc0d013eb966710028eb280106ccb5d76b4fb0801a8dc3" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.433072 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:01:28 crc kubenswrapper[4681]: E1123 07:01:28.435675 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="462b970b-ce7f-444d-840e-3117d130e01c" containerName="ceilometer-central-agent" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.435771 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="462b970b-ce7f-444d-840e-3117d130e01c" containerName="ceilometer-central-agent" Nov 23 07:01:28 crc kubenswrapper[4681]: E1123 07:01:28.435848 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="462b970b-ce7f-444d-840e-3117d130e01c" containerName="ceilometer-notification-agent" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.435904 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="462b970b-ce7f-444d-840e-3117d130e01c" containerName="ceilometer-notification-agent" Nov 23 07:01:28 crc kubenswrapper[4681]: E1123 07:01:28.435975 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="462b970b-ce7f-444d-840e-3117d130e01c" containerName="proxy-httpd" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.436022 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="462b970b-ce7f-444d-840e-3117d130e01c" containerName="proxy-httpd" Nov 23 07:01:28 crc kubenswrapper[4681]: E1123 07:01:28.436083 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="462b970b-ce7f-444d-840e-3117d130e01c" containerName="sg-core" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 
07:01:28.436154 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="462b970b-ce7f-444d-840e-3117d130e01c" containerName="sg-core" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.436377 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="462b970b-ce7f-444d-840e-3117d130e01c" containerName="ceilometer-central-agent" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.436437 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="462b970b-ce7f-444d-840e-3117d130e01c" containerName="proxy-httpd" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.436506 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="462b970b-ce7f-444d-840e-3117d130e01c" containerName="sg-core" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.436563 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="462b970b-ce7f-444d-840e-3117d130e01c" containerName="ceilometer-notification-agent" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.438399 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.442286 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.442493 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.447208 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.448867 4681 scope.go:117] "RemoveContainer" containerID="bc90d3b37f9240320976164e25c12688fb8c523f935d0dc3a797b11103f89285" Nov 23 07:01:28 crc kubenswrapper[4681]: E1123 07:01:28.449416 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc90d3b37f9240320976164e25c12688fb8c523f935d0dc3a797b11103f89285\": container with ID starting with bc90d3b37f9240320976164e25c12688fb8c523f935d0dc3a797b11103f89285 not found: ID does not exist" containerID="bc90d3b37f9240320976164e25c12688fb8c523f935d0dc3a797b11103f89285" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.449445 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc90d3b37f9240320976164e25c12688fb8c523f935d0dc3a797b11103f89285"} err="failed to get container status \"bc90d3b37f9240320976164e25c12688fb8c523f935d0dc3a797b11103f89285\": rpc error: code = NotFound desc = could not find container \"bc90d3b37f9240320976164e25c12688fb8c523f935d0dc3a797b11103f89285\": container with ID starting with bc90d3b37f9240320976164e25c12688fb8c523f935d0dc3a797b11103f89285 not found: ID does not exist" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.449502 4681 scope.go:117] "RemoveContainer" containerID="4a01ec35f737dbbc453219ed7b9dd93ff5d3b52449fd3a95c2954634300f2364" Nov 23 07:01:28 crc kubenswrapper[4681]: E1123 07:01:28.451611 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a01ec35f737dbbc453219ed7b9dd93ff5d3b52449fd3a95c2954634300f2364\": container with ID starting with 4a01ec35f737dbbc453219ed7b9dd93ff5d3b52449fd3a95c2954634300f2364 not found: ID does not exist" containerID="4a01ec35f737dbbc453219ed7b9dd93ff5d3b52449fd3a95c2954634300f2364" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.451655 4681 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a01ec35f737dbbc453219ed7b9dd93ff5d3b52449fd3a95c2954634300f2364"} err="failed to get container status \"4a01ec35f737dbbc453219ed7b9dd93ff5d3b52449fd3a95c2954634300f2364\": rpc error: code = NotFound desc = could not find container \"4a01ec35f737dbbc453219ed7b9dd93ff5d3b52449fd3a95c2954634300f2364\": container with ID starting with 4a01ec35f737dbbc453219ed7b9dd93ff5d3b52449fd3a95c2954634300f2364 not found: ID does not exist" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.451689 4681 scope.go:117] "RemoveContainer" containerID="0817a43867954a5612c0312ca30c8b78f665924487e0460c81fbe994b61a2343" Nov 23 07:01:28 crc kubenswrapper[4681]: E1123 07:01:28.452308 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0817a43867954a5612c0312ca30c8b78f665924487e0460c81fbe994b61a2343\": container with ID starting with 0817a43867954a5612c0312ca30c8b78f665924487e0460c81fbe994b61a2343 not found: ID does not exist" containerID="0817a43867954a5612c0312ca30c8b78f665924487e0460c81fbe994b61a2343" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.452343 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0817a43867954a5612c0312ca30c8b78f665924487e0460c81fbe994b61a2343"} err="failed to get container status \"0817a43867954a5612c0312ca30c8b78f665924487e0460c81fbe994b61a2343\": rpc error: code = NotFound desc = could not find container \"0817a43867954a5612c0312ca30c8b78f665924487e0460c81fbe994b61a2343\": container with ID starting with 0817a43867954a5612c0312ca30c8b78f665924487e0460c81fbe994b61a2343 not found: ID does not exist" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.452370 4681 scope.go:117] "RemoveContainer" containerID="5a9e1569284dda0fbdbc0d013eb966710028eb280106ccb5d76b4fb0801a8dc3" Nov 23 07:01:28 crc kubenswrapper[4681]: E1123 07:01:28.452655 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a9e1569284dda0fbdbc0d013eb966710028eb280106ccb5d76b4fb0801a8dc3\": container with ID starting with 5a9e1569284dda0fbdbc0d013eb966710028eb280106ccb5d76b4fb0801a8dc3 not found: ID does not exist" containerID="5a9e1569284dda0fbdbc0d013eb966710028eb280106ccb5d76b4fb0801a8dc3" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.452677 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a9e1569284dda0fbdbc0d013eb966710028eb280106ccb5d76b4fb0801a8dc3"} err="failed to get container status \"5a9e1569284dda0fbdbc0d013eb966710028eb280106ccb5d76b4fb0801a8dc3\": rpc error: code = NotFound desc = could not find container \"5a9e1569284dda0fbdbc0d013eb966710028eb280106ccb5d76b4fb0801a8dc3\": container with ID starting with 5a9e1569284dda0fbdbc0d013eb966710028eb280106ccb5d76b4fb0801a8dc3 not found: ID does not exist" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.452699 4681 scope.go:117] "RemoveContainer" containerID="bc90d3b37f9240320976164e25c12688fb8c523f935d0dc3a797b11103f89285" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.459881 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc90d3b37f9240320976164e25c12688fb8c523f935d0dc3a797b11103f89285"} err="failed to get container status \"bc90d3b37f9240320976164e25c12688fb8c523f935d0dc3a797b11103f89285\": rpc error: code = NotFound desc = could 
not find container \"bc90d3b37f9240320976164e25c12688fb8c523f935d0dc3a797b11103f89285\": container with ID starting with bc90d3b37f9240320976164e25c12688fb8c523f935d0dc3a797b11103f89285 not found: ID does not exist" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.459914 4681 scope.go:117] "RemoveContainer" containerID="4a01ec35f737dbbc453219ed7b9dd93ff5d3b52449fd3a95c2954634300f2364" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.460230 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a01ec35f737dbbc453219ed7b9dd93ff5d3b52449fd3a95c2954634300f2364"} err="failed to get container status \"4a01ec35f737dbbc453219ed7b9dd93ff5d3b52449fd3a95c2954634300f2364\": rpc error: code = NotFound desc = could not find container \"4a01ec35f737dbbc453219ed7b9dd93ff5d3b52449fd3a95c2954634300f2364\": container with ID starting with 4a01ec35f737dbbc453219ed7b9dd93ff5d3b52449fd3a95c2954634300f2364 not found: ID does not exist" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.460251 4681 scope.go:117] "RemoveContainer" containerID="0817a43867954a5612c0312ca30c8b78f665924487e0460c81fbe994b61a2343" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.460576 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0817a43867954a5612c0312ca30c8b78f665924487e0460c81fbe994b61a2343"} err="failed to get container status \"0817a43867954a5612c0312ca30c8b78f665924487e0460c81fbe994b61a2343\": rpc error: code = NotFound desc = could not find container \"0817a43867954a5612c0312ca30c8b78f665924487e0460c81fbe994b61a2343\": container with ID starting with 0817a43867954a5612c0312ca30c8b78f665924487e0460c81fbe994b61a2343 not found: ID does not exist" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.460765 4681 scope.go:117] "RemoveContainer" containerID="5a9e1569284dda0fbdbc0d013eb966710028eb280106ccb5d76b4fb0801a8dc3" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.461900 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a9e1569284dda0fbdbc0d013eb966710028eb280106ccb5d76b4fb0801a8dc3"} err="failed to get container status \"5a9e1569284dda0fbdbc0d013eb966710028eb280106ccb5d76b4fb0801a8dc3\": rpc error: code = NotFound desc = could not find container \"5a9e1569284dda0fbdbc0d013eb966710028eb280106ccb5d76b4fb0801a8dc3\": container with ID starting with 5a9e1569284dda0fbdbc0d013eb966710028eb280106ccb5d76b4fb0801a8dc3 not found: ID does not exist" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.537486 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5vwk\" (UniqueName: \"kubernetes.io/projected/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-kube-api-access-b5vwk\") pod \"ceilometer-0\" (UID: \"5adbedf4-bd97-43af-a48b-b5e10ebff5b0\") " pod="openstack/ceilometer-0" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.537536 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5adbedf4-bd97-43af-a48b-b5e10ebff5b0\") " pod="openstack/ceilometer-0" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.537572 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5adbedf4-bd97-43af-a48b-b5e10ebff5b0\") " pod="openstack/ceilometer-0" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.537613 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-log-httpd\") pod \"ceilometer-0\" (UID: \"5adbedf4-bd97-43af-a48b-b5e10ebff5b0\") " pod="openstack/ceilometer-0" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.537908 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-config-data\") pod \"ceilometer-0\" (UID: \"5adbedf4-bd97-43af-a48b-b5e10ebff5b0\") " pod="openstack/ceilometer-0" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.537975 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-scripts\") pod \"ceilometer-0\" (UID: \"5adbedf4-bd97-43af-a48b-b5e10ebff5b0\") " pod="openstack/ceilometer-0" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.538014 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-run-httpd\") pod \"ceilometer-0\" (UID: \"5adbedf4-bd97-43af-a48b-b5e10ebff5b0\") " pod="openstack/ceilometer-0" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.638721 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-config-data\") pod \"ceilometer-0\" (UID: \"5adbedf4-bd97-43af-a48b-b5e10ebff5b0\") " pod="openstack/ceilometer-0" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.639342 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-run-httpd\") pod \"ceilometer-0\" (UID: \"5adbedf4-bd97-43af-a48b-b5e10ebff5b0\") " pod="openstack/ceilometer-0" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.639373 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-scripts\") pod \"ceilometer-0\" (UID: \"5adbedf4-bd97-43af-a48b-b5e10ebff5b0\") " pod="openstack/ceilometer-0" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.639437 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5vwk\" (UniqueName: \"kubernetes.io/projected/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-kube-api-access-b5vwk\") pod \"ceilometer-0\" (UID: \"5adbedf4-bd97-43af-a48b-b5e10ebff5b0\") " pod="openstack/ceilometer-0" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.639482 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5adbedf4-bd97-43af-a48b-b5e10ebff5b0\") " pod="openstack/ceilometer-0" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.639508 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5adbedf4-bd97-43af-a48b-b5e10ebff5b0\") " pod="openstack/ceilometer-0" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.639535 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-log-httpd\") pod \"ceilometer-0\" (UID: \"5adbedf4-bd97-43af-a48b-b5e10ebff5b0\") " pod="openstack/ceilometer-0" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.639959 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-run-httpd\") pod \"ceilometer-0\" (UID: \"5adbedf4-bd97-43af-a48b-b5e10ebff5b0\") " pod="openstack/ceilometer-0" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.639995 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-log-httpd\") pod \"ceilometer-0\" (UID: \"5adbedf4-bd97-43af-a48b-b5e10ebff5b0\") " pod="openstack/ceilometer-0" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.644622 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5adbedf4-bd97-43af-a48b-b5e10ebff5b0\") " pod="openstack/ceilometer-0" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.644663 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-scripts\") pod \"ceilometer-0\" (UID: \"5adbedf4-bd97-43af-a48b-b5e10ebff5b0\") " pod="openstack/ceilometer-0" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.646305 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5adbedf4-bd97-43af-a48b-b5e10ebff5b0\") " pod="openstack/ceilometer-0" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.646654 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-config-data\") pod \"ceilometer-0\" (UID: \"5adbedf4-bd97-43af-a48b-b5e10ebff5b0\") " pod="openstack/ceilometer-0" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.656220 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5vwk\" (UniqueName: \"kubernetes.io/projected/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-kube-api-access-b5vwk\") pod \"ceilometer-0\" (UID: \"5adbedf4-bd97-43af-a48b-b5e10ebff5b0\") " pod="openstack/ceilometer-0" Nov 23 07:01:28 crc kubenswrapper[4681]: I1123 07:01:28.760705 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:01:29 crc kubenswrapper[4681]: I1123 07:01:29.042638 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7c48d564b8-5tf9h" podUID="21819725-3a3a-448c-8bda-e78701b78360" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Nov 23 07:01:29 crc kubenswrapper[4681]: I1123 07:01:29.043035 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7c48d564b8-5tf9h" Nov 23 07:01:29 crc kubenswrapper[4681]: W1123 07:01:29.223476 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5adbedf4_bd97_43af_a48b_b5e10ebff5b0.slice/crio-003df355a62d9c918a17ce7d5f1372e07689254b58dbdbf67e6e39cb4317afe8 WatchSource:0}: Error finding container 003df355a62d9c918a17ce7d5f1372e07689254b58dbdbf67e6e39cb4317afe8: Status 404 returned error can't find the container with id 003df355a62d9c918a17ce7d5f1372e07689254b58dbdbf67e6e39cb4317afe8 Nov 23 07:01:29 crc kubenswrapper[4681]: I1123 07:01:29.234256 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:01:29 crc kubenswrapper[4681]: I1123 07:01:29.268220 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="462b970b-ce7f-444d-840e-3117d130e01c" path="/var/lib/kubelet/pods/462b970b-ce7f-444d-840e-3117d130e01c/volumes" Nov 23 07:01:29 crc kubenswrapper[4681]: I1123 07:01:29.371849 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5adbedf4-bd97-43af-a48b-b5e10ebff5b0","Type":"ContainerStarted","Data":"003df355a62d9c918a17ce7d5f1372e07689254b58dbdbf67e6e39cb4317afe8"} Nov 23 07:01:30 crc kubenswrapper[4681]: I1123 07:01:30.386042 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5adbedf4-bd97-43af-a48b-b5e10ebff5b0","Type":"ContainerStarted","Data":"a6e222594f4977c200ed4bf1e9723e4bac8df0eea3b24a21747416e475df6cf1"} Nov 23 07:01:31 crc kubenswrapper[4681]: I1123 07:01:31.408807 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5adbedf4-bd97-43af-a48b-b5e10ebff5b0","Type":"ContainerStarted","Data":"05486c5910d39a9ee7127374ddc4cbf3837e1b78c92f514f5a169140093384af"} Nov 23 07:01:32 crc kubenswrapper[4681]: I1123 07:01:32.419677 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5adbedf4-bd97-43af-a48b-b5e10ebff5b0","Type":"ContainerStarted","Data":"2e4d35097acf69c4484ccec2dbdd9b7be3c49401dd95618502befc485cadd1a3"} Nov 23 07:01:33 crc kubenswrapper[4681]: I1123 07:01:33.431816 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5adbedf4-bd97-43af-a48b-b5e10ebff5b0","Type":"ContainerStarted","Data":"1fb61f1d35ca6134ec6c9f256ba80b65c4eea6d4f7f95e5dea58c74eb118d6e6"} Nov 23 07:01:33 crc kubenswrapper[4681]: I1123 07:01:33.432858 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 23 07:01:33 crc kubenswrapper[4681]: I1123 07:01:33.459334 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.116762764 podStartE2EDuration="5.459286016s" podCreationTimestamp="2025-11-23 07:01:28 +0000 UTC" firstStartedPulling="2025-11-23 07:01:29.228102197 
+0000 UTC m=+1026.297611434" lastFinishedPulling="2025-11-23 07:01:32.570625449 +0000 UTC m=+1029.640134686" observedRunningTime="2025-11-23 07:01:33.45263683 +0000 UTC m=+1030.522146067" watchObservedRunningTime="2025-11-23 07:01:33.459286016 +0000 UTC m=+1030.528795253" Nov 23 07:01:35 crc kubenswrapper[4681]: I1123 07:01:35.138662 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c48d564b8-5tf9h" Nov 23 07:01:35 crc kubenswrapper[4681]: I1123 07:01:35.276645 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21819725-3a3a-448c-8bda-e78701b78360-logs\") pod \"21819725-3a3a-448c-8bda-e78701b78360\" (UID: \"21819725-3a3a-448c-8bda-e78701b78360\") " Nov 23 07:01:35 crc kubenswrapper[4681]: I1123 07:01:35.277218 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/21819725-3a3a-448c-8bda-e78701b78360-config-data\") pod \"21819725-3a3a-448c-8bda-e78701b78360\" (UID: \"21819725-3a3a-448c-8bda-e78701b78360\") " Nov 23 07:01:35 crc kubenswrapper[4681]: I1123 07:01:35.278033 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/21819725-3a3a-448c-8bda-e78701b78360-horizon-secret-key\") pod \"21819725-3a3a-448c-8bda-e78701b78360\" (UID: \"21819725-3a3a-448c-8bda-e78701b78360\") " Nov 23 07:01:35 crc kubenswrapper[4681]: I1123 07:01:35.278441 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21819725-3a3a-448c-8bda-e78701b78360-logs" (OuterVolumeSpecName: "logs") pod "21819725-3a3a-448c-8bda-e78701b78360" (UID: "21819725-3a3a-448c-8bda-e78701b78360"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:01:35 crc kubenswrapper[4681]: I1123 07:01:35.278676 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21819725-3a3a-448c-8bda-e78701b78360-combined-ca-bundle\") pod \"21819725-3a3a-448c-8bda-e78701b78360\" (UID: \"21819725-3a3a-448c-8bda-e78701b78360\") " Nov 23 07:01:35 crc kubenswrapper[4681]: I1123 07:01:35.278786 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/21819725-3a3a-448c-8bda-e78701b78360-horizon-tls-certs\") pod \"21819725-3a3a-448c-8bda-e78701b78360\" (UID: \"21819725-3a3a-448c-8bda-e78701b78360\") " Nov 23 07:01:35 crc kubenswrapper[4681]: I1123 07:01:35.278870 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swcf8\" (UniqueName: \"kubernetes.io/projected/21819725-3a3a-448c-8bda-e78701b78360-kube-api-access-swcf8\") pod \"21819725-3a3a-448c-8bda-e78701b78360\" (UID: \"21819725-3a3a-448c-8bda-e78701b78360\") " Nov 23 07:01:35 crc kubenswrapper[4681]: I1123 07:01:35.278928 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/21819725-3a3a-448c-8bda-e78701b78360-scripts\") pod \"21819725-3a3a-448c-8bda-e78701b78360\" (UID: \"21819725-3a3a-448c-8bda-e78701b78360\") " Nov 23 07:01:35 crc kubenswrapper[4681]: I1123 07:01:35.280715 4681 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21819725-3a3a-448c-8bda-e78701b78360-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:35 crc kubenswrapper[4681]: I1123 07:01:35.284579 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21819725-3a3a-448c-8bda-e78701b78360-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "21819725-3a3a-448c-8bda-e78701b78360" (UID: "21819725-3a3a-448c-8bda-e78701b78360"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:01:35 crc kubenswrapper[4681]: I1123 07:01:35.302984 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21819725-3a3a-448c-8bda-e78701b78360-kube-api-access-swcf8" (OuterVolumeSpecName: "kube-api-access-swcf8") pod "21819725-3a3a-448c-8bda-e78701b78360" (UID: "21819725-3a3a-448c-8bda-e78701b78360"). InnerVolumeSpecName "kube-api-access-swcf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:01:35 crc kubenswrapper[4681]: I1123 07:01:35.308097 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21819725-3a3a-448c-8bda-e78701b78360-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21819725-3a3a-448c-8bda-e78701b78360" (UID: "21819725-3a3a-448c-8bda-e78701b78360"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:01:35 crc kubenswrapper[4681]: I1123 07:01:35.311586 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21819725-3a3a-448c-8bda-e78701b78360-config-data" (OuterVolumeSpecName: "config-data") pod "21819725-3a3a-448c-8bda-e78701b78360" (UID: "21819725-3a3a-448c-8bda-e78701b78360"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:01:35 crc kubenswrapper[4681]: I1123 07:01:35.319306 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21819725-3a3a-448c-8bda-e78701b78360-scripts" (OuterVolumeSpecName: "scripts") pod "21819725-3a3a-448c-8bda-e78701b78360" (UID: "21819725-3a3a-448c-8bda-e78701b78360"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:01:35 crc kubenswrapper[4681]: I1123 07:01:35.327810 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21819725-3a3a-448c-8bda-e78701b78360-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "21819725-3a3a-448c-8bda-e78701b78360" (UID: "21819725-3a3a-448c-8bda-e78701b78360"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:01:35 crc kubenswrapper[4681]: I1123 07:01:35.382048 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/21819725-3a3a-448c-8bda-e78701b78360-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:35 crc kubenswrapper[4681]: I1123 07:01:35.382168 4681 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/21819725-3a3a-448c-8bda-e78701b78360-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:35 crc kubenswrapper[4681]: I1123 07:01:35.382252 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21819725-3a3a-448c-8bda-e78701b78360-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:35 crc kubenswrapper[4681]: I1123 07:01:35.382310 4681 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/21819725-3a3a-448c-8bda-e78701b78360-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:35 crc kubenswrapper[4681]: I1123 07:01:35.382362 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swcf8\" (UniqueName: \"kubernetes.io/projected/21819725-3a3a-448c-8bda-e78701b78360-kube-api-access-swcf8\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:35 crc kubenswrapper[4681]: I1123 07:01:35.382426 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/21819725-3a3a-448c-8bda-e78701b78360-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:35 crc kubenswrapper[4681]: I1123 07:01:35.451816 4681 generic.go:334] "Generic (PLEG): container finished" podID="21819725-3a3a-448c-8bda-e78701b78360" containerID="31c36592291e4d69d502aece2f0eb1b359b46e5ebc3744ea86b0b18dcdc77903" exitCode=137 Nov 23 07:01:35 crc kubenswrapper[4681]: I1123 07:01:35.451877 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c48d564b8-5tf9h" event={"ID":"21819725-3a3a-448c-8bda-e78701b78360","Type":"ContainerDied","Data":"31c36592291e4d69d502aece2f0eb1b359b46e5ebc3744ea86b0b18dcdc77903"} Nov 23 07:01:35 crc kubenswrapper[4681]: I1123 07:01:35.451935 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c48d564b8-5tf9h" event={"ID":"21819725-3a3a-448c-8bda-e78701b78360","Type":"ContainerDied","Data":"9bd64d993c1959bd532de5ce79ec6ecd4e771d56ba852e7dc4478ed5ae91185a"} Nov 23 07:01:35 crc kubenswrapper[4681]: I1123 07:01:35.451959 4681 scope.go:117] "RemoveContainer" 
containerID="f3d5a2229e581dacb0c110eea06b591475ee0f36e81c8e0364256d3b3c1f60ad" Nov 23 07:01:35 crc kubenswrapper[4681]: I1123 07:01:35.452109 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c48d564b8-5tf9h" Nov 23 07:01:35 crc kubenswrapper[4681]: I1123 07:01:35.488994 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7c48d564b8-5tf9h"] Nov 23 07:01:35 crc kubenswrapper[4681]: I1123 07:01:35.495000 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7c48d564b8-5tf9h"] Nov 23 07:01:35 crc kubenswrapper[4681]: I1123 07:01:35.640579 4681 scope.go:117] "RemoveContainer" containerID="31c36592291e4d69d502aece2f0eb1b359b46e5ebc3744ea86b0b18dcdc77903" Nov 23 07:01:35 crc kubenswrapper[4681]: I1123 07:01:35.660444 4681 scope.go:117] "RemoveContainer" containerID="f3d5a2229e581dacb0c110eea06b591475ee0f36e81c8e0364256d3b3c1f60ad" Nov 23 07:01:35 crc kubenswrapper[4681]: E1123 07:01:35.660872 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3d5a2229e581dacb0c110eea06b591475ee0f36e81c8e0364256d3b3c1f60ad\": container with ID starting with f3d5a2229e581dacb0c110eea06b591475ee0f36e81c8e0364256d3b3c1f60ad not found: ID does not exist" containerID="f3d5a2229e581dacb0c110eea06b591475ee0f36e81c8e0364256d3b3c1f60ad" Nov 23 07:01:35 crc kubenswrapper[4681]: I1123 07:01:35.660914 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3d5a2229e581dacb0c110eea06b591475ee0f36e81c8e0364256d3b3c1f60ad"} err="failed to get container status \"f3d5a2229e581dacb0c110eea06b591475ee0f36e81c8e0364256d3b3c1f60ad\": rpc error: code = NotFound desc = could not find container \"f3d5a2229e581dacb0c110eea06b591475ee0f36e81c8e0364256d3b3c1f60ad\": container with ID starting with f3d5a2229e581dacb0c110eea06b591475ee0f36e81c8e0364256d3b3c1f60ad not found: ID does not exist" Nov 23 07:01:35 crc kubenswrapper[4681]: I1123 07:01:35.660961 4681 scope.go:117] "RemoveContainer" containerID="31c36592291e4d69d502aece2f0eb1b359b46e5ebc3744ea86b0b18dcdc77903" Nov 23 07:01:35 crc kubenswrapper[4681]: E1123 07:01:35.661397 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31c36592291e4d69d502aece2f0eb1b359b46e5ebc3744ea86b0b18dcdc77903\": container with ID starting with 31c36592291e4d69d502aece2f0eb1b359b46e5ebc3744ea86b0b18dcdc77903 not found: ID does not exist" containerID="31c36592291e4d69d502aece2f0eb1b359b46e5ebc3744ea86b0b18dcdc77903" Nov 23 07:01:35 crc kubenswrapper[4681]: I1123 07:01:35.661452 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31c36592291e4d69d502aece2f0eb1b359b46e5ebc3744ea86b0b18dcdc77903"} err="failed to get container status \"31c36592291e4d69d502aece2f0eb1b359b46e5ebc3744ea86b0b18dcdc77903\": rpc error: code = NotFound desc = could not find container \"31c36592291e4d69d502aece2f0eb1b359b46e5ebc3744ea86b0b18dcdc77903\": container with ID starting with 31c36592291e4d69d502aece2f0eb1b359b46e5ebc3744ea86b0b18dcdc77903 not found: ID does not exist" Nov 23 07:01:36 crc kubenswrapper[4681]: I1123 07:01:36.751172 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.228393 4681 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell0-cell-mapping-n5wff"] Nov 23 07:01:37 crc kubenswrapper[4681]: E1123 07:01:37.229064 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21819725-3a3a-448c-8bda-e78701b78360" containerName="horizon-log" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.229083 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="21819725-3a3a-448c-8bda-e78701b78360" containerName="horizon-log" Nov 23 07:01:37 crc kubenswrapper[4681]: E1123 07:01:37.229107 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21819725-3a3a-448c-8bda-e78701b78360" containerName="horizon" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.229114 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="21819725-3a3a-448c-8bda-e78701b78360" containerName="horizon" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.229253 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="21819725-3a3a-448c-8bda-e78701b78360" containerName="horizon-log" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.229285 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="21819725-3a3a-448c-8bda-e78701b78360" containerName="horizon" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.229896 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-n5wff" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.232176 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.232630 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.242667 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-n5wff"] Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.267163 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21819725-3a3a-448c-8bda-e78701b78360" path="/var/lib/kubelet/pods/21819725-3a3a-448c-8bda-e78701b78360/volumes" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.319679 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f0f8d82-e774-42f2-b0a4-df7abb1ce348-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-n5wff\" (UID: \"1f0f8d82-e774-42f2-b0a4-df7abb1ce348\") " pod="openstack/nova-cell0-cell-mapping-n5wff" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.319775 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pklfk\" (UniqueName: \"kubernetes.io/projected/1f0f8d82-e774-42f2-b0a4-df7abb1ce348-kube-api-access-pklfk\") pod \"nova-cell0-cell-mapping-n5wff\" (UID: \"1f0f8d82-e774-42f2-b0a4-df7abb1ce348\") " pod="openstack/nova-cell0-cell-mapping-n5wff" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.319802 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f0f8d82-e774-42f2-b0a4-df7abb1ce348-scripts\") pod \"nova-cell0-cell-mapping-n5wff\" (UID: \"1f0f8d82-e774-42f2-b0a4-df7abb1ce348\") " pod="openstack/nova-cell0-cell-mapping-n5wff" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.319878 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/1f0f8d82-e774-42f2-b0a4-df7abb1ce348-config-data\") pod \"nova-cell0-cell-mapping-n5wff\" (UID: \"1f0f8d82-e774-42f2-b0a4-df7abb1ce348\") " pod="openstack/nova-cell0-cell-mapping-n5wff" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.356738 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.358479 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.362874 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.380357 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.420784 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f1a5523-438a-46bb-b55e-3d34d2ae1a4f-config-data\") pod \"nova-scheduler-0\" (UID: \"6f1a5523-438a-46bb-b55e-3d34d2ae1a4f\") " pod="openstack/nova-scheduler-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.420831 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f0f8d82-e774-42f2-b0a4-df7abb1ce348-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-n5wff\" (UID: \"1f0f8d82-e774-42f2-b0a4-df7abb1ce348\") " pod="openstack/nova-cell0-cell-mapping-n5wff" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.420893 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pklfk\" (UniqueName: \"kubernetes.io/projected/1f0f8d82-e774-42f2-b0a4-df7abb1ce348-kube-api-access-pklfk\") pod \"nova-cell0-cell-mapping-n5wff\" (UID: \"1f0f8d82-e774-42f2-b0a4-df7abb1ce348\") " pod="openstack/nova-cell0-cell-mapping-n5wff" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.420917 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f0f8d82-e774-42f2-b0a4-df7abb1ce348-scripts\") pod \"nova-cell0-cell-mapping-n5wff\" (UID: \"1f0f8d82-e774-42f2-b0a4-df7abb1ce348\") " pod="openstack/nova-cell0-cell-mapping-n5wff" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.420972 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4nmg\" (UniqueName: \"kubernetes.io/projected/6f1a5523-438a-46bb-b55e-3d34d2ae1a4f-kube-api-access-k4nmg\") pod \"nova-scheduler-0\" (UID: \"6f1a5523-438a-46bb-b55e-3d34d2ae1a4f\") " pod="openstack/nova-scheduler-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.421009 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f0f8d82-e774-42f2-b0a4-df7abb1ce348-config-data\") pod \"nova-cell0-cell-mapping-n5wff\" (UID: \"1f0f8d82-e774-42f2-b0a4-df7abb1ce348\") " pod="openstack/nova-cell0-cell-mapping-n5wff" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.421029 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f1a5523-438a-46bb-b55e-3d34d2ae1a4f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"6f1a5523-438a-46bb-b55e-3d34d2ae1a4f\") " pod="openstack/nova-scheduler-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.431250 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f0f8d82-e774-42f2-b0a4-df7abb1ce348-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-n5wff\" (UID: \"1f0f8d82-e774-42f2-b0a4-df7abb1ce348\") " pod="openstack/nova-cell0-cell-mapping-n5wff" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.431529 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f0f8d82-e774-42f2-b0a4-df7abb1ce348-config-data\") pod \"nova-cell0-cell-mapping-n5wff\" (UID: \"1f0f8d82-e774-42f2-b0a4-df7abb1ce348\") " pod="openstack/nova-cell0-cell-mapping-n5wff" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.431835 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f0f8d82-e774-42f2-b0a4-df7abb1ce348-scripts\") pod \"nova-cell0-cell-mapping-n5wff\" (UID: \"1f0f8d82-e774-42f2-b0a4-df7abb1ce348\") " pod="openstack/nova-cell0-cell-mapping-n5wff" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.457926 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pklfk\" (UniqueName: \"kubernetes.io/projected/1f0f8d82-e774-42f2-b0a4-df7abb1ce348-kube-api-access-pklfk\") pod \"nova-cell0-cell-mapping-n5wff\" (UID: \"1f0f8d82-e774-42f2-b0a4-df7abb1ce348\") " pod="openstack/nova-cell0-cell-mapping-n5wff" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.513096 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.514556 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.523885 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.534142 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4nmg\" (UniqueName: \"kubernetes.io/projected/6f1a5523-438a-46bb-b55e-3d34d2ae1a4f-kube-api-access-k4nmg\") pod \"nova-scheduler-0\" (UID: \"6f1a5523-438a-46bb-b55e-3d34d2ae1a4f\") " pod="openstack/nova-scheduler-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.534205 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f1a5523-438a-46bb-b55e-3d34d2ae1a4f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6f1a5523-438a-46bb-b55e-3d34d2ae1a4f\") " pod="openstack/nova-scheduler-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.534259 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfb8n\" (UniqueName: \"kubernetes.io/projected/1b15ac91-9e57-4a6a-95df-49c853fcbb12-kube-api-access-cfb8n\") pod \"nova-cell1-novncproxy-0\" (UID: \"1b15ac91-9e57-4a6a-95df-49c853fcbb12\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.534325 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b15ac91-9e57-4a6a-95df-49c853fcbb12-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1b15ac91-9e57-4a6a-95df-49c853fcbb12\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.534437 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f1a5523-438a-46bb-b55e-3d34d2ae1a4f-config-data\") pod \"nova-scheduler-0\" (UID: \"6f1a5523-438a-46bb-b55e-3d34d2ae1a4f\") " pod="openstack/nova-scheduler-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.534557 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b15ac91-9e57-4a6a-95df-49c853fcbb12-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1b15ac91-9e57-4a6a-95df-49c853fcbb12\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.535117 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.536673 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.542268 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f1a5523-438a-46bb-b55e-3d34d2ae1a4f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6f1a5523-438a-46bb-b55e-3d34d2ae1a4f\") " pod="openstack/nova-scheduler-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.549476 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f1a5523-438a-46bb-b55e-3d34d2ae1a4f-config-data\") pod \"nova-scheduler-0\" (UID: \"6f1a5523-438a-46bb-b55e-3d34d2ae1a4f\") " pod="openstack/nova-scheduler-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.550047 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.553806 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-n5wff" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.581056 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4nmg\" (UniqueName: \"kubernetes.io/projected/6f1a5523-438a-46bb-b55e-3d34d2ae1a4f-kube-api-access-k4nmg\") pod \"nova-scheduler-0\" (UID: \"6f1a5523-438a-46bb-b55e-3d34d2ae1a4f\") " pod="openstack/nova-scheduler-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.621720 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.639833 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfb8n\" (UniqueName: \"kubernetes.io/projected/1b15ac91-9e57-4a6a-95df-49c853fcbb12-kube-api-access-cfb8n\") pod \"nova-cell1-novncproxy-0\" (UID: \"1b15ac91-9e57-4a6a-95df-49c853fcbb12\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.640178 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b15ac91-9e57-4a6a-95df-49c853fcbb12-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1b15ac91-9e57-4a6a-95df-49c853fcbb12\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.640403 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b15ac91-9e57-4a6a-95df-49c853fcbb12-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1b15ac91-9e57-4a6a-95df-49c853fcbb12\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.666142 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b15ac91-9e57-4a6a-95df-49c853fcbb12-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1b15ac91-9e57-4a6a-95df-49c853fcbb12\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.673079 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b15ac91-9e57-4a6a-95df-49c853fcbb12-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1b15ac91-9e57-4a6a-95df-49c853fcbb12\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:01:37 
crc kubenswrapper[4681]: I1123 07:01:37.685004 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfb8n\" (UniqueName: \"kubernetes.io/projected/1b15ac91-9e57-4a6a-95df-49c853fcbb12-kube-api-access-cfb8n\") pod \"nova-cell1-novncproxy-0\" (UID: \"1b15ac91-9e57-4a6a-95df-49c853fcbb12\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.694438 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.694933 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.719420 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.721026 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.723281 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.744047 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37028da3-dcde-4435-8a68-63d6cee55257-config-data\") pod \"nova-metadata-0\" (UID: \"37028da3-dcde-4435-8a68-63d6cee55257\") " pod="openstack/nova-metadata-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.744141 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b8c8397-3882-47df-9ba8-47f43dfed573-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0b8c8397-3882-47df-9ba8-47f43dfed573\") " pod="openstack/nova-api-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.744165 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b8c8397-3882-47df-9ba8-47f43dfed573-logs\") pod \"nova-api-0\" (UID: \"0b8c8397-3882-47df-9ba8-47f43dfed573\") " pod="openstack/nova-api-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.744186 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd8qp\" (UniqueName: \"kubernetes.io/projected/0b8c8397-3882-47df-9ba8-47f43dfed573-kube-api-access-pd8qp\") pod \"nova-api-0\" (UID: \"0b8c8397-3882-47df-9ba8-47f43dfed573\") " pod="openstack/nova-api-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.744207 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b8c8397-3882-47df-9ba8-47f43dfed573-config-data\") pod \"nova-api-0\" (UID: \"0b8c8397-3882-47df-9ba8-47f43dfed573\") " pod="openstack/nova-api-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.744252 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37028da3-dcde-4435-8a68-63d6cee55257-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"37028da3-dcde-4435-8a68-63d6cee55257\") " pod="openstack/nova-metadata-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.744290 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47hzj\" (UniqueName: \"kubernetes.io/projected/37028da3-dcde-4435-8a68-63d6cee55257-kube-api-access-47hzj\") pod \"nova-metadata-0\" (UID: \"37028da3-dcde-4435-8a68-63d6cee55257\") " pod="openstack/nova-metadata-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.744358 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37028da3-dcde-4435-8a68-63d6cee55257-logs\") pod \"nova-metadata-0\" (UID: \"37028da3-dcde-4435-8a68-63d6cee55257\") " pod="openstack/nova-metadata-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.769171 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.809998 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.836644 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6ff89994d9-cs2z8"] Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.841896 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ff89994d9-cs2z8" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.846090 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/759adc42-6c4c-4c47-b7d7-ec5eef16623a-dns-svc\") pod \"dnsmasq-dns-6ff89994d9-cs2z8\" (UID: \"759adc42-6c4c-4c47-b7d7-ec5eef16623a\") " pod="openstack/dnsmasq-dns-6ff89994d9-cs2z8" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.846833 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37028da3-dcde-4435-8a68-63d6cee55257-logs\") pod \"nova-metadata-0\" (UID: \"37028da3-dcde-4435-8a68-63d6cee55257\") " pod="openstack/nova-metadata-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.846931 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37028da3-dcde-4435-8a68-63d6cee55257-config-data\") pod \"nova-metadata-0\" (UID: \"37028da3-dcde-4435-8a68-63d6cee55257\") " pod="openstack/nova-metadata-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.847038 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/759adc42-6c4c-4c47-b7d7-ec5eef16623a-dns-swift-storage-0\") pod \"dnsmasq-dns-6ff89994d9-cs2z8\" (UID: \"759adc42-6c4c-4c47-b7d7-ec5eef16623a\") " pod="openstack/dnsmasq-dns-6ff89994d9-cs2z8" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.847068 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lb9p\" (UniqueName: \"kubernetes.io/projected/759adc42-6c4c-4c47-b7d7-ec5eef16623a-kube-api-access-9lb9p\") pod \"dnsmasq-dns-6ff89994d9-cs2z8\" (UID: \"759adc42-6c4c-4c47-b7d7-ec5eef16623a\") " pod="openstack/dnsmasq-dns-6ff89994d9-cs2z8" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.847127 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b8c8397-3882-47df-9ba8-47f43dfed573-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"0b8c8397-3882-47df-9ba8-47f43dfed573\") " pod="openstack/nova-api-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.847144 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b8c8397-3882-47df-9ba8-47f43dfed573-logs\") pod \"nova-api-0\" (UID: \"0b8c8397-3882-47df-9ba8-47f43dfed573\") " pod="openstack/nova-api-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.847189 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd8qp\" (UniqueName: \"kubernetes.io/projected/0b8c8397-3882-47df-9ba8-47f43dfed573-kube-api-access-pd8qp\") pod \"nova-api-0\" (UID: \"0b8c8397-3882-47df-9ba8-47f43dfed573\") " pod="openstack/nova-api-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.847214 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/759adc42-6c4c-4c47-b7d7-ec5eef16623a-ovsdbserver-nb\") pod \"dnsmasq-dns-6ff89994d9-cs2z8\" (UID: \"759adc42-6c4c-4c47-b7d7-ec5eef16623a\") " pod="openstack/dnsmasq-dns-6ff89994d9-cs2z8" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.847259 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b8c8397-3882-47df-9ba8-47f43dfed573-config-data\") pod \"nova-api-0\" (UID: \"0b8c8397-3882-47df-9ba8-47f43dfed573\") " pod="openstack/nova-api-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.847286 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/759adc42-6c4c-4c47-b7d7-ec5eef16623a-ovsdbserver-sb\") pod \"dnsmasq-dns-6ff89994d9-cs2z8\" (UID: \"759adc42-6c4c-4c47-b7d7-ec5eef16623a\") " pod="openstack/dnsmasq-dns-6ff89994d9-cs2z8" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.847386 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37028da3-dcde-4435-8a68-63d6cee55257-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"37028da3-dcde-4435-8a68-63d6cee55257\") " pod="openstack/nova-metadata-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.847496 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47hzj\" (UniqueName: \"kubernetes.io/projected/37028da3-dcde-4435-8a68-63d6cee55257-kube-api-access-47hzj\") pod \"nova-metadata-0\" (UID: \"37028da3-dcde-4435-8a68-63d6cee55257\") " pod="openstack/nova-metadata-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.847523 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/759adc42-6c4c-4c47-b7d7-ec5eef16623a-config\") pod \"dnsmasq-dns-6ff89994d9-cs2z8\" (UID: \"759adc42-6c4c-4c47-b7d7-ec5eef16623a\") " pod="openstack/dnsmasq-dns-6ff89994d9-cs2z8" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.848244 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37028da3-dcde-4435-8a68-63d6cee55257-logs\") pod \"nova-metadata-0\" (UID: \"37028da3-dcde-4435-8a68-63d6cee55257\") " pod="openstack/nova-metadata-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.854349 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/0b8c8397-3882-47df-9ba8-47f43dfed573-logs\") pod \"nova-api-0\" (UID: \"0b8c8397-3882-47df-9ba8-47f43dfed573\") " pod="openstack/nova-api-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.855130 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b8c8397-3882-47df-9ba8-47f43dfed573-config-data\") pod \"nova-api-0\" (UID: \"0b8c8397-3882-47df-9ba8-47f43dfed573\") " pod="openstack/nova-api-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.855430 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37028da3-dcde-4435-8a68-63d6cee55257-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"37028da3-dcde-4435-8a68-63d6cee55257\") " pod="openstack/nova-metadata-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.861358 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37028da3-dcde-4435-8a68-63d6cee55257-config-data\") pod \"nova-metadata-0\" (UID: \"37028da3-dcde-4435-8a68-63d6cee55257\") " pod="openstack/nova-metadata-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.861440 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ff89994d9-cs2z8"] Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.862988 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b8c8397-3882-47df-9ba8-47f43dfed573-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0b8c8397-3882-47df-9ba8-47f43dfed573\") " pod="openstack/nova-api-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.874640 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47hzj\" (UniqueName: \"kubernetes.io/projected/37028da3-dcde-4435-8a68-63d6cee55257-kube-api-access-47hzj\") pod \"nova-metadata-0\" (UID: \"37028da3-dcde-4435-8a68-63d6cee55257\") " pod="openstack/nova-metadata-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.923581 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd8qp\" (UniqueName: \"kubernetes.io/projected/0b8c8397-3882-47df-9ba8-47f43dfed573-kube-api-access-pd8qp\") pod \"nova-api-0\" (UID: \"0b8c8397-3882-47df-9ba8-47f43dfed573\") " pod="openstack/nova-api-0" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.949215 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/759adc42-6c4c-4c47-b7d7-ec5eef16623a-config\") pod \"dnsmasq-dns-6ff89994d9-cs2z8\" (UID: \"759adc42-6c4c-4c47-b7d7-ec5eef16623a\") " pod="openstack/dnsmasq-dns-6ff89994d9-cs2z8" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.949437 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/759adc42-6c4c-4c47-b7d7-ec5eef16623a-dns-svc\") pod \"dnsmasq-dns-6ff89994d9-cs2z8\" (UID: \"759adc42-6c4c-4c47-b7d7-ec5eef16623a\") " pod="openstack/dnsmasq-dns-6ff89994d9-cs2z8" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.949550 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/759adc42-6c4c-4c47-b7d7-ec5eef16623a-dns-swift-storage-0\") pod \"dnsmasq-dns-6ff89994d9-cs2z8\" (UID: 
\"759adc42-6c4c-4c47-b7d7-ec5eef16623a\") " pod="openstack/dnsmasq-dns-6ff89994d9-cs2z8" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.949574 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lb9p\" (UniqueName: \"kubernetes.io/projected/759adc42-6c4c-4c47-b7d7-ec5eef16623a-kube-api-access-9lb9p\") pod \"dnsmasq-dns-6ff89994d9-cs2z8\" (UID: \"759adc42-6c4c-4c47-b7d7-ec5eef16623a\") " pod="openstack/dnsmasq-dns-6ff89994d9-cs2z8" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.949612 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/759adc42-6c4c-4c47-b7d7-ec5eef16623a-ovsdbserver-nb\") pod \"dnsmasq-dns-6ff89994d9-cs2z8\" (UID: \"759adc42-6c4c-4c47-b7d7-ec5eef16623a\") " pod="openstack/dnsmasq-dns-6ff89994d9-cs2z8" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.949645 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/759adc42-6c4c-4c47-b7d7-ec5eef16623a-ovsdbserver-sb\") pod \"dnsmasq-dns-6ff89994d9-cs2z8\" (UID: \"759adc42-6c4c-4c47-b7d7-ec5eef16623a\") " pod="openstack/dnsmasq-dns-6ff89994d9-cs2z8" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.950372 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/759adc42-6c4c-4c47-b7d7-ec5eef16623a-ovsdbserver-sb\") pod \"dnsmasq-dns-6ff89994d9-cs2z8\" (UID: \"759adc42-6c4c-4c47-b7d7-ec5eef16623a\") " pod="openstack/dnsmasq-dns-6ff89994d9-cs2z8" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.950860 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/759adc42-6c4c-4c47-b7d7-ec5eef16623a-dns-svc\") pod \"dnsmasq-dns-6ff89994d9-cs2z8\" (UID: \"759adc42-6c4c-4c47-b7d7-ec5eef16623a\") " pod="openstack/dnsmasq-dns-6ff89994d9-cs2z8" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.951350 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/759adc42-6c4c-4c47-b7d7-ec5eef16623a-dns-swift-storage-0\") pod \"dnsmasq-dns-6ff89994d9-cs2z8\" (UID: \"759adc42-6c4c-4c47-b7d7-ec5eef16623a\") " pod="openstack/dnsmasq-dns-6ff89994d9-cs2z8" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.952318 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/759adc42-6c4c-4c47-b7d7-ec5eef16623a-ovsdbserver-nb\") pod \"dnsmasq-dns-6ff89994d9-cs2z8\" (UID: \"759adc42-6c4c-4c47-b7d7-ec5eef16623a\") " pod="openstack/dnsmasq-dns-6ff89994d9-cs2z8" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.954375 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/759adc42-6c4c-4c47-b7d7-ec5eef16623a-config\") pod \"dnsmasq-dns-6ff89994d9-cs2z8\" (UID: \"759adc42-6c4c-4c47-b7d7-ec5eef16623a\") " pod="openstack/dnsmasq-dns-6ff89994d9-cs2z8" Nov 23 07:01:37 crc kubenswrapper[4681]: I1123 07:01:37.991136 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lb9p\" (UniqueName: \"kubernetes.io/projected/759adc42-6c4c-4c47-b7d7-ec5eef16623a-kube-api-access-9lb9p\") pod \"dnsmasq-dns-6ff89994d9-cs2z8\" (UID: \"759adc42-6c4c-4c47-b7d7-ec5eef16623a\") " pod="openstack/dnsmasq-dns-6ff89994d9-cs2z8" Nov 
23 07:01:38 crc kubenswrapper[4681]: I1123 07:01:38.006814 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 07:01:38 crc kubenswrapper[4681]: I1123 07:01:38.041015 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:01:38 crc kubenswrapper[4681]: I1123 07:01:38.225507 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ff89994d9-cs2z8" Nov 23 07:01:38 crc kubenswrapper[4681]: I1123 07:01:38.442645 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-n5wff"] Nov 23 07:01:38 crc kubenswrapper[4681]: I1123 07:01:38.452506 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 07:01:38 crc kubenswrapper[4681]: I1123 07:01:38.528564 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-n5wff" event={"ID":"1f0f8d82-e774-42f2-b0a4-df7abb1ce348","Type":"ContainerStarted","Data":"b056eb5be76ea6ac33a8584320a5bc85ebaa76d4522f29e1f4d011a6977aaa11"} Nov 23 07:01:38 crc kubenswrapper[4681]: I1123 07:01:38.531690 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1b15ac91-9e57-4a6a-95df-49c853fcbb12","Type":"ContainerStarted","Data":"6fb72057016eb6c31e5c4c87319f463e3cece1e7bdfe45260cf1ab1f955f08bf"} Nov 23 07:01:38 crc kubenswrapper[4681]: I1123 07:01:38.533954 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:01:38 crc kubenswrapper[4681]: I1123 07:01:38.713067 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:01:38 crc kubenswrapper[4681]: I1123 07:01:38.728733 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:01:38 crc kubenswrapper[4681]: I1123 07:01:38.766601 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-sl56s"] Nov 23 07:01:38 crc kubenswrapper[4681]: I1123 07:01:38.768741 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-sl56s" Nov 23 07:01:38 crc kubenswrapper[4681]: I1123 07:01:38.772573 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 23 07:01:38 crc kubenswrapper[4681]: I1123 07:01:38.774003 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 23 07:01:38 crc kubenswrapper[4681]: I1123 07:01:38.780966 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb-config-data\") pod \"nova-cell1-conductor-db-sync-sl56s\" (UID: \"b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb\") " pod="openstack/nova-cell1-conductor-db-sync-sl56s" Nov 23 07:01:38 crc kubenswrapper[4681]: I1123 07:01:38.782061 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb-scripts\") pod \"nova-cell1-conductor-db-sync-sl56s\" (UID: \"b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb\") " pod="openstack/nova-cell1-conductor-db-sync-sl56s" Nov 23 07:01:38 crc kubenswrapper[4681]: I1123 07:01:38.782226 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxnmd\" (UniqueName: \"kubernetes.io/projected/b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb-kube-api-access-dxnmd\") pod \"nova-cell1-conductor-db-sync-sl56s\" (UID: \"b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb\") " pod="openstack/nova-cell1-conductor-db-sync-sl56s" Nov 23 07:01:38 crc kubenswrapper[4681]: I1123 07:01:38.782342 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-sl56s\" (UID: \"b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb\") " pod="openstack/nova-cell1-conductor-db-sync-sl56s" Nov 23 07:01:38 crc kubenswrapper[4681]: I1123 07:01:38.808531 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-sl56s"] Nov 23 07:01:38 crc kubenswrapper[4681]: I1123 07:01:38.845890 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ff89994d9-cs2z8"] Nov 23 07:01:38 crc kubenswrapper[4681]: I1123 07:01:38.883696 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-sl56s\" (UID: \"b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb\") " pod="openstack/nova-cell1-conductor-db-sync-sl56s" Nov 23 07:01:38 crc kubenswrapper[4681]: I1123 07:01:38.884065 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb-config-data\") pod \"nova-cell1-conductor-db-sync-sl56s\" (UID: \"b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb\") " pod="openstack/nova-cell1-conductor-db-sync-sl56s" Nov 23 07:01:38 crc kubenswrapper[4681]: I1123 07:01:38.884187 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb-scripts\") pod \"nova-cell1-conductor-db-sync-sl56s\" (UID: 
\"b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb\") " pod="openstack/nova-cell1-conductor-db-sync-sl56s" Nov 23 07:01:38 crc kubenswrapper[4681]: I1123 07:01:38.884275 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxnmd\" (UniqueName: \"kubernetes.io/projected/b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb-kube-api-access-dxnmd\") pod \"nova-cell1-conductor-db-sync-sl56s\" (UID: \"b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb\") " pod="openstack/nova-cell1-conductor-db-sync-sl56s" Nov 23 07:01:38 crc kubenswrapper[4681]: I1123 07:01:38.887111 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb-config-data\") pod \"nova-cell1-conductor-db-sync-sl56s\" (UID: \"b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb\") " pod="openstack/nova-cell1-conductor-db-sync-sl56s" Nov 23 07:01:38 crc kubenswrapper[4681]: I1123 07:01:38.886843 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb-scripts\") pod \"nova-cell1-conductor-db-sync-sl56s\" (UID: \"b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb\") " pod="openstack/nova-cell1-conductor-db-sync-sl56s" Nov 23 07:01:38 crc kubenswrapper[4681]: I1123 07:01:38.887395 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-sl56s\" (UID: \"b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb\") " pod="openstack/nova-cell1-conductor-db-sync-sl56s" Nov 23 07:01:38 crc kubenswrapper[4681]: I1123 07:01:38.898544 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxnmd\" (UniqueName: \"kubernetes.io/projected/b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb-kube-api-access-dxnmd\") pod \"nova-cell1-conductor-db-sync-sl56s\" (UID: \"b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb\") " pod="openstack/nova-cell1-conductor-db-sync-sl56s" Nov 23 07:01:39 crc kubenswrapper[4681]: I1123 07:01:39.115588 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-sl56s" Nov 23 07:01:39 crc kubenswrapper[4681]: I1123 07:01:39.558414 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0b8c8397-3882-47df-9ba8-47f43dfed573","Type":"ContainerStarted","Data":"9ba094801bf341badffb28d053483581d2e5e904b77e3524fbeb785d5e1b803f"} Nov 23 07:01:39 crc kubenswrapper[4681]: I1123 07:01:39.562530 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-n5wff" event={"ID":"1f0f8d82-e774-42f2-b0a4-df7abb1ce348","Type":"ContainerStarted","Data":"183b917c45087d9603a7ee2e288f12a5785273ec6893fe5851e96e98dbbba738"} Nov 23 07:01:39 crc kubenswrapper[4681]: I1123 07:01:39.564430 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6f1a5523-438a-46bb-b55e-3d34d2ae1a4f","Type":"ContainerStarted","Data":"8fcc9b6e54f8717e83ed81b4215e24a0f6118489cee7a9a3d3095b16a1b0c1fb"} Nov 23 07:01:39 crc kubenswrapper[4681]: I1123 07:01:39.567895 4681 generic.go:334] "Generic (PLEG): container finished" podID="759adc42-6c4c-4c47-b7d7-ec5eef16623a" containerID="72063394ebbfc079b71a6c6320edfd1851027c67ff111bae146384a315332d8a" exitCode=0 Nov 23 07:01:39 crc kubenswrapper[4681]: I1123 07:01:39.567937 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff89994d9-cs2z8" event={"ID":"759adc42-6c4c-4c47-b7d7-ec5eef16623a","Type":"ContainerDied","Data":"72063394ebbfc079b71a6c6320edfd1851027c67ff111bae146384a315332d8a"} Nov 23 07:01:39 crc kubenswrapper[4681]: I1123 07:01:39.567954 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff89994d9-cs2z8" event={"ID":"759adc42-6c4c-4c47-b7d7-ec5eef16623a","Type":"ContainerStarted","Data":"a74699858628156d6d4f8bfff5c36cb95953baf11641317842c447f95c8c5dda"} Nov 23 07:01:39 crc kubenswrapper[4681]: I1123 07:01:39.572678 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"37028da3-dcde-4435-8a68-63d6cee55257","Type":"ContainerStarted","Data":"6c04f4f9a3751740471c4c19779c1082364b83bb46d921ef4553bf4c43bc4e50"} Nov 23 07:01:39 crc kubenswrapper[4681]: I1123 07:01:39.589927 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-n5wff" podStartSLOduration=2.5899158939999998 podStartE2EDuration="2.589915894s" podCreationTimestamp="2025-11-23 07:01:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:01:39.585220902 +0000 UTC m=+1036.654730129" watchObservedRunningTime="2025-11-23 07:01:39.589915894 +0000 UTC m=+1036.659425131" Nov 23 07:01:39 crc kubenswrapper[4681]: I1123 07:01:39.621561 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-sl56s"] Nov 23 07:01:40 crc kubenswrapper[4681]: I1123 07:01:40.584201 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-sl56s" event={"ID":"b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb","Type":"ContainerStarted","Data":"78ebc7f0561d1808407ceae2ed83d51282c96af68e61a7baf50feba9c9957090"} Nov 23 07:01:40 crc kubenswrapper[4681]: I1123 07:01:40.584518 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-sl56s" 
event={"ID":"b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb","Type":"ContainerStarted","Data":"f3ae09ecf5d74e7cfcdac2275ab1e00582bd02d25ed5f8e50f5379ae0a067431"} Nov 23 07:01:40 crc kubenswrapper[4681]: I1123 07:01:40.604217 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff89994d9-cs2z8" event={"ID":"759adc42-6c4c-4c47-b7d7-ec5eef16623a","Type":"ContainerStarted","Data":"2d0a328ee937bb96b94c26952a1e44237421a7c7f3b82fa554bde87ac7408a75"} Nov 23 07:01:40 crc kubenswrapper[4681]: I1123 07:01:40.633743 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6ff89994d9-cs2z8" podStartSLOduration=3.6337193020000003 podStartE2EDuration="3.633719302s" podCreationTimestamp="2025-11-23 07:01:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:01:40.630853659 +0000 UTC m=+1037.700362896" watchObservedRunningTime="2025-11-23 07:01:40.633719302 +0000 UTC m=+1037.703228538" Nov 23 07:01:40 crc kubenswrapper[4681]: I1123 07:01:40.636691 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-sl56s" podStartSLOduration=2.636682698 podStartE2EDuration="2.636682698s" podCreationTimestamp="2025-11-23 07:01:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:01:40.607307378 +0000 UTC m=+1037.676816616" watchObservedRunningTime="2025-11-23 07:01:40.636682698 +0000 UTC m=+1037.706191935" Nov 23 07:01:40 crc kubenswrapper[4681]: I1123 07:01:40.968303 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:01:41 crc kubenswrapper[4681]: I1123 07:01:41.003744 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 07:01:41 crc kubenswrapper[4681]: I1123 07:01:41.625122 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6ff89994d9-cs2z8" Nov 23 07:01:43 crc kubenswrapper[4681]: I1123 07:01:43.666881 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6f1a5523-438a-46bb-b55e-3d34d2ae1a4f","Type":"ContainerStarted","Data":"e1b2279ce57a9a4d9659318ad5044c75cf561826b599d45f39f2b94b53cc2dc8"} Nov 23 07:01:43 crc kubenswrapper[4681]: I1123 07:01:43.670733 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1b15ac91-9e57-4a6a-95df-49c853fcbb12","Type":"ContainerStarted","Data":"2a374971264bdc8a25d2308699dc5ccf6b9f7023733891571d34b29b8c3a1cd6"} Nov 23 07:01:43 crc kubenswrapper[4681]: I1123 07:01:43.670783 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="1b15ac91-9e57-4a6a-95df-49c853fcbb12" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://2a374971264bdc8a25d2308699dc5ccf6b9f7023733891571d34b29b8c3a1cd6" gracePeriod=30 Nov 23 07:01:43 crc kubenswrapper[4681]: I1123 07:01:43.678801 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"37028da3-dcde-4435-8a68-63d6cee55257","Type":"ContainerStarted","Data":"fae375fe1b895930060d98175845a48277689d36e8a0d0f87c1017db4fc30f7a"} Nov 23 07:01:43 crc kubenswrapper[4681]: I1123 07:01:43.678847 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"37028da3-dcde-4435-8a68-63d6cee55257","Type":"ContainerStarted","Data":"f2ae507c327455df1d66f3e09c88f0201d8a5d0acf0dae88c1faf9b47a1f433a"} Nov 23 07:01:43 crc kubenswrapper[4681]: I1123 07:01:43.678941 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="37028da3-dcde-4435-8a68-63d6cee55257" containerName="nova-metadata-log" containerID="cri-o://f2ae507c327455df1d66f3e09c88f0201d8a5d0acf0dae88c1faf9b47a1f433a" gracePeriod=30 Nov 23 07:01:43 crc kubenswrapper[4681]: I1123 07:01:43.688071 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="37028da3-dcde-4435-8a68-63d6cee55257" containerName="nova-metadata-metadata" containerID="cri-o://fae375fe1b895930060d98175845a48277689d36e8a0d0f87c1017db4fc30f7a" gracePeriod=30 Nov 23 07:01:43 crc kubenswrapper[4681]: I1123 07:01:43.692308 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.706772582 podStartE2EDuration="6.692293194s" podCreationTimestamp="2025-11-23 07:01:37 +0000 UTC" firstStartedPulling="2025-11-23 07:01:38.551181882 +0000 UTC m=+1035.620691119" lastFinishedPulling="2025-11-23 07:01:42.536702494 +0000 UTC m=+1039.606211731" observedRunningTime="2025-11-23 07:01:43.692045367 +0000 UTC m=+1040.761554604" watchObservedRunningTime="2025-11-23 07:01:43.692293194 +0000 UTC m=+1040.761802431" Nov 23 07:01:43 crc kubenswrapper[4681]: I1123 07:01:43.699078 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0b8c8397-3882-47df-9ba8-47f43dfed573","Type":"ContainerStarted","Data":"72c9a75408f807371311bc93923eee84d8aad0b045a1f2c13a4f88efcb795646"} Nov 23 07:01:43 crc kubenswrapper[4681]: I1123 07:01:43.699107 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0b8c8397-3882-47df-9ba8-47f43dfed573","Type":"ContainerStarted","Data":"fba6e20a5ec7aae0d6aae9a271232d472d8fc5b61d1a4100f21dcc9dad3132fc"} Nov 23 07:01:43 crc kubenswrapper[4681]: I1123 07:01:43.714617 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.935604797 podStartE2EDuration="6.714591525s" podCreationTimestamp="2025-11-23 07:01:37 +0000 UTC" firstStartedPulling="2025-11-23 07:01:38.756222873 +0000 UTC m=+1035.825732110" lastFinishedPulling="2025-11-23 07:01:42.535209601 +0000 UTC m=+1039.604718838" observedRunningTime="2025-11-23 07:01:43.710068713 +0000 UTC m=+1040.779577939" watchObservedRunningTime="2025-11-23 07:01:43.714591525 +0000 UTC m=+1040.784100762" Nov 23 07:01:43 crc kubenswrapper[4681]: I1123 07:01:43.725090 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.620487047 podStartE2EDuration="6.725075021s" podCreationTimestamp="2025-11-23 07:01:37 +0000 UTC" firstStartedPulling="2025-11-23 07:01:38.429168558 +0000 UTC m=+1035.498677795" lastFinishedPulling="2025-11-23 07:01:42.533756532 +0000 UTC m=+1039.603265769" observedRunningTime="2025-11-23 07:01:43.721971103 +0000 UTC m=+1040.791480329" watchObservedRunningTime="2025-11-23 07:01:43.725075021 +0000 UTC m=+1040.794584258" Nov 23 07:01:43 crc kubenswrapper[4681]: I1123 07:01:43.740133 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.917076715 podStartE2EDuration="6.740122989s" 
podCreationTimestamp="2025-11-23 07:01:37 +0000 UTC" firstStartedPulling="2025-11-23 07:01:38.718807706 +0000 UTC m=+1035.788316934" lastFinishedPulling="2025-11-23 07:01:42.541853971 +0000 UTC m=+1039.611363208" observedRunningTime="2025-11-23 07:01:43.736705218 +0000 UTC m=+1040.806214455" watchObservedRunningTime="2025-11-23 07:01:43.740122989 +0000 UTC m=+1040.809632225" Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.291244 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.349081 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47hzj\" (UniqueName: \"kubernetes.io/projected/37028da3-dcde-4435-8a68-63d6cee55257-kube-api-access-47hzj\") pod \"37028da3-dcde-4435-8a68-63d6cee55257\" (UID: \"37028da3-dcde-4435-8a68-63d6cee55257\") " Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.349160 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37028da3-dcde-4435-8a68-63d6cee55257-logs\") pod \"37028da3-dcde-4435-8a68-63d6cee55257\" (UID: \"37028da3-dcde-4435-8a68-63d6cee55257\") " Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.349194 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37028da3-dcde-4435-8a68-63d6cee55257-combined-ca-bundle\") pod \"37028da3-dcde-4435-8a68-63d6cee55257\" (UID: \"37028da3-dcde-4435-8a68-63d6cee55257\") " Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.349648 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37028da3-dcde-4435-8a68-63d6cee55257-logs" (OuterVolumeSpecName: "logs") pod "37028da3-dcde-4435-8a68-63d6cee55257" (UID: "37028da3-dcde-4435-8a68-63d6cee55257"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.349746 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37028da3-dcde-4435-8a68-63d6cee55257-config-data\") pod \"37028da3-dcde-4435-8a68-63d6cee55257\" (UID: \"37028da3-dcde-4435-8a68-63d6cee55257\") " Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.350240 4681 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37028da3-dcde-4435-8a68-63d6cee55257-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.361251 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37028da3-dcde-4435-8a68-63d6cee55257-kube-api-access-47hzj" (OuterVolumeSpecName: "kube-api-access-47hzj") pod "37028da3-dcde-4435-8a68-63d6cee55257" (UID: "37028da3-dcde-4435-8a68-63d6cee55257"). InnerVolumeSpecName "kube-api-access-47hzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.377418 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37028da3-dcde-4435-8a68-63d6cee55257-config-data" (OuterVolumeSpecName: "config-data") pod "37028da3-dcde-4435-8a68-63d6cee55257" (UID: "37028da3-dcde-4435-8a68-63d6cee55257"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.399143 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37028da3-dcde-4435-8a68-63d6cee55257-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37028da3-dcde-4435-8a68-63d6cee55257" (UID: "37028da3-dcde-4435-8a68-63d6cee55257"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.451968 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37028da3-dcde-4435-8a68-63d6cee55257-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.452017 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37028da3-dcde-4435-8a68-63d6cee55257-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.452028 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47hzj\" (UniqueName: \"kubernetes.io/projected/37028da3-dcde-4435-8a68-63d6cee55257-kube-api-access-47hzj\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.711961 4681 generic.go:334] "Generic (PLEG): container finished" podID="37028da3-dcde-4435-8a68-63d6cee55257" containerID="fae375fe1b895930060d98175845a48277689d36e8a0d0f87c1017db4fc30f7a" exitCode=0 Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.712016 4681 generic.go:334] "Generic (PLEG): container finished" podID="37028da3-dcde-4435-8a68-63d6cee55257" containerID="f2ae507c327455df1d66f3e09c88f0201d8a5d0acf0dae88c1faf9b47a1f433a" exitCode=143 Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.712018 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"37028da3-dcde-4435-8a68-63d6cee55257","Type":"ContainerDied","Data":"fae375fe1b895930060d98175845a48277689d36e8a0d0f87c1017db4fc30f7a"} Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.712056 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.712173 4681 scope.go:117] "RemoveContainer" containerID="fae375fe1b895930060d98175845a48277689d36e8a0d0f87c1017db4fc30f7a" Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.714727 4681 generic.go:334] "Generic (PLEG): container finished" podID="b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb" containerID="78ebc7f0561d1808407ceae2ed83d51282c96af68e61a7baf50feba9c9957090" exitCode=0 Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.713314 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"37028da3-dcde-4435-8a68-63d6cee55257","Type":"ContainerDied","Data":"f2ae507c327455df1d66f3e09c88f0201d8a5d0acf0dae88c1faf9b47a1f433a"} Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.715707 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"37028da3-dcde-4435-8a68-63d6cee55257","Type":"ContainerDied","Data":"6c04f4f9a3751740471c4c19779c1082364b83bb46d921ef4553bf4c43bc4e50"} Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.715722 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-sl56s" event={"ID":"b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb","Type":"ContainerDied","Data":"78ebc7f0561d1808407ceae2ed83d51282c96af68e61a7baf50feba9c9957090"} Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.734096 4681 scope.go:117] "RemoveContainer" containerID="f2ae507c327455df1d66f3e09c88f0201d8a5d0acf0dae88c1faf9b47a1f433a" Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.761955 4681 scope.go:117] "RemoveContainer" containerID="fae375fe1b895930060d98175845a48277689d36e8a0d0f87c1017db4fc30f7a" Nov 23 07:01:44 crc kubenswrapper[4681]: E1123 07:01:44.762785 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fae375fe1b895930060d98175845a48277689d36e8a0d0f87c1017db4fc30f7a\": container with ID starting with fae375fe1b895930060d98175845a48277689d36e8a0d0f87c1017db4fc30f7a not found: ID does not exist" containerID="fae375fe1b895930060d98175845a48277689d36e8a0d0f87c1017db4fc30f7a" Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.762846 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fae375fe1b895930060d98175845a48277689d36e8a0d0f87c1017db4fc30f7a"} err="failed to get container status \"fae375fe1b895930060d98175845a48277689d36e8a0d0f87c1017db4fc30f7a\": rpc error: code = NotFound desc = could not find container \"fae375fe1b895930060d98175845a48277689d36e8a0d0f87c1017db4fc30f7a\": container with ID starting with fae375fe1b895930060d98175845a48277689d36e8a0d0f87c1017db4fc30f7a not found: ID does not exist" Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.762889 4681 scope.go:117] "RemoveContainer" containerID="f2ae507c327455df1d66f3e09c88f0201d8a5d0acf0dae88c1faf9b47a1f433a" Nov 23 07:01:44 crc kubenswrapper[4681]: E1123 07:01:44.766377 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2ae507c327455df1d66f3e09c88f0201d8a5d0acf0dae88c1faf9b47a1f433a\": container with ID starting with f2ae507c327455df1d66f3e09c88f0201d8a5d0acf0dae88c1faf9b47a1f433a not found: ID does not exist" containerID="f2ae507c327455df1d66f3e09c88f0201d8a5d0acf0dae88c1faf9b47a1f433a" Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.766484 4681 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2ae507c327455df1d66f3e09c88f0201d8a5d0acf0dae88c1faf9b47a1f433a"} err="failed to get container status \"f2ae507c327455df1d66f3e09c88f0201d8a5d0acf0dae88c1faf9b47a1f433a\": rpc error: code = NotFound desc = could not find container \"f2ae507c327455df1d66f3e09c88f0201d8a5d0acf0dae88c1faf9b47a1f433a\": container with ID starting with f2ae507c327455df1d66f3e09c88f0201d8a5d0acf0dae88c1faf9b47a1f433a not found: ID does not exist" Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.766562 4681 scope.go:117] "RemoveContainer" containerID="fae375fe1b895930060d98175845a48277689d36e8a0d0f87c1017db4fc30f7a" Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.767090 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fae375fe1b895930060d98175845a48277689d36e8a0d0f87c1017db4fc30f7a"} err="failed to get container status \"fae375fe1b895930060d98175845a48277689d36e8a0d0f87c1017db4fc30f7a\": rpc error: code = NotFound desc = could not find container \"fae375fe1b895930060d98175845a48277689d36e8a0d0f87c1017db4fc30f7a\": container with ID starting with fae375fe1b895930060d98175845a48277689d36e8a0d0f87c1017db4fc30f7a not found: ID does not exist" Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.767116 4681 scope.go:117] "RemoveContainer" containerID="f2ae507c327455df1d66f3e09c88f0201d8a5d0acf0dae88c1faf9b47a1f433a" Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.774544 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2ae507c327455df1d66f3e09c88f0201d8a5d0acf0dae88c1faf9b47a1f433a"} err="failed to get container status \"f2ae507c327455df1d66f3e09c88f0201d8a5d0acf0dae88c1faf9b47a1f433a\": rpc error: code = NotFound desc = could not find container \"f2ae507c327455df1d66f3e09c88f0201d8a5d0acf0dae88c1faf9b47a1f433a\": container with ID starting with f2ae507c327455df1d66f3e09c88f0201d8a5d0acf0dae88c1faf9b47a1f433a not found: ID does not exist" Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.785625 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.798663 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.812511 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:01:44 crc kubenswrapper[4681]: E1123 07:01:44.813187 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37028da3-dcde-4435-8a68-63d6cee55257" containerName="nova-metadata-metadata" Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.813209 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="37028da3-dcde-4435-8a68-63d6cee55257" containerName="nova-metadata-metadata" Nov 23 07:01:44 crc kubenswrapper[4681]: E1123 07:01:44.813227 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37028da3-dcde-4435-8a68-63d6cee55257" containerName="nova-metadata-log" Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.813234 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="37028da3-dcde-4435-8a68-63d6cee55257" containerName="nova-metadata-log" Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.813544 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="37028da3-dcde-4435-8a68-63d6cee55257" containerName="nova-metadata-metadata" Nov 23 
07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.813560 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="37028da3-dcde-4435-8a68-63d6cee55257" containerName="nova-metadata-log" Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.814944 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.817981 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.818192 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.822510 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.961897 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f86a06af-5af9-4480-b325-2df7ad2db0ff-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f86a06af-5af9-4480-b325-2df7ad2db0ff\") " pod="openstack/nova-metadata-0" Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.961993 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f86a06af-5af9-4480-b325-2df7ad2db0ff-config-data\") pod \"nova-metadata-0\" (UID: \"f86a06af-5af9-4480-b325-2df7ad2db0ff\") " pod="openstack/nova-metadata-0" Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.962021 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f86a06af-5af9-4480-b325-2df7ad2db0ff-logs\") pod \"nova-metadata-0\" (UID: \"f86a06af-5af9-4480-b325-2df7ad2db0ff\") " pod="openstack/nova-metadata-0" Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.962320 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f86a06af-5af9-4480-b325-2df7ad2db0ff-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f86a06af-5af9-4480-b325-2df7ad2db0ff\") " pod="openstack/nova-metadata-0" Nov 23 07:01:44 crc kubenswrapper[4681]: I1123 07:01:44.962420 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtz62\" (UniqueName: \"kubernetes.io/projected/f86a06af-5af9-4480-b325-2df7ad2db0ff-kube-api-access-mtz62\") pod \"nova-metadata-0\" (UID: \"f86a06af-5af9-4480-b325-2df7ad2db0ff\") " pod="openstack/nova-metadata-0" Nov 23 07:01:45 crc kubenswrapper[4681]: I1123 07:01:45.064599 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f86a06af-5af9-4480-b325-2df7ad2db0ff-config-data\") pod \"nova-metadata-0\" (UID: \"f86a06af-5af9-4480-b325-2df7ad2db0ff\") " pod="openstack/nova-metadata-0" Nov 23 07:01:45 crc kubenswrapper[4681]: I1123 07:01:45.064668 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f86a06af-5af9-4480-b325-2df7ad2db0ff-logs\") pod \"nova-metadata-0\" (UID: \"f86a06af-5af9-4480-b325-2df7ad2db0ff\") " pod="openstack/nova-metadata-0" Nov 23 07:01:45 crc kubenswrapper[4681]: I1123 07:01:45.064788 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f86a06af-5af9-4480-b325-2df7ad2db0ff-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f86a06af-5af9-4480-b325-2df7ad2db0ff\") " pod="openstack/nova-metadata-0" Nov 23 07:01:45 crc kubenswrapper[4681]: I1123 07:01:45.064826 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtz62\" (UniqueName: \"kubernetes.io/projected/f86a06af-5af9-4480-b325-2df7ad2db0ff-kube-api-access-mtz62\") pod \"nova-metadata-0\" (UID: \"f86a06af-5af9-4480-b325-2df7ad2db0ff\") " pod="openstack/nova-metadata-0" Nov 23 07:01:45 crc kubenswrapper[4681]: I1123 07:01:45.064890 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f86a06af-5af9-4480-b325-2df7ad2db0ff-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f86a06af-5af9-4480-b325-2df7ad2db0ff\") " pod="openstack/nova-metadata-0" Nov 23 07:01:45 crc kubenswrapper[4681]: I1123 07:01:45.066930 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f86a06af-5af9-4480-b325-2df7ad2db0ff-logs\") pod \"nova-metadata-0\" (UID: \"f86a06af-5af9-4480-b325-2df7ad2db0ff\") " pod="openstack/nova-metadata-0" Nov 23 07:01:45 crc kubenswrapper[4681]: I1123 07:01:45.072815 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f86a06af-5af9-4480-b325-2df7ad2db0ff-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f86a06af-5af9-4480-b325-2df7ad2db0ff\") " pod="openstack/nova-metadata-0" Nov 23 07:01:45 crc kubenswrapper[4681]: I1123 07:01:45.073297 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f86a06af-5af9-4480-b325-2df7ad2db0ff-config-data\") pod \"nova-metadata-0\" (UID: \"f86a06af-5af9-4480-b325-2df7ad2db0ff\") " pod="openstack/nova-metadata-0" Nov 23 07:01:45 crc kubenswrapper[4681]: I1123 07:01:45.076903 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f86a06af-5af9-4480-b325-2df7ad2db0ff-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f86a06af-5af9-4480-b325-2df7ad2db0ff\") " pod="openstack/nova-metadata-0" Nov 23 07:01:45 crc kubenswrapper[4681]: I1123 07:01:45.085863 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtz62\" (UniqueName: \"kubernetes.io/projected/f86a06af-5af9-4480-b325-2df7ad2db0ff-kube-api-access-mtz62\") pod \"nova-metadata-0\" (UID: \"f86a06af-5af9-4480-b325-2df7ad2db0ff\") " pod="openstack/nova-metadata-0" Nov 23 07:01:45 crc kubenswrapper[4681]: I1123 07:01:45.135618 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 07:01:45 crc kubenswrapper[4681]: I1123 07:01:45.283311 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37028da3-dcde-4435-8a68-63d6cee55257" path="/var/lib/kubelet/pods/37028da3-dcde-4435-8a68-63d6cee55257/volumes" Nov 23 07:01:45 crc kubenswrapper[4681]: I1123 07:01:45.578066 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:01:45 crc kubenswrapper[4681]: I1123 07:01:45.727988 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f86a06af-5af9-4480-b325-2df7ad2db0ff","Type":"ContainerStarted","Data":"186c09e63ceb7c535043f98e9b34bfa07b73650eccd9f24579aba0cb604bd520"} Nov 23 07:01:46 crc kubenswrapper[4681]: I1123 07:01:46.002068 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-sl56s" Nov 23 07:01:46 crc kubenswrapper[4681]: I1123 07:01:46.199099 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb-combined-ca-bundle\") pod \"b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb\" (UID: \"b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb\") " Nov 23 07:01:46 crc kubenswrapper[4681]: I1123 07:01:46.199249 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb-scripts\") pod \"b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb\" (UID: \"b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb\") " Nov 23 07:01:46 crc kubenswrapper[4681]: I1123 07:01:46.199278 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxnmd\" (UniqueName: \"kubernetes.io/projected/b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb-kube-api-access-dxnmd\") pod \"b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb\" (UID: \"b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb\") " Nov 23 07:01:46 crc kubenswrapper[4681]: I1123 07:01:46.199724 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb-config-data\") pod \"b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb\" (UID: \"b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb\") " Nov 23 07:01:46 crc kubenswrapper[4681]: I1123 07:01:46.205677 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb-scripts" (OuterVolumeSpecName: "scripts") pod "b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb" (UID: "b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:01:46 crc kubenswrapper[4681]: I1123 07:01:46.208352 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb-kube-api-access-dxnmd" (OuterVolumeSpecName: "kube-api-access-dxnmd") pod "b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb" (UID: "b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb"). InnerVolumeSpecName "kube-api-access-dxnmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:01:46 crc kubenswrapper[4681]: E1123 07:01:46.230077 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb-config-data podName:b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb nodeName:}" failed. 
No retries permitted until 2025-11-23 07:01:46.730050006 +0000 UTC m=+1043.799559243 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb-config-data") pod "b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb" (UID: "b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb") : error deleting /var/lib/kubelet/pods/b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb/volume-subpaths: remove /var/lib/kubelet/pods/b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb/volume-subpaths: no such file or directory Nov 23 07:01:46 crc kubenswrapper[4681]: I1123 07:01:46.233379 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb" (UID: "b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:01:46 crc kubenswrapper[4681]: I1123 07:01:46.303842 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:46 crc kubenswrapper[4681]: I1123 07:01:46.304267 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:46 crc kubenswrapper[4681]: I1123 07:01:46.304693 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxnmd\" (UniqueName: \"kubernetes.io/projected/b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb-kube-api-access-dxnmd\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:46 crc kubenswrapper[4681]: I1123 07:01:46.741634 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-sl56s" event={"ID":"b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb","Type":"ContainerDied","Data":"f3ae09ecf5d74e7cfcdac2275ab1e00582bd02d25ed5f8e50f5379ae0a067431"} Nov 23 07:01:46 crc kubenswrapper[4681]: I1123 07:01:46.741748 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3ae09ecf5d74e7cfcdac2275ab1e00582bd02d25ed5f8e50f5379ae0a067431" Nov 23 07:01:46 crc kubenswrapper[4681]: I1123 07:01:46.741685 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-sl56s" Nov 23 07:01:46 crc kubenswrapper[4681]: I1123 07:01:46.744682 4681 generic.go:334] "Generic (PLEG): container finished" podID="1f0f8d82-e774-42f2-b0a4-df7abb1ce348" containerID="183b917c45087d9603a7ee2e288f12a5785273ec6893fe5851e96e98dbbba738" exitCode=0 Nov 23 07:01:46 crc kubenswrapper[4681]: I1123 07:01:46.744791 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-n5wff" event={"ID":"1f0f8d82-e774-42f2-b0a4-df7abb1ce348","Type":"ContainerDied","Data":"183b917c45087d9603a7ee2e288f12a5785273ec6893fe5851e96e98dbbba738"} Nov 23 07:01:46 crc kubenswrapper[4681]: I1123 07:01:46.750613 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f86a06af-5af9-4480-b325-2df7ad2db0ff","Type":"ContainerStarted","Data":"4277629cbf931544b4d60d41d5f2fc4f6275503c16fc506ecfc21dc6c63d510e"} Nov 23 07:01:46 crc kubenswrapper[4681]: I1123 07:01:46.750683 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f86a06af-5af9-4480-b325-2df7ad2db0ff","Type":"ContainerStarted","Data":"e37f551a7d65dafa3abe9878c4bb6824a07df2a764028de92c6b8110c93d29b9"} Nov 23 07:01:46 crc kubenswrapper[4681]: I1123 07:01:46.799169 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.799148707 podStartE2EDuration="2.799148707s" podCreationTimestamp="2025-11-23 07:01:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:01:46.793703695 +0000 UTC m=+1043.863212933" watchObservedRunningTime="2025-11-23 07:01:46.799148707 +0000 UTC m=+1043.868657944" Nov 23 07:01:46 crc kubenswrapper[4681]: I1123 07:01:46.810655 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb-config-data\") pod \"b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb\" (UID: \"b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb\") " Nov 23 07:01:46 crc kubenswrapper[4681]: I1123 07:01:46.814982 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb-config-data" (OuterVolumeSpecName: "config-data") pod "b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb" (UID: "b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:01:46 crc kubenswrapper[4681]: I1123 07:01:46.837648 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 07:01:46 crc kubenswrapper[4681]: E1123 07:01:46.838181 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb" containerName="nova-cell1-conductor-db-sync" Nov 23 07:01:46 crc kubenswrapper[4681]: I1123 07:01:46.838200 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb" containerName="nova-cell1-conductor-db-sync" Nov 23 07:01:46 crc kubenswrapper[4681]: I1123 07:01:46.838367 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb" containerName="nova-cell1-conductor-db-sync" Nov 23 07:01:46 crc kubenswrapper[4681]: I1123 07:01:46.839084 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 23 07:01:46 crc kubenswrapper[4681]: I1123 07:01:46.849230 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 07:01:46 crc kubenswrapper[4681]: I1123 07:01:46.913921 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:47 crc kubenswrapper[4681]: I1123 07:01:47.015368 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e640b3-0dfe-449e-aec0-a2daa2a5ce25-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e5e640b3-0dfe-449e-aec0-a2daa2a5ce25\") " pod="openstack/nova-cell1-conductor-0" Nov 23 07:01:47 crc kubenswrapper[4681]: I1123 07:01:47.016328 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvhgt\" (UniqueName: \"kubernetes.io/projected/e5e640b3-0dfe-449e-aec0-a2daa2a5ce25-kube-api-access-qvhgt\") pod \"nova-cell1-conductor-0\" (UID: \"e5e640b3-0dfe-449e-aec0-a2daa2a5ce25\") " pod="openstack/nova-cell1-conductor-0" Nov 23 07:01:47 crc kubenswrapper[4681]: I1123 07:01:47.016675 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5e640b3-0dfe-449e-aec0-a2daa2a5ce25-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e5e640b3-0dfe-449e-aec0-a2daa2a5ce25\") " pod="openstack/nova-cell1-conductor-0" Nov 23 07:01:47 crc kubenswrapper[4681]: I1123 07:01:47.118253 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvhgt\" (UniqueName: \"kubernetes.io/projected/e5e640b3-0dfe-449e-aec0-a2daa2a5ce25-kube-api-access-qvhgt\") pod \"nova-cell1-conductor-0\" (UID: \"e5e640b3-0dfe-449e-aec0-a2daa2a5ce25\") " pod="openstack/nova-cell1-conductor-0" Nov 23 07:01:47 crc kubenswrapper[4681]: I1123 07:01:47.118692 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5e640b3-0dfe-449e-aec0-a2daa2a5ce25-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e5e640b3-0dfe-449e-aec0-a2daa2a5ce25\") " pod="openstack/nova-cell1-conductor-0" Nov 23 07:01:47 crc kubenswrapper[4681]: I1123 07:01:47.118758 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e640b3-0dfe-449e-aec0-a2daa2a5ce25-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e5e640b3-0dfe-449e-aec0-a2daa2a5ce25\") " pod="openstack/nova-cell1-conductor-0" Nov 23 07:01:47 crc kubenswrapper[4681]: I1123 07:01:47.125205 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5e640b3-0dfe-449e-aec0-a2daa2a5ce25-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e5e640b3-0dfe-449e-aec0-a2daa2a5ce25\") " pod="openstack/nova-cell1-conductor-0" Nov 23 07:01:47 crc kubenswrapper[4681]: I1123 07:01:47.125452 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e640b3-0dfe-449e-aec0-a2daa2a5ce25-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e5e640b3-0dfe-449e-aec0-a2daa2a5ce25\") " pod="openstack/nova-cell1-conductor-0" 
Nov 23 07:01:47 crc kubenswrapper[4681]: I1123 07:01:47.136663 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvhgt\" (UniqueName: \"kubernetes.io/projected/e5e640b3-0dfe-449e-aec0-a2daa2a5ce25-kube-api-access-qvhgt\") pod \"nova-cell1-conductor-0\" (UID: \"e5e640b3-0dfe-449e-aec0-a2daa2a5ce25\") " pod="openstack/nova-cell1-conductor-0" Nov 23 07:01:47 crc kubenswrapper[4681]: I1123 07:01:47.167977 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 23 07:01:47 crc kubenswrapper[4681]: I1123 07:01:47.585535 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 07:01:47 crc kubenswrapper[4681]: W1123 07:01:47.590174 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5e640b3_0dfe_449e_aec0_a2daa2a5ce25.slice/crio-10f829c63a20e259e3ab86fc465aec14cc062780b744f7ad9aca10676a359931 WatchSource:0}: Error finding container 10f829c63a20e259e3ab86fc465aec14cc062780b744f7ad9aca10676a359931: Status 404 returned error can't find the container with id 10f829c63a20e259e3ab86fc465aec14cc062780b744f7ad9aca10676a359931 Nov 23 07:01:47 crc kubenswrapper[4681]: I1123 07:01:47.695785 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 23 07:01:47 crc kubenswrapper[4681]: I1123 07:01:47.695857 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:01:47 crc kubenswrapper[4681]: I1123 07:01:47.695876 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 23 07:01:47 crc kubenswrapper[4681]: I1123 07:01:47.725219 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 23 07:01:47 crc kubenswrapper[4681]: I1123 07:01:47.762952 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e5e640b3-0dfe-449e-aec0-a2daa2a5ce25","Type":"ContainerStarted","Data":"10f829c63a20e259e3ab86fc465aec14cc062780b744f7ad9aca10676a359931"} Nov 23 07:01:47 crc kubenswrapper[4681]: I1123 07:01:47.802663 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.042069 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.042125 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.134876 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-n5wff" Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.227701 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6ff89994d9-cs2z8" Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.265478 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f0f8d82-e774-42f2-b0a4-df7abb1ce348-combined-ca-bundle\") pod \"1f0f8d82-e774-42f2-b0a4-df7abb1ce348\" (UID: \"1f0f8d82-e774-42f2-b0a4-df7abb1ce348\") " Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.265711 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pklfk\" (UniqueName: \"kubernetes.io/projected/1f0f8d82-e774-42f2-b0a4-df7abb1ce348-kube-api-access-pklfk\") pod \"1f0f8d82-e774-42f2-b0a4-df7abb1ce348\" (UID: \"1f0f8d82-e774-42f2-b0a4-df7abb1ce348\") " Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.265827 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f0f8d82-e774-42f2-b0a4-df7abb1ce348-scripts\") pod \"1f0f8d82-e774-42f2-b0a4-df7abb1ce348\" (UID: \"1f0f8d82-e774-42f2-b0a4-df7abb1ce348\") " Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.265850 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f0f8d82-e774-42f2-b0a4-df7abb1ce348-config-data\") pod \"1f0f8d82-e774-42f2-b0a4-df7abb1ce348\" (UID: \"1f0f8d82-e774-42f2-b0a4-df7abb1ce348\") " Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.279770 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f0f8d82-e774-42f2-b0a4-df7abb1ce348-kube-api-access-pklfk" (OuterVolumeSpecName: "kube-api-access-pklfk") pod "1f0f8d82-e774-42f2-b0a4-df7abb1ce348" (UID: "1f0f8d82-e774-42f2-b0a4-df7abb1ce348"). InnerVolumeSpecName "kube-api-access-pklfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.294684 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cfb689747-vscpn"] Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.294926 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cfb689747-vscpn" podUID="94898d17-ee5b-4035-aff2-db846fcfa5f7" containerName="dnsmasq-dns" containerID="cri-o://380b6d44c65b94cb8300f25cdc2b9d551d69e23126a4faa107f6a923df8c4287" gracePeriod=10 Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.297439 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f0f8d82-e774-42f2-b0a4-df7abb1ce348-scripts" (OuterVolumeSpecName: "scripts") pod "1f0f8d82-e774-42f2-b0a4-df7abb1ce348" (UID: "1f0f8d82-e774-42f2-b0a4-df7abb1ce348"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.324689 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f0f8d82-e774-42f2-b0a4-df7abb1ce348-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f0f8d82-e774-42f2-b0a4-df7abb1ce348" (UID: "1f0f8d82-e774-42f2-b0a4-df7abb1ce348"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.341675 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f0f8d82-e774-42f2-b0a4-df7abb1ce348-config-data" (OuterVolumeSpecName: "config-data") pod "1f0f8d82-e774-42f2-b0a4-df7abb1ce348" (UID: "1f0f8d82-e774-42f2-b0a4-df7abb1ce348"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.372255 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f0f8d82-e774-42f2-b0a4-df7abb1ce348-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.372285 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pklfk\" (UniqueName: \"kubernetes.io/projected/1f0f8d82-e774-42f2-b0a4-df7abb1ce348-kube-api-access-pklfk\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.372299 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f0f8d82-e774-42f2-b0a4-df7abb1ce348-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.372308 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f0f8d82-e774-42f2-b0a4-df7abb1ce348-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.513134 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cfb689747-vscpn" podUID="94898d17-ee5b-4035-aff2-db846fcfa5f7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.180:5353: connect: connection refused" Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.800546 4681 generic.go:334] "Generic (PLEG): container finished" podID="94898d17-ee5b-4035-aff2-db846fcfa5f7" containerID="380b6d44c65b94cb8300f25cdc2b9d551d69e23126a4faa107f6a923df8c4287" exitCode=0 Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.800626 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cfb689747-vscpn" event={"ID":"94898d17-ee5b-4035-aff2-db846fcfa5f7","Type":"ContainerDied","Data":"380b6d44c65b94cb8300f25cdc2b9d551d69e23126a4faa107f6a923df8c4287"} Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.800663 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cfb689747-vscpn" event={"ID":"94898d17-ee5b-4035-aff2-db846fcfa5f7","Type":"ContainerDied","Data":"4726dbe1e5d2954e387c8083fec70624ee0daa9523be1be33e56f32861ab0a88"} Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.800675 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4726dbe1e5d2954e387c8083fec70624ee0daa9523be1be33e56f32861ab0a88" Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.817329 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e5e640b3-0dfe-449e-aec0-a2daa2a5ce25","Type":"ContainerStarted","Data":"9466b9bea88f66060a728b3d08ee4bed060324e8c9a740544e72beaf8bf9519c"} Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.818128 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.821512 4681 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/nova-cell0-cell-mapping-n5wff" event={"ID":"1f0f8d82-e774-42f2-b0a4-df7abb1ce348","Type":"ContainerDied","Data":"b056eb5be76ea6ac33a8584320a5bc85ebaa76d4522f29e1f4d011a6977aaa11"} Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.821553 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-n5wff" Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.821553 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b056eb5be76ea6ac33a8584320a5bc85ebaa76d4522f29e1f4d011a6977aaa11" Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.850132 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.850105838 podStartE2EDuration="2.850105838s" podCreationTimestamp="2025-11-23 07:01:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:01:48.836659588 +0000 UTC m=+1045.906168825" watchObservedRunningTime="2025-11-23 07:01:48.850105838 +0000 UTC m=+1045.919615075" Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.868197 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cfb689747-vscpn" Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.895601 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94898d17-ee5b-4035-aff2-db846fcfa5f7-dns-swift-storage-0\") pod \"94898d17-ee5b-4035-aff2-db846fcfa5f7\" (UID: \"94898d17-ee5b-4035-aff2-db846fcfa5f7\") " Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.895890 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94898d17-ee5b-4035-aff2-db846fcfa5f7-ovsdbserver-sb\") pod \"94898d17-ee5b-4035-aff2-db846fcfa5f7\" (UID: \"94898d17-ee5b-4035-aff2-db846fcfa5f7\") " Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.896014 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qkzc\" (UniqueName: \"kubernetes.io/projected/94898d17-ee5b-4035-aff2-db846fcfa5f7-kube-api-access-5qkzc\") pod \"94898d17-ee5b-4035-aff2-db846fcfa5f7\" (UID: \"94898d17-ee5b-4035-aff2-db846fcfa5f7\") " Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.896213 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94898d17-ee5b-4035-aff2-db846fcfa5f7-config\") pod \"94898d17-ee5b-4035-aff2-db846fcfa5f7\" (UID: \"94898d17-ee5b-4035-aff2-db846fcfa5f7\") " Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.896293 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94898d17-ee5b-4035-aff2-db846fcfa5f7-dns-svc\") pod \"94898d17-ee5b-4035-aff2-db846fcfa5f7\" (UID: \"94898d17-ee5b-4035-aff2-db846fcfa5f7\") " Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.896425 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94898d17-ee5b-4035-aff2-db846fcfa5f7-ovsdbserver-nb\") pod \"94898d17-ee5b-4035-aff2-db846fcfa5f7\" (UID: \"94898d17-ee5b-4035-aff2-db846fcfa5f7\") " Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.936734 4681 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94898d17-ee5b-4035-aff2-db846fcfa5f7-kube-api-access-5qkzc" (OuterVolumeSpecName: "kube-api-access-5qkzc") pod "94898d17-ee5b-4035-aff2-db846fcfa5f7" (UID: "94898d17-ee5b-4035-aff2-db846fcfa5f7"). InnerVolumeSpecName "kube-api-access-5qkzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.989792 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:01:48 crc kubenswrapper[4681]: I1123 07:01:48.999006 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qkzc\" (UniqueName: \"kubernetes.io/projected/94898d17-ee5b-4035-aff2-db846fcfa5f7-kube-api-access-5qkzc\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.005536 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94898d17-ee5b-4035-aff2-db846fcfa5f7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "94898d17-ee5b-4035-aff2-db846fcfa5f7" (UID: "94898d17-ee5b-4035-aff2-db846fcfa5f7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.012683 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.012920 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0b8c8397-3882-47df-9ba8-47f43dfed573" containerName="nova-api-log" containerID="cri-o://fba6e20a5ec7aae0d6aae9a271232d472d8fc5b61d1a4100f21dcc9dad3132fc" gracePeriod=30 Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.013356 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0b8c8397-3882-47df-9ba8-47f43dfed573" containerName="nova-api-api" containerID="cri-o://72c9a75408f807371311bc93923eee84d8aad0b045a1f2c13a4f88efcb795646" gracePeriod=30 Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.022049 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0b8c8397-3882-47df-9ba8-47f43dfed573" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.206:8774/\": EOF" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.022400 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0b8c8397-3882-47df-9ba8-47f43dfed573" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.206:8774/\": EOF" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.035654 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94898d17-ee5b-4035-aff2-db846fcfa5f7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "94898d17-ee5b-4035-aff2-db846fcfa5f7" (UID: "94898d17-ee5b-4035-aff2-db846fcfa5f7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.045368 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.045618 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f86a06af-5af9-4480-b325-2df7ad2db0ff" containerName="nova-metadata-log" containerID="cri-o://e37f551a7d65dafa3abe9878c4bb6824a07df2a764028de92c6b8110c93d29b9" gracePeriod=30 Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.045811 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f86a06af-5af9-4480-b325-2df7ad2db0ff" containerName="nova-metadata-metadata" containerID="cri-o://4277629cbf931544b4d60d41d5f2fc4f6275503c16fc506ecfc21dc6c63d510e" gracePeriod=30 Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.063314 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94898d17-ee5b-4035-aff2-db846fcfa5f7-config" (OuterVolumeSpecName: "config") pod "94898d17-ee5b-4035-aff2-db846fcfa5f7" (UID: "94898d17-ee5b-4035-aff2-db846fcfa5f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.081303 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94898d17-ee5b-4035-aff2-db846fcfa5f7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "94898d17-ee5b-4035-aff2-db846fcfa5f7" (UID: "94898d17-ee5b-4035-aff2-db846fcfa5f7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.087354 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94898d17-ee5b-4035-aff2-db846fcfa5f7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "94898d17-ee5b-4035-aff2-db846fcfa5f7" (UID: "94898d17-ee5b-4035-aff2-db846fcfa5f7"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.104826 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94898d17-ee5b-4035-aff2-db846fcfa5f7-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.104950 4681 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94898d17-ee5b-4035-aff2-db846fcfa5f7-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.105037 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94898d17-ee5b-4035-aff2-db846fcfa5f7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.105124 4681 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94898d17-ee5b-4035-aff2-db846fcfa5f7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.105197 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94898d17-ee5b-4035-aff2-db846fcfa5f7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.542801 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.612233 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f86a06af-5af9-4480-b325-2df7ad2db0ff-config-data\") pod \"f86a06af-5af9-4480-b325-2df7ad2db0ff\" (UID: \"f86a06af-5af9-4480-b325-2df7ad2db0ff\") " Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.612278 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f86a06af-5af9-4480-b325-2df7ad2db0ff-logs\") pod \"f86a06af-5af9-4480-b325-2df7ad2db0ff\" (UID: \"f86a06af-5af9-4480-b325-2df7ad2db0ff\") " Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.612315 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f86a06af-5af9-4480-b325-2df7ad2db0ff-combined-ca-bundle\") pod \"f86a06af-5af9-4480-b325-2df7ad2db0ff\" (UID: \"f86a06af-5af9-4480-b325-2df7ad2db0ff\") " Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.612388 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtz62\" (UniqueName: \"kubernetes.io/projected/f86a06af-5af9-4480-b325-2df7ad2db0ff-kube-api-access-mtz62\") pod \"f86a06af-5af9-4480-b325-2df7ad2db0ff\" (UID: \"f86a06af-5af9-4480-b325-2df7ad2db0ff\") " Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.612427 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f86a06af-5af9-4480-b325-2df7ad2db0ff-nova-metadata-tls-certs\") pod \"f86a06af-5af9-4480-b325-2df7ad2db0ff\" (UID: \"f86a06af-5af9-4480-b325-2df7ad2db0ff\") " Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.613178 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f86a06af-5af9-4480-b325-2df7ad2db0ff-logs" 
(OuterVolumeSpecName: "logs") pod "f86a06af-5af9-4480-b325-2df7ad2db0ff" (UID: "f86a06af-5af9-4480-b325-2df7ad2db0ff"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.615740 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f86a06af-5af9-4480-b325-2df7ad2db0ff-kube-api-access-mtz62" (OuterVolumeSpecName: "kube-api-access-mtz62") pod "f86a06af-5af9-4480-b325-2df7ad2db0ff" (UID: "f86a06af-5af9-4480-b325-2df7ad2db0ff"). InnerVolumeSpecName "kube-api-access-mtz62". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.649129 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f86a06af-5af9-4480-b325-2df7ad2db0ff-config-data" (OuterVolumeSpecName: "config-data") pod "f86a06af-5af9-4480-b325-2df7ad2db0ff" (UID: "f86a06af-5af9-4480-b325-2df7ad2db0ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.667483 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f86a06af-5af9-4480-b325-2df7ad2db0ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f86a06af-5af9-4480-b325-2df7ad2db0ff" (UID: "f86a06af-5af9-4480-b325-2df7ad2db0ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.695568 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f86a06af-5af9-4480-b325-2df7ad2db0ff-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "f86a06af-5af9-4480-b325-2df7ad2db0ff" (UID: "f86a06af-5af9-4480-b325-2df7ad2db0ff"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.716028 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f86a06af-5af9-4480-b325-2df7ad2db0ff-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.716053 4681 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f86a06af-5af9-4480-b325-2df7ad2db0ff-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.716064 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f86a06af-5af9-4480-b325-2df7ad2db0ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.716075 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtz62\" (UniqueName: \"kubernetes.io/projected/f86a06af-5af9-4480-b325-2df7ad2db0ff-kube-api-access-mtz62\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.716084 4681 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f86a06af-5af9-4480-b325-2df7ad2db0ff-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.831142 4681 generic.go:334] "Generic (PLEG): container finished" podID="0b8c8397-3882-47df-9ba8-47f43dfed573" containerID="fba6e20a5ec7aae0d6aae9a271232d472d8fc5b61d1a4100f21dcc9dad3132fc" exitCode=143 Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.832060 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0b8c8397-3882-47df-9ba8-47f43dfed573","Type":"ContainerDied","Data":"fba6e20a5ec7aae0d6aae9a271232d472d8fc5b61d1a4100f21dcc9dad3132fc"} Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.833771 4681 generic.go:334] "Generic (PLEG): container finished" podID="f86a06af-5af9-4480-b325-2df7ad2db0ff" containerID="4277629cbf931544b4d60d41d5f2fc4f6275503c16fc506ecfc21dc6c63d510e" exitCode=0 Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.833878 4681 generic.go:334] "Generic (PLEG): container finished" podID="f86a06af-5af9-4480-b325-2df7ad2db0ff" containerID="e37f551a7d65dafa3abe9878c4bb6824a07df2a764028de92c6b8110c93d29b9" exitCode=143 Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.834151 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="6f1a5523-438a-46bb-b55e-3d34d2ae1a4f" containerName="nova-scheduler-scheduler" containerID="cri-o://e1b2279ce57a9a4d9659318ad5044c75cf561826b599d45f39f2b94b53cc2dc8" gracePeriod=30 Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.834577 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.838005 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cfb689747-vscpn" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.838025 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f86a06af-5af9-4480-b325-2df7ad2db0ff","Type":"ContainerDied","Data":"4277629cbf931544b4d60d41d5f2fc4f6275503c16fc506ecfc21dc6c63d510e"} Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.838138 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f86a06af-5af9-4480-b325-2df7ad2db0ff","Type":"ContainerDied","Data":"e37f551a7d65dafa3abe9878c4bb6824a07df2a764028de92c6b8110c93d29b9"} Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.838153 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f86a06af-5af9-4480-b325-2df7ad2db0ff","Type":"ContainerDied","Data":"186c09e63ceb7c535043f98e9b34bfa07b73650eccd9f24579aba0cb604bd520"} Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.838223 4681 scope.go:117] "RemoveContainer" containerID="4277629cbf931544b4d60d41d5f2fc4f6275503c16fc506ecfc21dc6c63d510e" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.868318 4681 scope.go:117] "RemoveContainer" containerID="e37f551a7d65dafa3abe9878c4bb6824a07df2a764028de92c6b8110c93d29b9" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.879844 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cfb689747-vscpn"] Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.893619 4681 scope.go:117] "RemoveContainer" containerID="4277629cbf931544b4d60d41d5f2fc4f6275503c16fc506ecfc21dc6c63d510e" Nov 23 07:01:49 crc kubenswrapper[4681]: E1123 07:01:49.918074 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4277629cbf931544b4d60d41d5f2fc4f6275503c16fc506ecfc21dc6c63d510e\": container with ID starting with 4277629cbf931544b4d60d41d5f2fc4f6275503c16fc506ecfc21dc6c63d510e not found: ID does not exist" containerID="4277629cbf931544b4d60d41d5f2fc4f6275503c16fc506ecfc21dc6c63d510e" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.918137 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4277629cbf931544b4d60d41d5f2fc4f6275503c16fc506ecfc21dc6c63d510e"} err="failed to get container status \"4277629cbf931544b4d60d41d5f2fc4f6275503c16fc506ecfc21dc6c63d510e\": rpc error: code = NotFound desc = could not find container \"4277629cbf931544b4d60d41d5f2fc4f6275503c16fc506ecfc21dc6c63d510e\": container with ID starting with 4277629cbf931544b4d60d41d5f2fc4f6275503c16fc506ecfc21dc6c63d510e not found: ID does not exist" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.918185 4681 scope.go:117] "RemoveContainer" containerID="e37f551a7d65dafa3abe9878c4bb6824a07df2a764028de92c6b8110c93d29b9" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.918518 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cfb689747-vscpn"] Nov 23 07:01:49 crc kubenswrapper[4681]: E1123 07:01:49.920209 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e37f551a7d65dafa3abe9878c4bb6824a07df2a764028de92c6b8110c93d29b9\": container with ID starting with e37f551a7d65dafa3abe9878c4bb6824a07df2a764028de92c6b8110c93d29b9 not found: ID does not exist" containerID="e37f551a7d65dafa3abe9878c4bb6824a07df2a764028de92c6b8110c93d29b9" Nov 23 07:01:49 crc 
kubenswrapper[4681]: I1123 07:01:49.920262 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e37f551a7d65dafa3abe9878c4bb6824a07df2a764028de92c6b8110c93d29b9"} err="failed to get container status \"e37f551a7d65dafa3abe9878c4bb6824a07df2a764028de92c6b8110c93d29b9\": rpc error: code = NotFound desc = could not find container \"e37f551a7d65dafa3abe9878c4bb6824a07df2a764028de92c6b8110c93d29b9\": container with ID starting with e37f551a7d65dafa3abe9878c4bb6824a07df2a764028de92c6b8110c93d29b9 not found: ID does not exist" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.920297 4681 scope.go:117] "RemoveContainer" containerID="4277629cbf931544b4d60d41d5f2fc4f6275503c16fc506ecfc21dc6c63d510e" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.921381 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4277629cbf931544b4d60d41d5f2fc4f6275503c16fc506ecfc21dc6c63d510e"} err="failed to get container status \"4277629cbf931544b4d60d41d5f2fc4f6275503c16fc506ecfc21dc6c63d510e\": rpc error: code = NotFound desc = could not find container \"4277629cbf931544b4d60d41d5f2fc4f6275503c16fc506ecfc21dc6c63d510e\": container with ID starting with 4277629cbf931544b4d60d41d5f2fc4f6275503c16fc506ecfc21dc6c63d510e not found: ID does not exist" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.921434 4681 scope.go:117] "RemoveContainer" containerID="e37f551a7d65dafa3abe9878c4bb6824a07df2a764028de92c6b8110c93d29b9" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.924134 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e37f551a7d65dafa3abe9878c4bb6824a07df2a764028de92c6b8110c93d29b9"} err="failed to get container status \"e37f551a7d65dafa3abe9878c4bb6824a07df2a764028de92c6b8110c93d29b9\": rpc error: code = NotFound desc = could not find container \"e37f551a7d65dafa3abe9878c4bb6824a07df2a764028de92c6b8110c93d29b9\": container with ID starting with e37f551a7d65dafa3abe9878c4bb6824a07df2a764028de92c6b8110c93d29b9 not found: ID does not exist" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.934656 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.963501 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.971634 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:01:49 crc kubenswrapper[4681]: E1123 07:01:49.972093 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f0f8d82-e774-42f2-b0a4-df7abb1ce348" containerName="nova-manage" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.972105 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f0f8d82-e774-42f2-b0a4-df7abb1ce348" containerName="nova-manage" Nov 23 07:01:49 crc kubenswrapper[4681]: E1123 07:01:49.972124 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94898d17-ee5b-4035-aff2-db846fcfa5f7" containerName="init" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.972130 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="94898d17-ee5b-4035-aff2-db846fcfa5f7" containerName="init" Nov 23 07:01:49 crc kubenswrapper[4681]: E1123 07:01:49.972147 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f86a06af-5af9-4480-b325-2df7ad2db0ff" containerName="nova-metadata-log" Nov 23 07:01:49 
crc kubenswrapper[4681]: I1123 07:01:49.972155 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f86a06af-5af9-4480-b325-2df7ad2db0ff" containerName="nova-metadata-log" Nov 23 07:01:49 crc kubenswrapper[4681]: E1123 07:01:49.972165 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94898d17-ee5b-4035-aff2-db846fcfa5f7" containerName="dnsmasq-dns" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.972172 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="94898d17-ee5b-4035-aff2-db846fcfa5f7" containerName="dnsmasq-dns" Nov 23 07:01:49 crc kubenswrapper[4681]: E1123 07:01:49.972184 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f86a06af-5af9-4480-b325-2df7ad2db0ff" containerName="nova-metadata-metadata" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.972189 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f86a06af-5af9-4480-b325-2df7ad2db0ff" containerName="nova-metadata-metadata" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.972378 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f0f8d82-e774-42f2-b0a4-df7abb1ce348" containerName="nova-manage" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.972394 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="94898d17-ee5b-4035-aff2-db846fcfa5f7" containerName="dnsmasq-dns" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.972401 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="f86a06af-5af9-4480-b325-2df7ad2db0ff" containerName="nova-metadata-log" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.972414 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="f86a06af-5af9-4480-b325-2df7ad2db0ff" containerName="nova-metadata-metadata" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.973407 4681 util.go:30] "No sandbox for pod can be found. 
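[Editor's note] The repeated "DeleteContainer returned error ... NotFound" entries above are benign: the kubelet retries container removal for IDs that an earlier pass already deleted, and a NotFound answer from the runtime is treated as "already gone" rather than as a failure. A minimal sketch of that decision, assuming a gRPC-style error; removeContainer and its canned error are hypothetical stand-ins, not kubelet code:

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeContainer stands in for the CRI RemoveContainer call; the name
    // and the canned error below are illustrative only.
    func removeContainer(id string) error {
        return status.Errorf(codes.NotFound,
            "could not find container %q: ID does not exist", id)
    }

    func main() {
        id := "e37f551a7d65dafa3abe9878c4bb6824a07df2a764028de92c6b8110c93d29b9"
        if err := removeContainer(id); err != nil {
            if status.Code(err) == codes.NotFound {
                // Already removed by an earlier attempt: cleanup is
                // idempotent, so this is logged and then ignored.
                fmt.Printf("container %.12s... already gone, nothing to do\n", id)
                return
            }
            fmt.Println("real failure:", err)
        }
    }

This is why the same container ID can appear in several "RemoveContainer" / "DeleteContainer returned error" pairs without anything actually being wrong.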
Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.979998 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.980218 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 23 07:01:49 crc kubenswrapper[4681]: I1123 07:01:49.989129 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:01:50 crc kubenswrapper[4681]: I1123 07:01:50.020787 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4892cf49-6ef2-4d78-893a-fa0995817fb9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4892cf49-6ef2-4d78-893a-fa0995817fb9\") " pod="openstack/nova-metadata-0" Nov 23 07:01:50 crc kubenswrapper[4681]: I1123 07:01:50.020844 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vzcz\" (UniqueName: \"kubernetes.io/projected/4892cf49-6ef2-4d78-893a-fa0995817fb9-kube-api-access-2vzcz\") pod \"nova-metadata-0\" (UID: \"4892cf49-6ef2-4d78-893a-fa0995817fb9\") " pod="openstack/nova-metadata-0" Nov 23 07:01:50 crc kubenswrapper[4681]: I1123 07:01:50.020939 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4892cf49-6ef2-4d78-893a-fa0995817fb9-config-data\") pod \"nova-metadata-0\" (UID: \"4892cf49-6ef2-4d78-893a-fa0995817fb9\") " pod="openstack/nova-metadata-0" Nov 23 07:01:50 crc kubenswrapper[4681]: I1123 07:01:50.020984 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4892cf49-6ef2-4d78-893a-fa0995817fb9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4892cf49-6ef2-4d78-893a-fa0995817fb9\") " pod="openstack/nova-metadata-0" Nov 23 07:01:50 crc kubenswrapper[4681]: I1123 07:01:50.021040 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4892cf49-6ef2-4d78-893a-fa0995817fb9-logs\") pod \"nova-metadata-0\" (UID: \"4892cf49-6ef2-4d78-893a-fa0995817fb9\") " pod="openstack/nova-metadata-0" Nov 23 07:01:50 crc kubenswrapper[4681]: I1123 07:01:50.122421 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4892cf49-6ef2-4d78-893a-fa0995817fb9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4892cf49-6ef2-4d78-893a-fa0995817fb9\") " pod="openstack/nova-metadata-0" Nov 23 07:01:50 crc kubenswrapper[4681]: I1123 07:01:50.122494 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vzcz\" (UniqueName: \"kubernetes.io/projected/4892cf49-6ef2-4d78-893a-fa0995817fb9-kube-api-access-2vzcz\") pod \"nova-metadata-0\" (UID: \"4892cf49-6ef2-4d78-893a-fa0995817fb9\") " pod="openstack/nova-metadata-0" Nov 23 07:01:50 crc kubenswrapper[4681]: I1123 07:01:50.122561 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4892cf49-6ef2-4d78-893a-fa0995817fb9-config-data\") pod \"nova-metadata-0\" (UID: \"4892cf49-6ef2-4d78-893a-fa0995817fb9\") " 
pod="openstack/nova-metadata-0" Nov 23 07:01:50 crc kubenswrapper[4681]: I1123 07:01:50.122593 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4892cf49-6ef2-4d78-893a-fa0995817fb9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4892cf49-6ef2-4d78-893a-fa0995817fb9\") " pod="openstack/nova-metadata-0" Nov 23 07:01:50 crc kubenswrapper[4681]: I1123 07:01:50.122629 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4892cf49-6ef2-4d78-893a-fa0995817fb9-logs\") pod \"nova-metadata-0\" (UID: \"4892cf49-6ef2-4d78-893a-fa0995817fb9\") " pod="openstack/nova-metadata-0" Nov 23 07:01:50 crc kubenswrapper[4681]: I1123 07:01:50.122990 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4892cf49-6ef2-4d78-893a-fa0995817fb9-logs\") pod \"nova-metadata-0\" (UID: \"4892cf49-6ef2-4d78-893a-fa0995817fb9\") " pod="openstack/nova-metadata-0" Nov 23 07:01:50 crc kubenswrapper[4681]: I1123 07:01:50.131044 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4892cf49-6ef2-4d78-893a-fa0995817fb9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4892cf49-6ef2-4d78-893a-fa0995817fb9\") " pod="openstack/nova-metadata-0" Nov 23 07:01:50 crc kubenswrapper[4681]: I1123 07:01:50.140926 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vzcz\" (UniqueName: \"kubernetes.io/projected/4892cf49-6ef2-4d78-893a-fa0995817fb9-kube-api-access-2vzcz\") pod \"nova-metadata-0\" (UID: \"4892cf49-6ef2-4d78-893a-fa0995817fb9\") " pod="openstack/nova-metadata-0" Nov 23 07:01:50 crc kubenswrapper[4681]: I1123 07:01:50.140937 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4892cf49-6ef2-4d78-893a-fa0995817fb9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4892cf49-6ef2-4d78-893a-fa0995817fb9\") " pod="openstack/nova-metadata-0" Nov 23 07:01:50 crc kubenswrapper[4681]: I1123 07:01:50.142511 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4892cf49-6ef2-4d78-893a-fa0995817fb9-config-data\") pod \"nova-metadata-0\" (UID: \"4892cf49-6ef2-4d78-893a-fa0995817fb9\") " pod="openstack/nova-metadata-0" Nov 23 07:01:50 crc kubenswrapper[4681]: I1123 07:01:50.300623 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 07:01:50 crc kubenswrapper[4681]: W1123 07:01:50.770714 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4892cf49_6ef2_4d78_893a_fa0995817fb9.slice/crio-fbca7a8a18e12d6eb72f7e00240b8c55371f540d5ae842c419201fbb0790293d WatchSource:0}: Error finding container fbca7a8a18e12d6eb72f7e00240b8c55371f540d5ae842c419201fbb0790293d: Status 404 returned error can't find the container with id fbca7a8a18e12d6eb72f7e00240b8c55371f540d5ae842c419201fbb0790293d Nov 23 07:01:50 crc kubenswrapper[4681]: I1123 07:01:50.771999 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:01:50 crc kubenswrapper[4681]: I1123 07:01:50.847706 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4892cf49-6ef2-4d78-893a-fa0995817fb9","Type":"ContainerStarted","Data":"fbca7a8a18e12d6eb72f7e00240b8c55371f540d5ae842c419201fbb0790293d"} Nov 23 07:01:51 crc kubenswrapper[4681]: I1123 07:01:51.264901 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94898d17-ee5b-4035-aff2-db846fcfa5f7" path="/var/lib/kubelet/pods/94898d17-ee5b-4035-aff2-db846fcfa5f7/volumes" Nov 23 07:01:51 crc kubenswrapper[4681]: I1123 07:01:51.266218 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f86a06af-5af9-4480-b325-2df7ad2db0ff" path="/var/lib/kubelet/pods/f86a06af-5af9-4480-b325-2df7ad2db0ff/volumes" Nov 23 07:01:51 crc kubenswrapper[4681]: I1123 07:01:51.858151 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4892cf49-6ef2-4d78-893a-fa0995817fb9","Type":"ContainerStarted","Data":"97f8dd3b8af4925d7749059d643561b37bda8710fc203183ebfddd7aeb2ea887"} Nov 23 07:01:51 crc kubenswrapper[4681]: I1123 07:01:51.858206 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4892cf49-6ef2-4d78-893a-fa0995817fb9","Type":"ContainerStarted","Data":"786fac5f4a787db447c7f95567133c2d6dfaf25b3b9c4541890385554a0c1d80"} Nov 23 07:01:51 crc kubenswrapper[4681]: I1123 07:01:51.878860 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.878845662 podStartE2EDuration="2.878845662s" podCreationTimestamp="2025-11-23 07:01:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:01:51.876634465 +0000 UTC m=+1048.946143702" watchObservedRunningTime="2025-11-23 07:01:51.878845662 +0000 UTC m=+1048.948354899" Nov 23 07:01:52 crc kubenswrapper[4681]: I1123 07:01:52.197929 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 23 07:01:52 crc kubenswrapper[4681]: E1123 07:01:52.697517 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e1b2279ce57a9a4d9659318ad5044c75cf561826b599d45f39f2b94b53cc2dc8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 07:01:52 crc kubenswrapper[4681]: E1123 07:01:52.699582 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="e1b2279ce57a9a4d9659318ad5044c75cf561826b599d45f39f2b94b53cc2dc8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 07:01:52 crc kubenswrapper[4681]: E1123 07:01:52.701065 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e1b2279ce57a9a4d9659318ad5044c75cf561826b599d45f39f2b94b53cc2dc8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 07:01:52 crc kubenswrapper[4681]: E1123 07:01:52.701167 4681 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="6f1a5523-438a-46bb-b55e-3d34d2ae1a4f" containerName="nova-scheduler-scheduler" Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.404337 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.519799 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f1a5523-438a-46bb-b55e-3d34d2ae1a4f-combined-ca-bundle\") pod \"6f1a5523-438a-46bb-b55e-3d34d2ae1a4f\" (UID: \"6f1a5523-438a-46bb-b55e-3d34d2ae1a4f\") " Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.520317 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f1a5523-438a-46bb-b55e-3d34d2ae1a4f-config-data\") pod \"6f1a5523-438a-46bb-b55e-3d34d2ae1a4f\" (UID: \"6f1a5523-438a-46bb-b55e-3d34d2ae1a4f\") " Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.520447 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4nmg\" (UniqueName: \"kubernetes.io/projected/6f1a5523-438a-46bb-b55e-3d34d2ae1a4f-kube-api-access-k4nmg\") pod \"6f1a5523-438a-46bb-b55e-3d34d2ae1a4f\" (UID: \"6f1a5523-438a-46bb-b55e-3d34d2ae1a4f\") " Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.541573 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f1a5523-438a-46bb-b55e-3d34d2ae1a4f-kube-api-access-k4nmg" (OuterVolumeSpecName: "kube-api-access-k4nmg") pod "6f1a5523-438a-46bb-b55e-3d34d2ae1a4f" (UID: "6f1a5523-438a-46bb-b55e-3d34d2ae1a4f"). InnerVolumeSpecName "kube-api-access-k4nmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.553899 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f1a5523-438a-46bb-b55e-3d34d2ae1a4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f1a5523-438a-46bb-b55e-3d34d2ae1a4f" (UID: "6f1a5523-438a-46bb-b55e-3d34d2ae1a4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.560078 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f1a5523-438a-46bb-b55e-3d34d2ae1a4f-config-data" (OuterVolumeSpecName: "config-data") pod "6f1a5523-438a-46bb-b55e-3d34d2ae1a4f" (UID: "6f1a5523-438a-46bb-b55e-3d34d2ae1a4f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.622845 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f1a5523-438a-46bb-b55e-3d34d2ae1a4f-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.622998 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4nmg\" (UniqueName: \"kubernetes.io/projected/6f1a5523-438a-46bb-b55e-3d34d2ae1a4f-kube-api-access-k4nmg\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.623062 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f1a5523-438a-46bb-b55e-3d34d2ae1a4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:54 crc kubenswrapper[4681]: E1123 07:01:54.631114 4681 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b8c8397_3882_47df_9ba8_47f43dfed573.slice/crio-72c9a75408f807371311bc93923eee84d8aad0b045a1f2c13a4f88efcb795646.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b8c8397_3882_47df_9ba8_47f43dfed573.slice/crio-conmon-72c9a75408f807371311bc93923eee84d8aad0b045a1f2c13a4f88efcb795646.scope\": RecentStats: unable to find data in memory cache]" Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.838623 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.895984 4681 generic.go:334] "Generic (PLEG): container finished" podID="0b8c8397-3882-47df-9ba8-47f43dfed573" containerID="72c9a75408f807371311bc93923eee84d8aad0b045a1f2c13a4f88efcb795646" exitCode=0 Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.896131 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0b8c8397-3882-47df-9ba8-47f43dfed573","Type":"ContainerDied","Data":"72c9a75408f807371311bc93923eee84d8aad0b045a1f2c13a4f88efcb795646"} Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.896186 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0b8c8397-3882-47df-9ba8-47f43dfed573","Type":"ContainerDied","Data":"9ba094801bf341badffb28d053483581d2e5e904b77e3524fbeb785d5e1b803f"} Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.896209 4681 scope.go:117] "RemoveContainer" containerID="72c9a75408f807371311bc93923eee84d8aad0b045a1f2c13a4f88efcb795646" Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.896482 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.900021 4681 generic.go:334] "Generic (PLEG): container finished" podID="6f1a5523-438a-46bb-b55e-3d34d2ae1a4f" containerID="e1b2279ce57a9a4d9659318ad5044c75cf561826b599d45f39f2b94b53cc2dc8" exitCode=0 Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.900092 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6f1a5523-438a-46bb-b55e-3d34d2ae1a4f","Type":"ContainerDied","Data":"e1b2279ce57a9a4d9659318ad5044c75cf561826b599d45f39f2b94b53cc2dc8"} Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.900124 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6f1a5523-438a-46bb-b55e-3d34d2ae1a4f","Type":"ContainerDied","Data":"8fcc9b6e54f8717e83ed81b4215e24a0f6118489cee7a9a3d3095b16a1b0c1fb"} Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.900204 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.919887 4681 scope.go:117] "RemoveContainer" containerID="fba6e20a5ec7aae0d6aae9a271232d472d8fc5b61d1a4100f21dcc9dad3132fc" Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.935580 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b8c8397-3882-47df-9ba8-47f43dfed573-config-data\") pod \"0b8c8397-3882-47df-9ba8-47f43dfed573\" (UID: \"0b8c8397-3882-47df-9ba8-47f43dfed573\") " Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.935878 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b8c8397-3882-47df-9ba8-47f43dfed573-combined-ca-bundle\") pod \"0b8c8397-3882-47df-9ba8-47f43dfed573\" (UID: \"0b8c8397-3882-47df-9ba8-47f43dfed573\") " Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.935933 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b8c8397-3882-47df-9ba8-47f43dfed573-logs\") pod \"0b8c8397-3882-47df-9ba8-47f43dfed573\" (UID: \"0b8c8397-3882-47df-9ba8-47f43dfed573\") " Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.936033 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pd8qp\" (UniqueName: \"kubernetes.io/projected/0b8c8397-3882-47df-9ba8-47f43dfed573-kube-api-access-pd8qp\") pod \"0b8c8397-3882-47df-9ba8-47f43dfed573\" (UID: \"0b8c8397-3882-47df-9ba8-47f43dfed573\") " Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.943908 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b8c8397-3882-47df-9ba8-47f43dfed573-logs" (OuterVolumeSpecName: "logs") pod "0b8c8397-3882-47df-9ba8-47f43dfed573" (UID: "0b8c8397-3882-47df-9ba8-47f43dfed573"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.944957 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b8c8397-3882-47df-9ba8-47f43dfed573-kube-api-access-pd8qp" (OuterVolumeSpecName: "kube-api-access-pd8qp") pod "0b8c8397-3882-47df-9ba8-47f43dfed573" (UID: "0b8c8397-3882-47df-9ba8-47f43dfed573"). InnerVolumeSpecName "kube-api-access-pd8qp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.970898 4681 scope.go:117] "RemoveContainer" containerID="72c9a75408f807371311bc93923eee84d8aad0b045a1f2c13a4f88efcb795646" Nov 23 07:01:54 crc kubenswrapper[4681]: E1123 07:01:54.971324 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72c9a75408f807371311bc93923eee84d8aad0b045a1f2c13a4f88efcb795646\": container with ID starting with 72c9a75408f807371311bc93923eee84d8aad0b045a1f2c13a4f88efcb795646 not found: ID does not exist" containerID="72c9a75408f807371311bc93923eee84d8aad0b045a1f2c13a4f88efcb795646" Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.971359 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72c9a75408f807371311bc93923eee84d8aad0b045a1f2c13a4f88efcb795646"} err="failed to get container status \"72c9a75408f807371311bc93923eee84d8aad0b045a1f2c13a4f88efcb795646\": rpc error: code = NotFound desc = could not find container \"72c9a75408f807371311bc93923eee84d8aad0b045a1f2c13a4f88efcb795646\": container with ID starting with 72c9a75408f807371311bc93923eee84d8aad0b045a1f2c13a4f88efcb795646 not found: ID does not exist" Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.971409 4681 scope.go:117] "RemoveContainer" containerID="fba6e20a5ec7aae0d6aae9a271232d472d8fc5b61d1a4100f21dcc9dad3132fc" Nov 23 07:01:54 crc kubenswrapper[4681]: E1123 07:01:54.972224 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fba6e20a5ec7aae0d6aae9a271232d472d8fc5b61d1a4100f21dcc9dad3132fc\": container with ID starting with fba6e20a5ec7aae0d6aae9a271232d472d8fc5b61d1a4100f21dcc9dad3132fc not found: ID does not exist" containerID="fba6e20a5ec7aae0d6aae9a271232d472d8fc5b61d1a4100f21dcc9dad3132fc" Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.972259 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fba6e20a5ec7aae0d6aae9a271232d472d8fc5b61d1a4100f21dcc9dad3132fc"} err="failed to get container status \"fba6e20a5ec7aae0d6aae9a271232d472d8fc5b61d1a4100f21dcc9dad3132fc\": rpc error: code = NotFound desc = could not find container \"fba6e20a5ec7aae0d6aae9a271232d472d8fc5b61d1a4100f21dcc9dad3132fc\": container with ID starting with fba6e20a5ec7aae0d6aae9a271232d472d8fc5b61d1a4100f21dcc9dad3132fc not found: ID does not exist" Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.972278 4681 scope.go:117] "RemoveContainer" containerID="e1b2279ce57a9a4d9659318ad5044c75cf561826b599d45f39f2b94b53cc2dc8" Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.973236 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b8c8397-3882-47df-9ba8-47f43dfed573-config-data" (OuterVolumeSpecName: "config-data") pod "0b8c8397-3882-47df-9ba8-47f43dfed573" (UID: "0b8c8397-3882-47df-9ba8-47f43dfed573"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.982733 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.986395 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b8c8397-3882-47df-9ba8-47f43dfed573-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b8c8397-3882-47df-9ba8-47f43dfed573" (UID: "0b8c8397-3882-47df-9ba8-47f43dfed573"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.997174 4681 scope.go:117] "RemoveContainer" containerID="e1b2279ce57a9a4d9659318ad5044c75cf561826b599d45f39f2b94b53cc2dc8" Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.997498 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:01:54 crc kubenswrapper[4681]: E1123 07:01:54.998199 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1b2279ce57a9a4d9659318ad5044c75cf561826b599d45f39f2b94b53cc2dc8\": container with ID starting with e1b2279ce57a9a4d9659318ad5044c75cf561826b599d45f39f2b94b53cc2dc8 not found: ID does not exist" containerID="e1b2279ce57a9a4d9659318ad5044c75cf561826b599d45f39f2b94b53cc2dc8" Nov 23 07:01:54 crc kubenswrapper[4681]: I1123 07:01:54.998990 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1b2279ce57a9a4d9659318ad5044c75cf561826b599d45f39f2b94b53cc2dc8"} err="failed to get container status \"e1b2279ce57a9a4d9659318ad5044c75cf561826b599d45f39f2b94b53cc2dc8\": rpc error: code = NotFound desc = could not find container \"e1b2279ce57a9a4d9659318ad5044c75cf561826b599d45f39f2b94b53cc2dc8\": container with ID starting with e1b2279ce57a9a4d9659318ad5044c75cf561826b599d45f39f2b94b53cc2dc8 not found: ID does not exist" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.007779 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:01:55 crc kubenswrapper[4681]: E1123 07:01:55.008276 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b8c8397-3882-47df-9ba8-47f43dfed573" containerName="nova-api-api" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.008298 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b8c8397-3882-47df-9ba8-47f43dfed573" containerName="nova-api-api" Nov 23 07:01:55 crc kubenswrapper[4681]: E1123 07:01:55.008332 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f1a5523-438a-46bb-b55e-3d34d2ae1a4f" containerName="nova-scheduler-scheduler" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.008339 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f1a5523-438a-46bb-b55e-3d34d2ae1a4f" containerName="nova-scheduler-scheduler" Nov 23 07:01:55 crc kubenswrapper[4681]: E1123 07:01:55.008348 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b8c8397-3882-47df-9ba8-47f43dfed573" containerName="nova-api-log" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.008355 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b8c8397-3882-47df-9ba8-47f43dfed573" containerName="nova-api-log" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.008590 4681 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0b8c8397-3882-47df-9ba8-47f43dfed573" containerName="nova-api-api" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.008615 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b8c8397-3882-47df-9ba8-47f43dfed573" containerName="nova-api-log" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.008635 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f1a5523-438a-46bb-b55e-3d34d2ae1a4f" containerName="nova-scheduler-scheduler" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.009522 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.011454 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.016024 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.040777 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pd8qp\" (UniqueName: \"kubernetes.io/projected/0b8c8397-3882-47df-9ba8-47f43dfed573-kube-api-access-pd8qp\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.040816 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b8c8397-3882-47df-9ba8-47f43dfed573-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.040833 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b8c8397-3882-47df-9ba8-47f43dfed573-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.040846 4681 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b8c8397-3882-47df-9ba8-47f43dfed573-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.143706 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2tgv\" (UniqueName: \"kubernetes.io/projected/baf0217f-0783-4d59-81bf-a745d255e69b-kube-api-access-g2tgv\") pod \"nova-scheduler-0\" (UID: \"baf0217f-0783-4d59-81bf-a745d255e69b\") " pod="openstack/nova-scheduler-0" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.143748 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baf0217f-0783-4d59-81bf-a745d255e69b-config-data\") pod \"nova-scheduler-0\" (UID: \"baf0217f-0783-4d59-81bf-a745d255e69b\") " pod="openstack/nova-scheduler-0" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.143778 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baf0217f-0783-4d59-81bf-a745d255e69b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"baf0217f-0783-4d59-81bf-a745d255e69b\") " pod="openstack/nova-scheduler-0" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.230255 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.237818 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 
07:01:55.247679 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2tgv\" (UniqueName: \"kubernetes.io/projected/baf0217f-0783-4d59-81bf-a745d255e69b-kube-api-access-g2tgv\") pod \"nova-scheduler-0\" (UID: \"baf0217f-0783-4d59-81bf-a745d255e69b\") " pod="openstack/nova-scheduler-0" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.247737 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baf0217f-0783-4d59-81bf-a745d255e69b-config-data\") pod \"nova-scheduler-0\" (UID: \"baf0217f-0783-4d59-81bf-a745d255e69b\") " pod="openstack/nova-scheduler-0" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.247773 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baf0217f-0783-4d59-81bf-a745d255e69b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"baf0217f-0783-4d59-81bf-a745d255e69b\") " pod="openstack/nova-scheduler-0" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.263702 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baf0217f-0783-4d59-81bf-a745d255e69b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"baf0217f-0783-4d59-81bf-a745d255e69b\") " pod="openstack/nova-scheduler-0" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.269057 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baf0217f-0783-4d59-81bf-a745d255e69b-config-data\") pod \"nova-scheduler-0\" (UID: \"baf0217f-0783-4d59-81bf-a745d255e69b\") " pod="openstack/nova-scheduler-0" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.270517 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2tgv\" (UniqueName: \"kubernetes.io/projected/baf0217f-0783-4d59-81bf-a745d255e69b-kube-api-access-g2tgv\") pod \"nova-scheduler-0\" (UID: \"baf0217f-0783-4d59-81bf-a745d255e69b\") " pod="openstack/nova-scheduler-0" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.277801 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b8c8397-3882-47df-9ba8-47f43dfed573" path="/var/lib/kubelet/pods/0b8c8397-3882-47df-9ba8-47f43dfed573/volumes" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.278749 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f1a5523-438a-46bb-b55e-3d34d2ae1a4f" path="/var/lib/kubelet/pods/6f1a5523-438a-46bb-b55e-3d34d2ae1a4f/volumes" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.279645 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.283159 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.288529 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.294235 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.301778 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.303679 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.324988 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.451793 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4359f19c-c1cc-4539-8010-40b4941cddbc-logs\") pod \"nova-api-0\" (UID: \"4359f19c-c1cc-4539-8010-40b4941cddbc\") " pod="openstack/nova-api-0" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.451869 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssp7b\" (UniqueName: \"kubernetes.io/projected/4359f19c-c1cc-4539-8010-40b4941cddbc-kube-api-access-ssp7b\") pod \"nova-api-0\" (UID: \"4359f19c-c1cc-4539-8010-40b4941cddbc\") " pod="openstack/nova-api-0" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.451979 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4359f19c-c1cc-4539-8010-40b4941cddbc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4359f19c-c1cc-4539-8010-40b4941cddbc\") " pod="openstack/nova-api-0" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.452045 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4359f19c-c1cc-4539-8010-40b4941cddbc-config-data\") pod \"nova-api-0\" (UID: \"4359f19c-c1cc-4539-8010-40b4941cddbc\") " pod="openstack/nova-api-0" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.554563 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4359f19c-c1cc-4539-8010-40b4941cddbc-logs\") pod \"nova-api-0\" (UID: \"4359f19c-c1cc-4539-8010-40b4941cddbc\") " pod="openstack/nova-api-0" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.554644 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssp7b\" (UniqueName: \"kubernetes.io/projected/4359f19c-c1cc-4539-8010-40b4941cddbc-kube-api-access-ssp7b\") pod \"nova-api-0\" (UID: \"4359f19c-c1cc-4539-8010-40b4941cddbc\") " pod="openstack/nova-api-0" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.554735 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4359f19c-c1cc-4539-8010-40b4941cddbc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4359f19c-c1cc-4539-8010-40b4941cddbc\") " pod="openstack/nova-api-0" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.554811 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/4359f19c-c1cc-4539-8010-40b4941cddbc-config-data\") pod \"nova-api-0\" (UID: \"4359f19c-c1cc-4539-8010-40b4941cddbc\") " pod="openstack/nova-api-0" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.555304 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4359f19c-c1cc-4539-8010-40b4941cddbc-logs\") pod \"nova-api-0\" (UID: \"4359f19c-c1cc-4539-8010-40b4941cddbc\") " pod="openstack/nova-api-0" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.559170 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4359f19c-c1cc-4539-8010-40b4941cddbc-config-data\") pod \"nova-api-0\" (UID: \"4359f19c-c1cc-4539-8010-40b4941cddbc\") " pod="openstack/nova-api-0" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.559865 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4359f19c-c1cc-4539-8010-40b4941cddbc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4359f19c-c1cc-4539-8010-40b4941cddbc\") " pod="openstack/nova-api-0" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.569865 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssp7b\" (UniqueName: \"kubernetes.io/projected/4359f19c-c1cc-4539-8010-40b4941cddbc-kube-api-access-ssp7b\") pod \"nova-api-0\" (UID: \"4359f19c-c1cc-4539-8010-40b4941cddbc\") " pod="openstack/nova-api-0" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.650804 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.759740 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:01:55 crc kubenswrapper[4681]: W1123 07:01:55.770297 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbaf0217f_0783_4d59_81bf_a745d255e69b.slice/crio-c11ce482233dc601c122e48b395bcebf9a943de403db72a8e2e55199f920416e WatchSource:0}: Error finding container c11ce482233dc601c122e48b395bcebf9a943de403db72a8e2e55199f920416e: Status 404 returned error can't find the container with id c11ce482233dc601c122e48b395bcebf9a943de403db72a8e2e55199f920416e Nov 23 07:01:55 crc kubenswrapper[4681]: I1123 07:01:55.919759 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"baf0217f-0783-4d59-81bf-a745d255e69b","Type":"ContainerStarted","Data":"c11ce482233dc601c122e48b395bcebf9a943de403db72a8e2e55199f920416e"} Nov 23 07:01:56 crc kubenswrapper[4681]: I1123 07:01:56.089300 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:01:56 crc kubenswrapper[4681]: W1123 07:01:56.096926 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4359f19c_c1cc_4539_8010_40b4941cddbc.slice/crio-4708b48130b04d134f0138316e65b27ea9dc526f7d3b121a8d8d78a5aa469ea4 WatchSource:0}: Error finding container 4708b48130b04d134f0138316e65b27ea9dc526f7d3b121a8d8d78a5aa469ea4: Status 404 returned error can't find the container with id 4708b48130b04d134f0138316e65b27ea9dc526f7d3b121a8d8d78a5aa469ea4 Nov 23 07:01:56 crc kubenswrapper[4681]: I1123 07:01:56.937290 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-scheduler-0" event={"ID":"baf0217f-0783-4d59-81bf-a745d255e69b","Type":"ContainerStarted","Data":"d2e64aacc312c9ee77a9a4736a8b06c74b4e07be2ffa0b03067a6ef1c99485ec"} Nov 23 07:01:56 crc kubenswrapper[4681]: I1123 07:01:56.940475 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4359f19c-c1cc-4539-8010-40b4941cddbc","Type":"ContainerStarted","Data":"0d4347ecfabd41d98bc3c9020e041886accc5db6b5636e79f046b472137fd18c"} Nov 23 07:01:56 crc kubenswrapper[4681]: I1123 07:01:56.940506 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4359f19c-c1cc-4539-8010-40b4941cddbc","Type":"ContainerStarted","Data":"d55fcdb1c25a808f965308d469055207610ce4266005118aac0eb98dac284165"} Nov 23 07:01:56 crc kubenswrapper[4681]: I1123 07:01:56.940518 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4359f19c-c1cc-4539-8010-40b4941cddbc","Type":"ContainerStarted","Data":"4708b48130b04d134f0138316e65b27ea9dc526f7d3b121a8d8d78a5aa469ea4"} Nov 23 07:01:56 crc kubenswrapper[4681]: I1123 07:01:56.963576 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.963562056 podStartE2EDuration="2.963562056s" podCreationTimestamp="2025-11-23 07:01:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:01:56.959259368 +0000 UTC m=+1054.028768605" watchObservedRunningTime="2025-11-23 07:01:56.963562056 +0000 UTC m=+1054.033071294" Nov 23 07:01:56 crc kubenswrapper[4681]: I1123 07:01:56.988650 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.988618956 podStartE2EDuration="1.988618956s" podCreationTimestamp="2025-11-23 07:01:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:01:56.974150962 +0000 UTC m=+1054.043660198" watchObservedRunningTime="2025-11-23 07:01:56.988618956 +0000 UTC m=+1054.058128193" Nov 23 07:01:58 crc kubenswrapper[4681]: I1123 07:01:58.776567 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 23 07:02:00 crc kubenswrapper[4681]: I1123 07:02:00.301085 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 23 07:02:00 crc kubenswrapper[4681]: I1123 07:02:00.301532 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 23 07:02:00 crc kubenswrapper[4681]: I1123 07:02:00.326437 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 23 07:02:01 crc kubenswrapper[4681]: I1123 07:02:01.314648 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4892cf49-6ef2-4d78-893a-fa0995817fb9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.211:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 23 07:02:01 crc kubenswrapper[4681]: I1123 07:02:01.314674 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4892cf49-6ef2-4d78-893a-fa0995817fb9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.211:8775/\": net/http: request 
canceled (Client.Timeout exceeded while awaiting headers)" Nov 23 07:02:02 crc kubenswrapper[4681]: I1123 07:02:02.215163 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 07:02:02 crc kubenswrapper[4681]: I1123 07:02:02.215901 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="e4a72c64-9f8e-4403-b7e6-d78132e69cec" containerName="kube-state-metrics" containerID="cri-o://2a6ffbe9c2e45f27d58a5d0cbd7df61b6170219e446c77a294b3407a8171fc6f" gracePeriod=30 Nov 23 07:02:02 crc kubenswrapper[4681]: I1123 07:02:02.684085 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 23 07:02:02 crc kubenswrapper[4681]: I1123 07:02:02.831239 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhccv\" (UniqueName: \"kubernetes.io/projected/e4a72c64-9f8e-4403-b7e6-d78132e69cec-kube-api-access-dhccv\") pod \"e4a72c64-9f8e-4403-b7e6-d78132e69cec\" (UID: \"e4a72c64-9f8e-4403-b7e6-d78132e69cec\") " Nov 23 07:02:02 crc kubenswrapper[4681]: I1123 07:02:02.842121 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4a72c64-9f8e-4403-b7e6-d78132e69cec-kube-api-access-dhccv" (OuterVolumeSpecName: "kube-api-access-dhccv") pod "e4a72c64-9f8e-4403-b7e6-d78132e69cec" (UID: "e4a72c64-9f8e-4403-b7e6-d78132e69cec"). InnerVolumeSpecName "kube-api-access-dhccv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:02:02 crc kubenswrapper[4681]: I1123 07:02:02.935575 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhccv\" (UniqueName: \"kubernetes.io/projected/e4a72c64-9f8e-4403-b7e6-d78132e69cec-kube-api-access-dhccv\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:03 crc kubenswrapper[4681]: I1123 07:02:03.011610 4681 generic.go:334] "Generic (PLEG): container finished" podID="e4a72c64-9f8e-4403-b7e6-d78132e69cec" containerID="2a6ffbe9c2e45f27d58a5d0cbd7df61b6170219e446c77a294b3407a8171fc6f" exitCode=2 Nov 23 07:02:03 crc kubenswrapper[4681]: I1123 07:02:03.011658 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e4a72c64-9f8e-4403-b7e6-d78132e69cec","Type":"ContainerDied","Data":"2a6ffbe9c2e45f27d58a5d0cbd7df61b6170219e446c77a294b3407a8171fc6f"} Nov 23 07:02:03 crc kubenswrapper[4681]: I1123 07:02:03.011699 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e4a72c64-9f8e-4403-b7e6-d78132e69cec","Type":"ContainerDied","Data":"a5c4f60dd2bdc8b600f11687a1349731e08ced99b4111489f5c2a241b24349e4"} Nov 23 07:02:03 crc kubenswrapper[4681]: I1123 07:02:03.011716 4681 scope.go:117] "RemoveContainer" containerID="2a6ffbe9c2e45f27d58a5d0cbd7df61b6170219e446c77a294b3407a8171fc6f" Nov 23 07:02:03 crc kubenswrapper[4681]: I1123 07:02:03.011854 4681 util.go:48] "No ready sandbox for pod can be found. 
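[Editor's note] The startup-probe failures above are plain HTTPS GETs against the pod IP with a client-side timeout; "Client.Timeout exceeded while awaiting headers" is the error Go's http.Client produces when no response header arrives in time (the exact prefix wording varies by Go version). A minimal reproduction; outside the cluster the address from the log is unreachable, so the call will typically time out or fail to connect:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 1 * time.Second, // probe-style deadline
            Transport: &http.Transport{
                // Probes commonly skip verification of the pod's cert chain.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        // 10.217.0.211:8775 is the pod address from the log above.
        _, err := client.Get("https://10.217.0.211:8775/")
        fmt.Println(err)
    }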
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 23 07:02:03 crc kubenswrapper[4681]: I1123 07:02:03.048057 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 07:02:03 crc kubenswrapper[4681]: I1123 07:02:03.049528 4681 scope.go:117] "RemoveContainer" containerID="2a6ffbe9c2e45f27d58a5d0cbd7df61b6170219e446c77a294b3407a8171fc6f" Nov 23 07:02:03 crc kubenswrapper[4681]: E1123 07:02:03.050015 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a6ffbe9c2e45f27d58a5d0cbd7df61b6170219e446c77a294b3407a8171fc6f\": container with ID starting with 2a6ffbe9c2e45f27d58a5d0cbd7df61b6170219e446c77a294b3407a8171fc6f not found: ID does not exist" containerID="2a6ffbe9c2e45f27d58a5d0cbd7df61b6170219e446c77a294b3407a8171fc6f" Nov 23 07:02:03 crc kubenswrapper[4681]: I1123 07:02:03.050056 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a6ffbe9c2e45f27d58a5d0cbd7df61b6170219e446c77a294b3407a8171fc6f"} err="failed to get container status \"2a6ffbe9c2e45f27d58a5d0cbd7df61b6170219e446c77a294b3407a8171fc6f\": rpc error: code = NotFound desc = could not find container \"2a6ffbe9c2e45f27d58a5d0cbd7df61b6170219e446c77a294b3407a8171fc6f\": container with ID starting with 2a6ffbe9c2e45f27d58a5d0cbd7df61b6170219e446c77a294b3407a8171fc6f not found: ID does not exist" Nov 23 07:02:03 crc kubenswrapper[4681]: I1123 07:02:03.055970 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 07:02:03 crc kubenswrapper[4681]: I1123 07:02:03.067615 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 07:02:03 crc kubenswrapper[4681]: E1123 07:02:03.068219 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4a72c64-9f8e-4403-b7e6-d78132e69cec" containerName="kube-state-metrics" Nov 23 07:02:03 crc kubenswrapper[4681]: I1123 07:02:03.068243 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4a72c64-9f8e-4403-b7e6-d78132e69cec" containerName="kube-state-metrics" Nov 23 07:02:03 crc kubenswrapper[4681]: I1123 07:02:03.068497 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4a72c64-9f8e-4403-b7e6-d78132e69cec" containerName="kube-state-metrics" Nov 23 07:02:03 crc kubenswrapper[4681]: I1123 07:02:03.069517 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 23 07:02:03 crc kubenswrapper[4681]: I1123 07:02:03.071887 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 23 07:02:03 crc kubenswrapper[4681]: I1123 07:02:03.072100 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 23 07:02:03 crc kubenswrapper[4681]: I1123 07:02:03.078974 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 07:02:03 crc kubenswrapper[4681]: I1123 07:02:03.139669 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/e4e326f0-ed0b-45ae-b771-45132298af15-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"e4e326f0-ed0b-45ae-b771-45132298af15\") " pod="openstack/kube-state-metrics-0" Nov 23 07:02:03 crc kubenswrapper[4681]: I1123 07:02:03.139743 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4e326f0-ed0b-45ae-b771-45132298af15-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"e4e326f0-ed0b-45ae-b771-45132298af15\") " pod="openstack/kube-state-metrics-0" Nov 23 07:02:03 crc kubenswrapper[4681]: I1123 07:02:03.139825 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwq9f\" (UniqueName: \"kubernetes.io/projected/e4e326f0-ed0b-45ae-b771-45132298af15-kube-api-access-bwq9f\") pod \"kube-state-metrics-0\" (UID: \"e4e326f0-ed0b-45ae-b771-45132298af15\") " pod="openstack/kube-state-metrics-0" Nov 23 07:02:03 crc kubenswrapper[4681]: I1123 07:02:03.139905 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4e326f0-ed0b-45ae-b771-45132298af15-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"e4e326f0-ed0b-45ae-b771-45132298af15\") " pod="openstack/kube-state-metrics-0" Nov 23 07:02:03 crc kubenswrapper[4681]: I1123 07:02:03.242535 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwq9f\" (UniqueName: \"kubernetes.io/projected/e4e326f0-ed0b-45ae-b771-45132298af15-kube-api-access-bwq9f\") pod \"kube-state-metrics-0\" (UID: \"e4e326f0-ed0b-45ae-b771-45132298af15\") " pod="openstack/kube-state-metrics-0" Nov 23 07:02:03 crc kubenswrapper[4681]: I1123 07:02:03.242624 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4e326f0-ed0b-45ae-b771-45132298af15-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"e4e326f0-ed0b-45ae-b771-45132298af15\") " pod="openstack/kube-state-metrics-0" Nov 23 07:02:03 crc kubenswrapper[4681]: I1123 07:02:03.242734 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/e4e326f0-ed0b-45ae-b771-45132298af15-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"e4e326f0-ed0b-45ae-b771-45132298af15\") " pod="openstack/kube-state-metrics-0" Nov 23 07:02:03 crc kubenswrapper[4681]: I1123 07:02:03.242776 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/e4e326f0-ed0b-45ae-b771-45132298af15-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"e4e326f0-ed0b-45ae-b771-45132298af15\") " pod="openstack/kube-state-metrics-0" Nov 23 07:02:03 crc kubenswrapper[4681]: I1123 07:02:03.250072 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4e326f0-ed0b-45ae-b771-45132298af15-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"e4e326f0-ed0b-45ae-b771-45132298af15\") " pod="openstack/kube-state-metrics-0" Nov 23 07:02:03 crc kubenswrapper[4681]: I1123 07:02:03.251377 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4e326f0-ed0b-45ae-b771-45132298af15-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"e4e326f0-ed0b-45ae-b771-45132298af15\") " pod="openstack/kube-state-metrics-0" Nov 23 07:02:03 crc kubenswrapper[4681]: I1123 07:02:03.264884 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/e4e326f0-ed0b-45ae-b771-45132298af15-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"e4e326f0-ed0b-45ae-b771-45132298af15\") " pod="openstack/kube-state-metrics-0" Nov 23 07:02:03 crc kubenswrapper[4681]: I1123 07:02:03.272440 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4a72c64-9f8e-4403-b7e6-d78132e69cec" path="/var/lib/kubelet/pods/e4a72c64-9f8e-4403-b7e6-d78132e69cec/volumes" Nov 23 07:02:03 crc kubenswrapper[4681]: I1123 07:02:03.275134 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwq9f\" (UniqueName: \"kubernetes.io/projected/e4e326f0-ed0b-45ae-b771-45132298af15-kube-api-access-bwq9f\") pod \"kube-state-metrics-0\" (UID: \"e4e326f0-ed0b-45ae-b771-45132298af15\") " pod="openstack/kube-state-metrics-0" Nov 23 07:02:03 crc kubenswrapper[4681]: I1123 07:02:03.389808 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 23 07:02:03 crc kubenswrapper[4681]: I1123 07:02:03.849373 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 07:02:03 crc kubenswrapper[4681]: I1123 07:02:03.892091 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:02:03 crc kubenswrapper[4681]: I1123 07:02:03.892526 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5adbedf4-bd97-43af-a48b-b5e10ebff5b0" containerName="ceilometer-central-agent" containerID="cri-o://a6e222594f4977c200ed4bf1e9723e4bac8df0eea3b24a21747416e475df6cf1" gracePeriod=30 Nov 23 07:02:03 crc kubenswrapper[4681]: I1123 07:02:03.892642 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5adbedf4-bd97-43af-a48b-b5e10ebff5b0" containerName="ceilometer-notification-agent" containerID="cri-o://05486c5910d39a9ee7127374ddc4cbf3837e1b78c92f514f5a169140093384af" gracePeriod=30 Nov 23 07:02:03 crc kubenswrapper[4681]: I1123 07:02:03.892588 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5adbedf4-bd97-43af-a48b-b5e10ebff5b0" containerName="sg-core" containerID="cri-o://2e4d35097acf69c4484ccec2dbdd9b7be3c49401dd95618502befc485cadd1a3" gracePeriod=30 Nov 23 07:02:03 crc kubenswrapper[4681]: I1123 07:02:03.892553 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5adbedf4-bd97-43af-a48b-b5e10ebff5b0" containerName="proxy-httpd" containerID="cri-o://1fb61f1d35ca6134ec6c9f256ba80b65c4eea6d4f7f95e5dea58c74eb118d6e6" gracePeriod=30 Nov 23 07:02:04 crc kubenswrapper[4681]: I1123 07:02:04.024375 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e4e326f0-ed0b-45ae-b771-45132298af15","Type":"ContainerStarted","Data":"18c7f6049f6b76ed79ab5f2b06817e9af210c9fc231e1dd5cd844bc50cf855d0"} Nov 23 07:02:04 crc kubenswrapper[4681]: I1123 07:02:04.026921 4681 generic.go:334] "Generic (PLEG): container finished" podID="5adbedf4-bd97-43af-a48b-b5e10ebff5b0" containerID="2e4d35097acf69c4484ccec2dbdd9b7be3c49401dd95618502befc485cadd1a3" exitCode=2 Nov 23 07:02:04 crc kubenswrapper[4681]: I1123 07:02:04.027050 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5adbedf4-bd97-43af-a48b-b5e10ebff5b0","Type":"ContainerDied","Data":"2e4d35097acf69c4484ccec2dbdd9b7be3c49401dd95618502befc485cadd1a3"} Nov 23 07:02:05 crc kubenswrapper[4681]: I1123 07:02:05.047058 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5adbedf4-bd97-43af-a48b-b5e10ebff5b0","Type":"ContainerDied","Data":"1fb61f1d35ca6134ec6c9f256ba80b65c4eea6d4f7f95e5dea58c74eb118d6e6"} Nov 23 07:02:05 crc kubenswrapper[4681]: I1123 07:02:05.046998 4681 generic.go:334] "Generic (PLEG): container finished" podID="5adbedf4-bd97-43af-a48b-b5e10ebff5b0" containerID="1fb61f1d35ca6134ec6c9f256ba80b65c4eea6d4f7f95e5dea58c74eb118d6e6" exitCode=0 Nov 23 07:02:05 crc kubenswrapper[4681]: I1123 07:02:05.047797 4681 generic.go:334] "Generic (PLEG): container finished" podID="5adbedf4-bd97-43af-a48b-b5e10ebff5b0" containerID="a6e222594f4977c200ed4bf1e9723e4bac8df0eea3b24a21747416e475df6cf1" exitCode=0 Nov 23 07:02:05 crc kubenswrapper[4681]: I1123 07:02:05.047862 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"5adbedf4-bd97-43af-a48b-b5e10ebff5b0","Type":"ContainerDied","Data":"a6e222594f4977c200ed4bf1e9723e4bac8df0eea3b24a21747416e475df6cf1"} Nov 23 07:02:05 crc kubenswrapper[4681]: I1123 07:02:05.049908 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e4e326f0-ed0b-45ae-b771-45132298af15","Type":"ContainerStarted","Data":"e56692acaa61e1eba9f8821578c0e9a08be22470079824a9fa7ce0fcfb6dcf30"} Nov 23 07:02:05 crc kubenswrapper[4681]: I1123 07:02:05.050202 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 23 07:02:05 crc kubenswrapper[4681]: I1123 07:02:05.325774 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 23 07:02:05 crc kubenswrapper[4681]: I1123 07:02:05.372143 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 23 07:02:05 crc kubenswrapper[4681]: I1123 07:02:05.396698 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.107229646 podStartE2EDuration="2.396661489s" podCreationTimestamp="2025-11-23 07:02:03 +0000 UTC" firstStartedPulling="2025-11-23 07:02:03.859878135 +0000 UTC m=+1060.929387371" lastFinishedPulling="2025-11-23 07:02:04.149309977 +0000 UTC m=+1061.218819214" observedRunningTime="2025-11-23 07:02:05.078928906 +0000 UTC m=+1062.148438132" watchObservedRunningTime="2025-11-23 07:02:05.396661489 +0000 UTC m=+1062.466170727" Nov 23 07:02:05 crc kubenswrapper[4681]: I1123 07:02:05.652424 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 07:02:05 crc kubenswrapper[4681]: I1123 07:02:05.652503 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 07:02:05 crc kubenswrapper[4681]: I1123 07:02:05.806821 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.005785 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-sg-core-conf-yaml\") pod \"5adbedf4-bd97-43af-a48b-b5e10ebff5b0\" (UID: \"5adbedf4-bd97-43af-a48b-b5e10ebff5b0\") " Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.006078 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-scripts\") pod \"5adbedf4-bd97-43af-a48b-b5e10ebff5b0\" (UID: \"5adbedf4-bd97-43af-a48b-b5e10ebff5b0\") " Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.006185 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-config-data\") pod \"5adbedf4-bd97-43af-a48b-b5e10ebff5b0\" (UID: \"5adbedf4-bd97-43af-a48b-b5e10ebff5b0\") " Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.006278 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5vwk\" (UniqueName: \"kubernetes.io/projected/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-kube-api-access-b5vwk\") pod \"5adbedf4-bd97-43af-a48b-b5e10ebff5b0\" (UID: \"5adbedf4-bd97-43af-a48b-b5e10ebff5b0\") " Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.006320 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-run-httpd\") pod \"5adbedf4-bd97-43af-a48b-b5e10ebff5b0\" (UID: \"5adbedf4-bd97-43af-a48b-b5e10ebff5b0\") " Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.006372 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-combined-ca-bundle\") pod \"5adbedf4-bd97-43af-a48b-b5e10ebff5b0\" (UID: \"5adbedf4-bd97-43af-a48b-b5e10ebff5b0\") " Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.006410 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-log-httpd\") pod \"5adbedf4-bd97-43af-a48b-b5e10ebff5b0\" (UID: \"5adbedf4-bd97-43af-a48b-b5e10ebff5b0\") " Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.007097 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5adbedf4-bd97-43af-a48b-b5e10ebff5b0" (UID: "5adbedf4-bd97-43af-a48b-b5e10ebff5b0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.007394 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5adbedf4-bd97-43af-a48b-b5e10ebff5b0" (UID: "5adbedf4-bd97-43af-a48b-b5e10ebff5b0"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.031776 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-scripts" (OuterVolumeSpecName: "scripts") pod "5adbedf4-bd97-43af-a48b-b5e10ebff5b0" (UID: "5adbedf4-bd97-43af-a48b-b5e10ebff5b0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.032565 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-kube-api-access-b5vwk" (OuterVolumeSpecName: "kube-api-access-b5vwk") pod "5adbedf4-bd97-43af-a48b-b5e10ebff5b0" (UID: "5adbedf4-bd97-43af-a48b-b5e10ebff5b0"). InnerVolumeSpecName "kube-api-access-b5vwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.040952 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5adbedf4-bd97-43af-a48b-b5e10ebff5b0" (UID: "5adbedf4-bd97-43af-a48b-b5e10ebff5b0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.061634 4681 generic.go:334] "Generic (PLEG): container finished" podID="5adbedf4-bd97-43af-a48b-b5e10ebff5b0" containerID="05486c5910d39a9ee7127374ddc4cbf3837e1b78c92f514f5a169140093384af" exitCode=0 Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.063350 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.064016 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5adbedf4-bd97-43af-a48b-b5e10ebff5b0","Type":"ContainerDied","Data":"05486c5910d39a9ee7127374ddc4cbf3837e1b78c92f514f5a169140093384af"} Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.064060 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5adbedf4-bd97-43af-a48b-b5e10ebff5b0","Type":"ContainerDied","Data":"003df355a62d9c918a17ce7d5f1372e07689254b58dbdbf67e6e39cb4317afe8"} Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.064084 4681 scope.go:117] "RemoveContainer" containerID="1fb61f1d35ca6134ec6c9f256ba80b65c4eea6d4f7f95e5dea58c74eb118d6e6" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.116934 4681 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.116965 4681 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.116977 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.116989 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5vwk\" (UniqueName: 
\"kubernetes.io/projected/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-kube-api-access-b5vwk\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.116999 4681 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.122646 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.152533 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-config-data" (OuterVolumeSpecName: "config-data") pod "5adbedf4-bd97-43af-a48b-b5e10ebff5b0" (UID: "5adbedf4-bd97-43af-a48b-b5e10ebff5b0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.153528 4681 scope.go:117] "RemoveContainer" containerID="2e4d35097acf69c4484ccec2dbdd9b7be3c49401dd95618502befc485cadd1a3" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.177116 4681 scope.go:117] "RemoveContainer" containerID="05486c5910d39a9ee7127374ddc4cbf3837e1b78c92f514f5a169140093384af" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.180574 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5adbedf4-bd97-43af-a48b-b5e10ebff5b0" (UID: "5adbedf4-bd97-43af-a48b-b5e10ebff5b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.210319 4681 scope.go:117] "RemoveContainer" containerID="a6e222594f4977c200ed4bf1e9723e4bac8df0eea3b24a21747416e475df6cf1" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.219701 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.219733 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5adbedf4-bd97-43af-a48b-b5e10ebff5b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.242407 4681 scope.go:117] "RemoveContainer" containerID="1fb61f1d35ca6134ec6c9f256ba80b65c4eea6d4f7f95e5dea58c74eb118d6e6" Nov 23 07:02:06 crc kubenswrapper[4681]: E1123 07:02:06.242994 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fb61f1d35ca6134ec6c9f256ba80b65c4eea6d4f7f95e5dea58c74eb118d6e6\": container with ID starting with 1fb61f1d35ca6134ec6c9f256ba80b65c4eea6d4f7f95e5dea58c74eb118d6e6 not found: ID does not exist" containerID="1fb61f1d35ca6134ec6c9f256ba80b65c4eea6d4f7f95e5dea58c74eb118d6e6" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.243054 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fb61f1d35ca6134ec6c9f256ba80b65c4eea6d4f7f95e5dea58c74eb118d6e6"} err="failed to get container status \"1fb61f1d35ca6134ec6c9f256ba80b65c4eea6d4f7f95e5dea58c74eb118d6e6\": rpc error: code = NotFound desc = could not find container 
\"1fb61f1d35ca6134ec6c9f256ba80b65c4eea6d4f7f95e5dea58c74eb118d6e6\": container with ID starting with 1fb61f1d35ca6134ec6c9f256ba80b65c4eea6d4f7f95e5dea58c74eb118d6e6 not found: ID does not exist" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.243097 4681 scope.go:117] "RemoveContainer" containerID="2e4d35097acf69c4484ccec2dbdd9b7be3c49401dd95618502befc485cadd1a3" Nov 23 07:02:06 crc kubenswrapper[4681]: E1123 07:02:06.243506 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e4d35097acf69c4484ccec2dbdd9b7be3c49401dd95618502befc485cadd1a3\": container with ID starting with 2e4d35097acf69c4484ccec2dbdd9b7be3c49401dd95618502befc485cadd1a3 not found: ID does not exist" containerID="2e4d35097acf69c4484ccec2dbdd9b7be3c49401dd95618502befc485cadd1a3" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.243559 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e4d35097acf69c4484ccec2dbdd9b7be3c49401dd95618502befc485cadd1a3"} err="failed to get container status \"2e4d35097acf69c4484ccec2dbdd9b7be3c49401dd95618502befc485cadd1a3\": rpc error: code = NotFound desc = could not find container \"2e4d35097acf69c4484ccec2dbdd9b7be3c49401dd95618502befc485cadd1a3\": container with ID starting with 2e4d35097acf69c4484ccec2dbdd9b7be3c49401dd95618502befc485cadd1a3 not found: ID does not exist" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.243595 4681 scope.go:117] "RemoveContainer" containerID="05486c5910d39a9ee7127374ddc4cbf3837e1b78c92f514f5a169140093384af" Nov 23 07:02:06 crc kubenswrapper[4681]: E1123 07:02:06.244250 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05486c5910d39a9ee7127374ddc4cbf3837e1b78c92f514f5a169140093384af\": container with ID starting with 05486c5910d39a9ee7127374ddc4cbf3837e1b78c92f514f5a169140093384af not found: ID does not exist" containerID="05486c5910d39a9ee7127374ddc4cbf3837e1b78c92f514f5a169140093384af" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.244304 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05486c5910d39a9ee7127374ddc4cbf3837e1b78c92f514f5a169140093384af"} err="failed to get container status \"05486c5910d39a9ee7127374ddc4cbf3837e1b78c92f514f5a169140093384af\": rpc error: code = NotFound desc = could not find container \"05486c5910d39a9ee7127374ddc4cbf3837e1b78c92f514f5a169140093384af\": container with ID starting with 05486c5910d39a9ee7127374ddc4cbf3837e1b78c92f514f5a169140093384af not found: ID does not exist" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.244339 4681 scope.go:117] "RemoveContainer" containerID="a6e222594f4977c200ed4bf1e9723e4bac8df0eea3b24a21747416e475df6cf1" Nov 23 07:02:06 crc kubenswrapper[4681]: E1123 07:02:06.244725 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6e222594f4977c200ed4bf1e9723e4bac8df0eea3b24a21747416e475df6cf1\": container with ID starting with a6e222594f4977c200ed4bf1e9723e4bac8df0eea3b24a21747416e475df6cf1 not found: ID does not exist" containerID="a6e222594f4977c200ed4bf1e9723e4bac8df0eea3b24a21747416e475df6cf1" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.244748 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6e222594f4977c200ed4bf1e9723e4bac8df0eea3b24a21747416e475df6cf1"} 
err="failed to get container status \"a6e222594f4977c200ed4bf1e9723e4bac8df0eea3b24a21747416e475df6cf1\": rpc error: code = NotFound desc = could not find container \"a6e222594f4977c200ed4bf1e9723e4bac8df0eea3b24a21747416e475df6cf1\": container with ID starting with a6e222594f4977c200ed4bf1e9723e4bac8df0eea3b24a21747416e475df6cf1 not found: ID does not exist" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.399815 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.407146 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.419719 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:02:06 crc kubenswrapper[4681]: E1123 07:02:06.420265 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5adbedf4-bd97-43af-a48b-b5e10ebff5b0" containerName="ceilometer-central-agent" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.420288 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="5adbedf4-bd97-43af-a48b-b5e10ebff5b0" containerName="ceilometer-central-agent" Nov 23 07:02:06 crc kubenswrapper[4681]: E1123 07:02:06.420314 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5adbedf4-bd97-43af-a48b-b5e10ebff5b0" containerName="proxy-httpd" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.420322 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="5adbedf4-bd97-43af-a48b-b5e10ebff5b0" containerName="proxy-httpd" Nov 23 07:02:06 crc kubenswrapper[4681]: E1123 07:02:06.420337 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5adbedf4-bd97-43af-a48b-b5e10ebff5b0" containerName="ceilometer-notification-agent" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.420343 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="5adbedf4-bd97-43af-a48b-b5e10ebff5b0" containerName="ceilometer-notification-agent" Nov 23 07:02:06 crc kubenswrapper[4681]: E1123 07:02:06.420360 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5adbedf4-bd97-43af-a48b-b5e10ebff5b0" containerName="sg-core" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.420366 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="5adbedf4-bd97-43af-a48b-b5e10ebff5b0" containerName="sg-core" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.420571 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="5adbedf4-bd97-43af-a48b-b5e10ebff5b0" containerName="sg-core" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.420587 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="5adbedf4-bd97-43af-a48b-b5e10ebff5b0" containerName="proxy-httpd" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.420606 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="5adbedf4-bd97-43af-a48b-b5e10ebff5b0" containerName="ceilometer-central-agent" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.420616 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="5adbedf4-bd97-43af-a48b-b5e10ebff5b0" containerName="ceilometer-notification-agent" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.422519 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.424158 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.433269 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.433587 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.442972 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.526139 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-run-httpd\") pod \"ceilometer-0\" (UID: \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\") " pod="openstack/ceilometer-0" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.526187 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\") " pod="openstack/ceilometer-0" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.526215 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\") " pod="openstack/ceilometer-0" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.526246 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-config-data\") pod \"ceilometer-0\" (UID: \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\") " pod="openstack/ceilometer-0" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.526270 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\") " pod="openstack/ceilometer-0" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.526302 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-scripts\") pod \"ceilometer-0\" (UID: \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\") " pod="openstack/ceilometer-0" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.526334 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-log-httpd\") pod \"ceilometer-0\" (UID: \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\") " pod="openstack/ceilometer-0" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.526354 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqs69\" (UniqueName: 
\"kubernetes.io/projected/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-kube-api-access-rqs69\") pod \"ceilometer-0\" (UID: \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\") " pod="openstack/ceilometer-0" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.628693 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-run-httpd\") pod \"ceilometer-0\" (UID: \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\") " pod="openstack/ceilometer-0" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.628741 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\") " pod="openstack/ceilometer-0" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.628762 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\") " pod="openstack/ceilometer-0" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.628797 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-config-data\") pod \"ceilometer-0\" (UID: \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\") " pod="openstack/ceilometer-0" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.628819 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\") " pod="openstack/ceilometer-0" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.628856 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-scripts\") pod \"ceilometer-0\" (UID: \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\") " pod="openstack/ceilometer-0" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.628885 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-log-httpd\") pod \"ceilometer-0\" (UID: \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\") " pod="openstack/ceilometer-0" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.628904 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqs69\" (UniqueName: \"kubernetes.io/projected/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-kube-api-access-rqs69\") pod \"ceilometer-0\" (UID: \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\") " pod="openstack/ceilometer-0" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.629147 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-run-httpd\") pod \"ceilometer-0\" (UID: \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\") " pod="openstack/ceilometer-0" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.629644 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-log-httpd\") pod \"ceilometer-0\" (UID: \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\") " pod="openstack/ceilometer-0" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.632762 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\") " pod="openstack/ceilometer-0" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.633584 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\") " pod="openstack/ceilometer-0" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.634770 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-config-data\") pod \"ceilometer-0\" (UID: \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\") " pod="openstack/ceilometer-0" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.642555 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\") " pod="openstack/ceilometer-0" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.645116 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-scripts\") pod \"ceilometer-0\" (UID: \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\") " pod="openstack/ceilometer-0" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.648324 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqs69\" (UniqueName: \"kubernetes.io/projected/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-kube-api-access-rqs69\") pod \"ceilometer-0\" (UID: \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\") " pod="openstack/ceilometer-0" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.740890 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.746629 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4359f19c-c1cc-4539-8010-40b4941cddbc" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.213:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 07:02:06 crc kubenswrapper[4681]: I1123 07:02:06.746640 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4359f19c-c1cc-4539-8010-40b4941cddbc" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.213:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 07:02:07 crc kubenswrapper[4681]: W1123 07:02:07.268864 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdf1650a_d637_4140_93dd_f50e7f4bf9d3.slice/crio-e51e39a3459fcdd1cb0839bbc4aaa212c6bb862a63c9b33fe403921e68dc6f73 WatchSource:0}: Error finding container e51e39a3459fcdd1cb0839bbc4aaa212c6bb862a63c9b33fe403921e68dc6f73: Status 404 returned error can't find the container with id e51e39a3459fcdd1cb0839bbc4aaa212c6bb862a63c9b33fe403921e68dc6f73 Nov 23 07:02:07 crc kubenswrapper[4681]: I1123 07:02:07.286063 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5adbedf4-bd97-43af-a48b-b5e10ebff5b0" path="/var/lib/kubelet/pods/5adbedf4-bd97-43af-a48b-b5e10ebff5b0/volumes" Nov 23 07:02:07 crc kubenswrapper[4681]: I1123 07:02:07.286839 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:02:08 crc kubenswrapper[4681]: I1123 07:02:08.119612 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdf1650a-d637-4140-93dd-f50e7f4bf9d3","Type":"ContainerStarted","Data":"e51e39a3459fcdd1cb0839bbc4aaa212c6bb862a63c9b33fe403921e68dc6f73"} Nov 23 07:02:09 crc kubenswrapper[4681]: I1123 07:02:09.133009 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdf1650a-d637-4140-93dd-f50e7f4bf9d3","Type":"ContainerStarted","Data":"07b28656439a6476f2d71676ed1948f778d4d820623a32c599c7aa113f9dc98a"} Nov 23 07:02:09 crc kubenswrapper[4681]: I1123 07:02:09.133393 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdf1650a-d637-4140-93dd-f50e7f4bf9d3","Type":"ContainerStarted","Data":"508dff3d39f0fc246ebdbdb7b48b38276e3b76047aa22368062c2e4840af6ee3"} Nov 23 07:02:10 crc kubenswrapper[4681]: I1123 07:02:10.145473 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdf1650a-d637-4140-93dd-f50e7f4bf9d3","Type":"ContainerStarted","Data":"898a5b8e3cd5f46dd817a835ad0454323b159638dec12c6fddd4add76f815e97"} Nov 23 07:02:10 crc kubenswrapper[4681]: I1123 07:02:10.310232 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 23 07:02:10 crc kubenswrapper[4681]: I1123 07:02:10.318283 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 23 07:02:10 crc kubenswrapper[4681]: I1123 07:02:10.323848 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 23 07:02:11 crc kubenswrapper[4681]: I1123 07:02:11.163768 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-metadata-0" Nov 23 07:02:12 crc kubenswrapper[4681]: I1123 07:02:12.170162 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdf1650a-d637-4140-93dd-f50e7f4bf9d3","Type":"ContainerStarted","Data":"64a10f73deb6644383c7764e2f2604c74c0b538a56bdf550ea4fb26fbd18933f"} Nov 23 07:02:12 crc kubenswrapper[4681]: I1123 07:02:12.170248 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 23 07:02:12 crc kubenswrapper[4681]: I1123 07:02:12.196069 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.20828002 podStartE2EDuration="6.196049447s" podCreationTimestamp="2025-11-23 07:02:06 +0000 UTC" firstStartedPulling="2025-11-23 07:02:07.284226901 +0000 UTC m=+1064.353736138" lastFinishedPulling="2025-11-23 07:02:11.271996328 +0000 UTC m=+1068.341505565" observedRunningTime="2025-11-23 07:02:12.19283497 +0000 UTC m=+1069.262344207" watchObservedRunningTime="2025-11-23 07:02:12.196049447 +0000 UTC m=+1069.265558684" Nov 23 07:02:13 crc kubenswrapper[4681]: I1123 07:02:13.403516 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.110294 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.187640 4681 generic.go:334] "Generic (PLEG): container finished" podID="1b15ac91-9e57-4a6a-95df-49c853fcbb12" containerID="2a374971264bdc8a25d2308699dc5ccf6b9f7023733891571d34b29b8c3a1cd6" exitCode=137 Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.188047 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.188102 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1b15ac91-9e57-4a6a-95df-49c853fcbb12","Type":"ContainerDied","Data":"2a374971264bdc8a25d2308699dc5ccf6b9f7023733891571d34b29b8c3a1cd6"} Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.188379 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1b15ac91-9e57-4a6a-95df-49c853fcbb12","Type":"ContainerDied","Data":"6fb72057016eb6c31e5c4c87319f463e3cece1e7bdfe45260cf1ab1f955f08bf"} Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.188416 4681 scope.go:117] "RemoveContainer" containerID="2a374971264bdc8a25d2308699dc5ccf6b9f7023733891571d34b29b8c3a1cd6" Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.212695 4681 scope.go:117] "RemoveContainer" containerID="2a374971264bdc8a25d2308699dc5ccf6b9f7023733891571d34b29b8c3a1cd6" Nov 23 07:02:14 crc kubenswrapper[4681]: E1123 07:02:14.213421 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a374971264bdc8a25d2308699dc5ccf6b9f7023733891571d34b29b8c3a1cd6\": container with ID starting with 2a374971264bdc8a25d2308699dc5ccf6b9f7023733891571d34b29b8c3a1cd6 not found: ID does not exist" containerID="2a374971264bdc8a25d2308699dc5ccf6b9f7023733891571d34b29b8c3a1cd6" Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.213475 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a374971264bdc8a25d2308699dc5ccf6b9f7023733891571d34b29b8c3a1cd6"} err="failed to get container status \"2a374971264bdc8a25d2308699dc5ccf6b9f7023733891571d34b29b8c3a1cd6\": rpc error: code = NotFound desc = could not find container \"2a374971264bdc8a25d2308699dc5ccf6b9f7023733891571d34b29b8c3a1cd6\": container with ID starting with 2a374971264bdc8a25d2308699dc5ccf6b9f7023733891571d34b29b8c3a1cd6 not found: ID does not exist" Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.311026 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b15ac91-9e57-4a6a-95df-49c853fcbb12-combined-ca-bundle\") pod \"1b15ac91-9e57-4a6a-95df-49c853fcbb12\" (UID: \"1b15ac91-9e57-4a6a-95df-49c853fcbb12\") " Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.311113 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b15ac91-9e57-4a6a-95df-49c853fcbb12-config-data\") pod \"1b15ac91-9e57-4a6a-95df-49c853fcbb12\" (UID: \"1b15ac91-9e57-4a6a-95df-49c853fcbb12\") " Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.311137 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfb8n\" (UniqueName: \"kubernetes.io/projected/1b15ac91-9e57-4a6a-95df-49c853fcbb12-kube-api-access-cfb8n\") pod \"1b15ac91-9e57-4a6a-95df-49c853fcbb12\" (UID: \"1b15ac91-9e57-4a6a-95df-49c853fcbb12\") " Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.318309 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b15ac91-9e57-4a6a-95df-49c853fcbb12-kube-api-access-cfb8n" (OuterVolumeSpecName: "kube-api-access-cfb8n") pod "1b15ac91-9e57-4a6a-95df-49c853fcbb12" (UID: "1b15ac91-9e57-4a6a-95df-49c853fcbb12"). 
InnerVolumeSpecName "kube-api-access-cfb8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.342511 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b15ac91-9e57-4a6a-95df-49c853fcbb12-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b15ac91-9e57-4a6a-95df-49c853fcbb12" (UID: "1b15ac91-9e57-4a6a-95df-49c853fcbb12"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.345104 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b15ac91-9e57-4a6a-95df-49c853fcbb12-config-data" (OuterVolumeSpecName: "config-data") pod "1b15ac91-9e57-4a6a-95df-49c853fcbb12" (UID: "1b15ac91-9e57-4a6a-95df-49c853fcbb12"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.414359 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b15ac91-9e57-4a6a-95df-49c853fcbb12-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.414946 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfb8n\" (UniqueName: \"kubernetes.io/projected/1b15ac91-9e57-4a6a-95df-49c853fcbb12-kube-api-access-cfb8n\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.415013 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b15ac91-9e57-4a6a-95df-49c853fcbb12-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.524390 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.534104 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.552696 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 07:02:14 crc kubenswrapper[4681]: E1123 07:02:14.553556 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b15ac91-9e57-4a6a-95df-49c853fcbb12" containerName="nova-cell1-novncproxy-novncproxy" Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.553578 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b15ac91-9e57-4a6a-95df-49c853fcbb12" containerName="nova-cell1-novncproxy-novncproxy" Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.553940 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b15ac91-9e57-4a6a-95df-49c853fcbb12" containerName="nova-cell1-novncproxy-novncproxy" Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.555086 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.556928 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.563196 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.565089 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.565392 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.721554 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0dcb8ee-1a8b-47c7-bb12-a9ed891eb012-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a0dcb8ee-1a8b-47c7-bb12-a9ed891eb012\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.721773 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0dcb8ee-1a8b-47c7-bb12-a9ed891eb012-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a0dcb8ee-1a8b-47c7-bb12-a9ed891eb012\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.721892 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0dcb8ee-1a8b-47c7-bb12-a9ed891eb012-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a0dcb8ee-1a8b-47c7-bb12-a9ed891eb012\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.721999 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0dcb8ee-1a8b-47c7-bb12-a9ed891eb012-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a0dcb8ee-1a8b-47c7-bb12-a9ed891eb012\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.722098 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qwnj\" (UniqueName: \"kubernetes.io/projected/a0dcb8ee-1a8b-47c7-bb12-a9ed891eb012-kube-api-access-5qwnj\") pod \"nova-cell1-novncproxy-0\" (UID: \"a0dcb8ee-1a8b-47c7-bb12-a9ed891eb012\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.824591 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0dcb8ee-1a8b-47c7-bb12-a9ed891eb012-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a0dcb8ee-1a8b-47c7-bb12-a9ed891eb012\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.824648 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0dcb8ee-1a8b-47c7-bb12-a9ed891eb012-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a0dcb8ee-1a8b-47c7-bb12-a9ed891eb012\") " 
pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.824693 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0dcb8ee-1a8b-47c7-bb12-a9ed891eb012-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a0dcb8ee-1a8b-47c7-bb12-a9ed891eb012\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.824729 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qwnj\" (UniqueName: \"kubernetes.io/projected/a0dcb8ee-1a8b-47c7-bb12-a9ed891eb012-kube-api-access-5qwnj\") pod \"nova-cell1-novncproxy-0\" (UID: \"a0dcb8ee-1a8b-47c7-bb12-a9ed891eb012\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.824839 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0dcb8ee-1a8b-47c7-bb12-a9ed891eb012-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a0dcb8ee-1a8b-47c7-bb12-a9ed891eb012\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.828982 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0dcb8ee-1a8b-47c7-bb12-a9ed891eb012-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a0dcb8ee-1a8b-47c7-bb12-a9ed891eb012\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.829313 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0dcb8ee-1a8b-47c7-bb12-a9ed891eb012-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a0dcb8ee-1a8b-47c7-bb12-a9ed891eb012\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.829843 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0dcb8ee-1a8b-47c7-bb12-a9ed891eb012-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a0dcb8ee-1a8b-47c7-bb12-a9ed891eb012\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.832139 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0dcb8ee-1a8b-47c7-bb12-a9ed891eb012-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a0dcb8ee-1a8b-47c7-bb12-a9ed891eb012\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.847982 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qwnj\" (UniqueName: \"kubernetes.io/projected/a0dcb8ee-1a8b-47c7-bb12-a9ed891eb012-kube-api-access-5qwnj\") pod \"nova-cell1-novncproxy-0\" (UID: \"a0dcb8ee-1a8b-47c7-bb12-a9ed891eb012\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:02:14 crc kubenswrapper[4681]: I1123 07:02:14.879511 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:02:15 crc kubenswrapper[4681]: I1123 07:02:15.265957 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b15ac91-9e57-4a6a-95df-49c853fcbb12" path="/var/lib/kubelet/pods/1b15ac91-9e57-4a6a-95df-49c853fcbb12/volumes" Nov 23 07:02:15 crc kubenswrapper[4681]: I1123 07:02:15.655078 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 23 07:02:15 crc kubenswrapper[4681]: I1123 07:02:15.655442 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 23 07:02:15 crc kubenswrapper[4681]: I1123 07:02:15.655839 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 23 07:02:15 crc kubenswrapper[4681]: I1123 07:02:15.655870 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 23 07:02:15 crc kubenswrapper[4681]: I1123 07:02:15.658871 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 23 07:02:15 crc kubenswrapper[4681]: I1123 07:02:15.661530 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 23 07:02:15 crc kubenswrapper[4681]: I1123 07:02:15.817158 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7447b889c5-q9wld"] Nov 23 07:02:15 crc kubenswrapper[4681]: I1123 07:02:15.818735 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7447b889c5-q9wld" Nov 23 07:02:15 crc kubenswrapper[4681]: I1123 07:02:15.854439 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7447b889c5-q9wld"] Nov 23 07:02:15 crc kubenswrapper[4681]: I1123 07:02:15.878625 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 07:02:15 crc kubenswrapper[4681]: I1123 07:02:15.949580 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzzx5\" (UniqueName: \"kubernetes.io/projected/2c2819f3-3efa-41bf-8168-4958cf2bcd15-kube-api-access-nzzx5\") pod \"dnsmasq-dns-7447b889c5-q9wld\" (UID: \"2c2819f3-3efa-41bf-8168-4958cf2bcd15\") " pod="openstack/dnsmasq-dns-7447b889c5-q9wld" Nov 23 07:02:15 crc kubenswrapper[4681]: I1123 07:02:15.949663 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c2819f3-3efa-41bf-8168-4958cf2bcd15-ovsdbserver-nb\") pod \"dnsmasq-dns-7447b889c5-q9wld\" (UID: \"2c2819f3-3efa-41bf-8168-4958cf2bcd15\") " pod="openstack/dnsmasq-dns-7447b889c5-q9wld" Nov 23 07:02:15 crc kubenswrapper[4681]: I1123 07:02:15.949730 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c2819f3-3efa-41bf-8168-4958cf2bcd15-dns-svc\") pod \"dnsmasq-dns-7447b889c5-q9wld\" (UID: \"2c2819f3-3efa-41bf-8168-4958cf2bcd15\") " pod="openstack/dnsmasq-dns-7447b889c5-q9wld" Nov 23 07:02:15 crc kubenswrapper[4681]: I1123 07:02:15.949755 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c2819f3-3efa-41bf-8168-4958cf2bcd15-ovsdbserver-sb\") pod \"dnsmasq-dns-7447b889c5-q9wld\" (UID: \"2c2819f3-3efa-41bf-8168-4958cf2bcd15\") " 
pod="openstack/dnsmasq-dns-7447b889c5-q9wld" Nov 23 07:02:15 crc kubenswrapper[4681]: I1123 07:02:15.949799 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c2819f3-3efa-41bf-8168-4958cf2bcd15-config\") pod \"dnsmasq-dns-7447b889c5-q9wld\" (UID: \"2c2819f3-3efa-41bf-8168-4958cf2bcd15\") " pod="openstack/dnsmasq-dns-7447b889c5-q9wld" Nov 23 07:02:15 crc kubenswrapper[4681]: I1123 07:02:15.949822 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c2819f3-3efa-41bf-8168-4958cf2bcd15-dns-swift-storage-0\") pod \"dnsmasq-dns-7447b889c5-q9wld\" (UID: \"2c2819f3-3efa-41bf-8168-4958cf2bcd15\") " pod="openstack/dnsmasq-dns-7447b889c5-q9wld" Nov 23 07:02:16 crc kubenswrapper[4681]: I1123 07:02:16.058811 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c2819f3-3efa-41bf-8168-4958cf2bcd15-dns-svc\") pod \"dnsmasq-dns-7447b889c5-q9wld\" (UID: \"2c2819f3-3efa-41bf-8168-4958cf2bcd15\") " pod="openstack/dnsmasq-dns-7447b889c5-q9wld" Nov 23 07:02:16 crc kubenswrapper[4681]: I1123 07:02:16.059083 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c2819f3-3efa-41bf-8168-4958cf2bcd15-ovsdbserver-sb\") pod \"dnsmasq-dns-7447b889c5-q9wld\" (UID: \"2c2819f3-3efa-41bf-8168-4958cf2bcd15\") " pod="openstack/dnsmasq-dns-7447b889c5-q9wld" Nov 23 07:02:16 crc kubenswrapper[4681]: I1123 07:02:16.059146 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c2819f3-3efa-41bf-8168-4958cf2bcd15-config\") pod \"dnsmasq-dns-7447b889c5-q9wld\" (UID: \"2c2819f3-3efa-41bf-8168-4958cf2bcd15\") " pod="openstack/dnsmasq-dns-7447b889c5-q9wld" Nov 23 07:02:16 crc kubenswrapper[4681]: I1123 07:02:16.059180 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c2819f3-3efa-41bf-8168-4958cf2bcd15-dns-swift-storage-0\") pod \"dnsmasq-dns-7447b889c5-q9wld\" (UID: \"2c2819f3-3efa-41bf-8168-4958cf2bcd15\") " pod="openstack/dnsmasq-dns-7447b889c5-q9wld" Nov 23 07:02:16 crc kubenswrapper[4681]: I1123 07:02:16.059243 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzzx5\" (UniqueName: \"kubernetes.io/projected/2c2819f3-3efa-41bf-8168-4958cf2bcd15-kube-api-access-nzzx5\") pod \"dnsmasq-dns-7447b889c5-q9wld\" (UID: \"2c2819f3-3efa-41bf-8168-4958cf2bcd15\") " pod="openstack/dnsmasq-dns-7447b889c5-q9wld" Nov 23 07:02:16 crc kubenswrapper[4681]: I1123 07:02:16.059273 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c2819f3-3efa-41bf-8168-4958cf2bcd15-ovsdbserver-nb\") pod \"dnsmasq-dns-7447b889c5-q9wld\" (UID: \"2c2819f3-3efa-41bf-8168-4958cf2bcd15\") " pod="openstack/dnsmasq-dns-7447b889c5-q9wld" Nov 23 07:02:16 crc kubenswrapper[4681]: I1123 07:02:16.059782 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c2819f3-3efa-41bf-8168-4958cf2bcd15-dns-svc\") pod \"dnsmasq-dns-7447b889c5-q9wld\" (UID: \"2c2819f3-3efa-41bf-8168-4958cf2bcd15\") " pod="openstack/dnsmasq-dns-7447b889c5-q9wld" Nov 
23 07:02:16 crc kubenswrapper[4681]: I1123 07:02:16.060166 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c2819f3-3efa-41bf-8168-4958cf2bcd15-ovsdbserver-nb\") pod \"dnsmasq-dns-7447b889c5-q9wld\" (UID: \"2c2819f3-3efa-41bf-8168-4958cf2bcd15\") " pod="openstack/dnsmasq-dns-7447b889c5-q9wld" Nov 23 07:02:16 crc kubenswrapper[4681]: I1123 07:02:16.060502 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c2819f3-3efa-41bf-8168-4958cf2bcd15-config\") pod \"dnsmasq-dns-7447b889c5-q9wld\" (UID: \"2c2819f3-3efa-41bf-8168-4958cf2bcd15\") " pod="openstack/dnsmasq-dns-7447b889c5-q9wld" Nov 23 07:02:16 crc kubenswrapper[4681]: I1123 07:02:16.060850 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c2819f3-3efa-41bf-8168-4958cf2bcd15-dns-swift-storage-0\") pod \"dnsmasq-dns-7447b889c5-q9wld\" (UID: \"2c2819f3-3efa-41bf-8168-4958cf2bcd15\") " pod="openstack/dnsmasq-dns-7447b889c5-q9wld" Nov 23 07:02:16 crc kubenswrapper[4681]: I1123 07:02:16.061439 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c2819f3-3efa-41bf-8168-4958cf2bcd15-ovsdbserver-sb\") pod \"dnsmasq-dns-7447b889c5-q9wld\" (UID: \"2c2819f3-3efa-41bf-8168-4958cf2bcd15\") " pod="openstack/dnsmasq-dns-7447b889c5-q9wld" Nov 23 07:02:16 crc kubenswrapper[4681]: I1123 07:02:16.075156 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzzx5\" (UniqueName: \"kubernetes.io/projected/2c2819f3-3efa-41bf-8168-4958cf2bcd15-kube-api-access-nzzx5\") pod \"dnsmasq-dns-7447b889c5-q9wld\" (UID: \"2c2819f3-3efa-41bf-8168-4958cf2bcd15\") " pod="openstack/dnsmasq-dns-7447b889c5-q9wld" Nov 23 07:02:16 crc kubenswrapper[4681]: I1123 07:02:16.149235 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7447b889c5-q9wld" Nov 23 07:02:16 crc kubenswrapper[4681]: I1123 07:02:16.228261 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a0dcb8ee-1a8b-47c7-bb12-a9ed891eb012","Type":"ContainerStarted","Data":"d7a31f333d763d3386c4096e49b7a0c1f0bc550ce7adb14cdcb81b3bdb2f33f7"} Nov 23 07:02:16 crc kubenswrapper[4681]: I1123 07:02:16.228336 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a0dcb8ee-1a8b-47c7-bb12-a9ed891eb012","Type":"ContainerStarted","Data":"e2c33af254b8e223bc4dfe75f05feee65e97ba4beaf221f1b93be44c8c297172"} Nov 23 07:02:16 crc kubenswrapper[4681]: I1123 07:02:16.249646 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.249628222 podStartE2EDuration="2.249628222s" podCreationTimestamp="2025-11-23 07:02:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:02:16.243579082 +0000 UTC m=+1073.313088320" watchObservedRunningTime="2025-11-23 07:02:16.249628222 +0000 UTC m=+1073.319137459" Nov 23 07:02:16 crc kubenswrapper[4681]: I1123 07:02:16.633106 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7447b889c5-q9wld"] Nov 23 07:02:17 crc kubenswrapper[4681]: I1123 07:02:17.240157 4681 generic.go:334] "Generic (PLEG): container finished" podID="2c2819f3-3efa-41bf-8168-4958cf2bcd15" containerID="3d76f96e0a0ba0b1ec4bd32cbf785edd18e7f79bffefd324e832bdeddb2feb12" exitCode=0 Nov 23 07:02:17 crc kubenswrapper[4681]: I1123 07:02:17.240284 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7447b889c5-q9wld" event={"ID":"2c2819f3-3efa-41bf-8168-4958cf2bcd15","Type":"ContainerDied","Data":"3d76f96e0a0ba0b1ec4bd32cbf785edd18e7f79bffefd324e832bdeddb2feb12"} Nov 23 07:02:17 crc kubenswrapper[4681]: I1123 07:02:17.240910 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7447b889c5-q9wld" event={"ID":"2c2819f3-3efa-41bf-8168-4958cf2bcd15","Type":"ContainerStarted","Data":"727cd8d1121ba2b2004a1a6ff61a1790851b23649e88e6990e60d33178b78f6e"} Nov 23 07:02:18 crc kubenswrapper[4681]: I1123 07:02:18.026929 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:02:18 crc kubenswrapper[4681]: I1123 07:02:18.253800 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4359f19c-c1cc-4539-8010-40b4941cddbc" containerName="nova-api-log" containerID="cri-o://d55fcdb1c25a808f965308d469055207610ce4266005118aac0eb98dac284165" gracePeriod=30 Nov 23 07:02:18 crc kubenswrapper[4681]: I1123 07:02:18.253902 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7447b889c5-q9wld" event={"ID":"2c2819f3-3efa-41bf-8168-4958cf2bcd15","Type":"ContainerStarted","Data":"ce2f3a65421e406710ad7d2f138de92205da051cdef6e405d14f7a9c5d9abfb2"} Nov 23 07:02:18 crc kubenswrapper[4681]: I1123 07:02:18.254341 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4359f19c-c1cc-4539-8010-40b4941cddbc" containerName="nova-api-api" containerID="cri-o://0d4347ecfabd41d98bc3c9020e041886accc5db6b5636e79f046b472137fd18c" gracePeriod=30 Nov 23 07:02:18 crc kubenswrapper[4681]: I1123 07:02:18.254361 4681 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/dnsmasq-dns-7447b889c5-q9wld" Nov 23 07:02:18 crc kubenswrapper[4681]: I1123 07:02:18.288898 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7447b889c5-q9wld" podStartSLOduration=3.288872591 podStartE2EDuration="3.288872591s" podCreationTimestamp="2025-11-23 07:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:02:18.280782696 +0000 UTC m=+1075.350291933" watchObservedRunningTime="2025-11-23 07:02:18.288872591 +0000 UTC m=+1075.358381828" Nov 23 07:02:19 crc kubenswrapper[4681]: I1123 07:02:19.263972 4681 generic.go:334] "Generic (PLEG): container finished" podID="4359f19c-c1cc-4539-8010-40b4941cddbc" containerID="d55fcdb1c25a808f965308d469055207610ce4266005118aac0eb98dac284165" exitCode=143 Nov 23 07:02:19 crc kubenswrapper[4681]: I1123 07:02:19.264977 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4359f19c-c1cc-4539-8010-40b4941cddbc","Type":"ContainerDied","Data":"d55fcdb1c25a808f965308d469055207610ce4266005118aac0eb98dac284165"} Nov 23 07:02:19 crc kubenswrapper[4681]: I1123 07:02:19.469841 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:02:19 crc kubenswrapper[4681]: I1123 07:02:19.470211 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cdf1650a-d637-4140-93dd-f50e7f4bf9d3" containerName="ceilometer-central-agent" containerID="cri-o://508dff3d39f0fc246ebdbdb7b48b38276e3b76047aa22368062c2e4840af6ee3" gracePeriod=30 Nov 23 07:02:19 crc kubenswrapper[4681]: I1123 07:02:19.470320 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cdf1650a-d637-4140-93dd-f50e7f4bf9d3" containerName="sg-core" containerID="cri-o://898a5b8e3cd5f46dd817a835ad0454323b159638dec12c6fddd4add76f815e97" gracePeriod=30 Nov 23 07:02:19 crc kubenswrapper[4681]: I1123 07:02:19.470376 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cdf1650a-d637-4140-93dd-f50e7f4bf9d3" containerName="proxy-httpd" containerID="cri-o://64a10f73deb6644383c7764e2f2604c74c0b538a56bdf550ea4fb26fbd18933f" gracePeriod=30 Nov 23 07:02:19 crc kubenswrapper[4681]: I1123 07:02:19.470387 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cdf1650a-d637-4140-93dd-f50e7f4bf9d3" containerName="ceilometer-notification-agent" containerID="cri-o://07b28656439a6476f2d71676ed1948f778d4d820623a32c599c7aa113f9dc98a" gracePeriod=30 Nov 23 07:02:19 crc kubenswrapper[4681]: I1123 07:02:19.879637 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:02:20 crc kubenswrapper[4681]: I1123 07:02:20.277940 4681 generic.go:334] "Generic (PLEG): container finished" podID="cdf1650a-d637-4140-93dd-f50e7f4bf9d3" containerID="64a10f73deb6644383c7764e2f2604c74c0b538a56bdf550ea4fb26fbd18933f" exitCode=0 Nov 23 07:02:20 crc kubenswrapper[4681]: I1123 07:02:20.278268 4681 generic.go:334] "Generic (PLEG): container finished" podID="cdf1650a-d637-4140-93dd-f50e7f4bf9d3" containerID="898a5b8e3cd5f46dd817a835ad0454323b159638dec12c6fddd4add76f815e97" exitCode=2 Nov 23 07:02:20 crc kubenswrapper[4681]: I1123 07:02:20.278280 4681 generic.go:334] "Generic (PLEG): container finished" 
podID="cdf1650a-d637-4140-93dd-f50e7f4bf9d3" containerID="508dff3d39f0fc246ebdbdb7b48b38276e3b76047aa22368062c2e4840af6ee3" exitCode=0 Nov 23 07:02:20 crc kubenswrapper[4681]: I1123 07:02:20.278027 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdf1650a-d637-4140-93dd-f50e7f4bf9d3","Type":"ContainerDied","Data":"64a10f73deb6644383c7764e2f2604c74c0b538a56bdf550ea4fb26fbd18933f"} Nov 23 07:02:20 crc kubenswrapper[4681]: I1123 07:02:20.278339 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdf1650a-d637-4140-93dd-f50e7f4bf9d3","Type":"ContainerDied","Data":"898a5b8e3cd5f46dd817a835ad0454323b159638dec12c6fddd4add76f815e97"} Nov 23 07:02:20 crc kubenswrapper[4681]: I1123 07:02:20.278356 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdf1650a-d637-4140-93dd-f50e7f4bf9d3","Type":"ContainerDied","Data":"508dff3d39f0fc246ebdbdb7b48b38276e3b76047aa22368062c2e4840af6ee3"} Nov 23 07:02:21 crc kubenswrapper[4681]: I1123 07:02:21.794365 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:02:21 crc kubenswrapper[4681]: I1123 07:02:21.895998 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssp7b\" (UniqueName: \"kubernetes.io/projected/4359f19c-c1cc-4539-8010-40b4941cddbc-kube-api-access-ssp7b\") pod \"4359f19c-c1cc-4539-8010-40b4941cddbc\" (UID: \"4359f19c-c1cc-4539-8010-40b4941cddbc\") " Nov 23 07:02:21 crc kubenswrapper[4681]: I1123 07:02:21.896091 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4359f19c-c1cc-4539-8010-40b4941cddbc-logs\") pod \"4359f19c-c1cc-4539-8010-40b4941cddbc\" (UID: \"4359f19c-c1cc-4539-8010-40b4941cddbc\") " Nov 23 07:02:21 crc kubenswrapper[4681]: I1123 07:02:21.896282 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4359f19c-c1cc-4539-8010-40b4941cddbc-combined-ca-bundle\") pod \"4359f19c-c1cc-4539-8010-40b4941cddbc\" (UID: \"4359f19c-c1cc-4539-8010-40b4941cddbc\") " Nov 23 07:02:21 crc kubenswrapper[4681]: I1123 07:02:21.896443 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4359f19c-c1cc-4539-8010-40b4941cddbc-config-data\") pod \"4359f19c-c1cc-4539-8010-40b4941cddbc\" (UID: \"4359f19c-c1cc-4539-8010-40b4941cddbc\") " Nov 23 07:02:21 crc kubenswrapper[4681]: I1123 07:02:21.901795 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4359f19c-c1cc-4539-8010-40b4941cddbc-logs" (OuterVolumeSpecName: "logs") pod "4359f19c-c1cc-4539-8010-40b4941cddbc" (UID: "4359f19c-c1cc-4539-8010-40b4941cddbc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:02:21 crc kubenswrapper[4681]: I1123 07:02:21.909125 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4359f19c-c1cc-4539-8010-40b4941cddbc-kube-api-access-ssp7b" (OuterVolumeSpecName: "kube-api-access-ssp7b") pod "4359f19c-c1cc-4539-8010-40b4941cddbc" (UID: "4359f19c-c1cc-4539-8010-40b4941cddbc"). InnerVolumeSpecName "kube-api-access-ssp7b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:02:21 crc kubenswrapper[4681]: I1123 07:02:21.925120 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4359f19c-c1cc-4539-8010-40b4941cddbc-config-data" (OuterVolumeSpecName: "config-data") pod "4359f19c-c1cc-4539-8010-40b4941cddbc" (UID: "4359f19c-c1cc-4539-8010-40b4941cddbc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:02:21 crc kubenswrapper[4681]: I1123 07:02:21.935597 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4359f19c-c1cc-4539-8010-40b4941cddbc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4359f19c-c1cc-4539-8010-40b4941cddbc" (UID: "4359f19c-c1cc-4539-8010-40b4941cddbc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.000025 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4359f19c-c1cc-4539-8010-40b4941cddbc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.000058 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4359f19c-c1cc-4539-8010-40b4941cddbc-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.000071 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssp7b\" (UniqueName: \"kubernetes.io/projected/4359f19c-c1cc-4539-8010-40b4941cddbc-kube-api-access-ssp7b\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.000089 4681 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4359f19c-c1cc-4539-8010-40b4941cddbc-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.300765 4681 generic.go:334] "Generic (PLEG): container finished" podID="4359f19c-c1cc-4539-8010-40b4941cddbc" containerID="0d4347ecfabd41d98bc3c9020e041886accc5db6b5636e79f046b472137fd18c" exitCode=0 Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.301046 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4359f19c-c1cc-4539-8010-40b4941cddbc","Type":"ContainerDied","Data":"0d4347ecfabd41d98bc3c9020e041886accc5db6b5636e79f046b472137fd18c"} Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.301079 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4359f19c-c1cc-4539-8010-40b4941cddbc","Type":"ContainerDied","Data":"4708b48130b04d134f0138316e65b27ea9dc526f7d3b121a8d8d78a5aa469ea4"} Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.301098 4681 scope.go:117] "RemoveContainer" containerID="0d4347ecfabd41d98bc3c9020e041886accc5db6b5636e79f046b472137fd18c" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.301241 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.339003 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.348421 4681 scope.go:117] "RemoveContainer" containerID="d55fcdb1c25a808f965308d469055207610ce4266005118aac0eb98dac284165" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.360367 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.376928 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 23 07:02:22 crc kubenswrapper[4681]: E1123 07:02:22.377728 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4359f19c-c1cc-4539-8010-40b4941cddbc" containerName="nova-api-api" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.377749 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4359f19c-c1cc-4539-8010-40b4941cddbc" containerName="nova-api-api" Nov 23 07:02:22 crc kubenswrapper[4681]: E1123 07:02:22.377779 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4359f19c-c1cc-4539-8010-40b4941cddbc" containerName="nova-api-log" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.377785 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4359f19c-c1cc-4539-8010-40b4941cddbc" containerName="nova-api-log" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.377986 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="4359f19c-c1cc-4539-8010-40b4941cddbc" containerName="nova-api-log" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.377997 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="4359f19c-c1cc-4539-8010-40b4941cddbc" containerName="nova-api-api" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.379181 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.381755 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.381974 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.382132 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.382192 4681 scope.go:117] "RemoveContainer" containerID="0d4347ecfabd41d98bc3c9020e041886accc5db6b5636e79f046b472137fd18c" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.386805 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:02:22 crc kubenswrapper[4681]: E1123 07:02:22.389346 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d4347ecfabd41d98bc3c9020e041886accc5db6b5636e79f046b472137fd18c\": container with ID starting with 0d4347ecfabd41d98bc3c9020e041886accc5db6b5636e79f046b472137fd18c not found: ID does not exist" containerID="0d4347ecfabd41d98bc3c9020e041886accc5db6b5636e79f046b472137fd18c" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.389380 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d4347ecfabd41d98bc3c9020e041886accc5db6b5636e79f046b472137fd18c"} err="failed to get container status \"0d4347ecfabd41d98bc3c9020e041886accc5db6b5636e79f046b472137fd18c\": rpc error: code = NotFound desc = could not find container \"0d4347ecfabd41d98bc3c9020e041886accc5db6b5636e79f046b472137fd18c\": container with ID starting with 0d4347ecfabd41d98bc3c9020e041886accc5db6b5636e79f046b472137fd18c not found: ID does not exist" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.389414 4681 scope.go:117] "RemoveContainer" containerID="d55fcdb1c25a808f965308d469055207610ce4266005118aac0eb98dac284165" Nov 23 07:02:22 crc kubenswrapper[4681]: E1123 07:02:22.393159 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d55fcdb1c25a808f965308d469055207610ce4266005118aac0eb98dac284165\": container with ID starting with d55fcdb1c25a808f965308d469055207610ce4266005118aac0eb98dac284165 not found: ID does not exist" containerID="d55fcdb1c25a808f965308d469055207610ce4266005118aac0eb98dac284165" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.393240 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d55fcdb1c25a808f965308d469055207610ce4266005118aac0eb98dac284165"} err="failed to get container status \"d55fcdb1c25a808f965308d469055207610ce4266005118aac0eb98dac284165\": rpc error: code = NotFound desc = could not find container \"d55fcdb1c25a808f965308d469055207610ce4266005118aac0eb98dac284165\": container with ID starting with d55fcdb1c25a808f965308d469055207610ce4266005118aac0eb98dac284165 not found: ID does not exist" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.520729 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7dea7e9-1959-401a-8915-6863b8a3b198-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c7dea7e9-1959-401a-8915-6863b8a3b198\") " pod="openstack/nova-api-0" Nov 23 
07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.520814 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7dea7e9-1959-401a-8915-6863b8a3b198-config-data\") pod \"nova-api-0\" (UID: \"c7dea7e9-1959-401a-8915-6863b8a3b198\") " pod="openstack/nova-api-0" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.520953 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxs87\" (UniqueName: \"kubernetes.io/projected/c7dea7e9-1959-401a-8915-6863b8a3b198-kube-api-access-rxs87\") pod \"nova-api-0\" (UID: \"c7dea7e9-1959-401a-8915-6863b8a3b198\") " pod="openstack/nova-api-0" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.520991 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7dea7e9-1959-401a-8915-6863b8a3b198-logs\") pod \"nova-api-0\" (UID: \"c7dea7e9-1959-401a-8915-6863b8a3b198\") " pod="openstack/nova-api-0" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.521152 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7dea7e9-1959-401a-8915-6863b8a3b198-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c7dea7e9-1959-401a-8915-6863b8a3b198\") " pod="openstack/nova-api-0" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.521213 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7dea7e9-1959-401a-8915-6863b8a3b198-public-tls-certs\") pod \"nova-api-0\" (UID: \"c7dea7e9-1959-401a-8915-6863b8a3b198\") " pod="openstack/nova-api-0" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.623808 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7dea7e9-1959-401a-8915-6863b8a3b198-public-tls-certs\") pod \"nova-api-0\" (UID: \"c7dea7e9-1959-401a-8915-6863b8a3b198\") " pod="openstack/nova-api-0" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.623932 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7dea7e9-1959-401a-8915-6863b8a3b198-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c7dea7e9-1959-401a-8915-6863b8a3b198\") " pod="openstack/nova-api-0" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.623966 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7dea7e9-1959-401a-8915-6863b8a3b198-config-data\") pod \"nova-api-0\" (UID: \"c7dea7e9-1959-401a-8915-6863b8a3b198\") " pod="openstack/nova-api-0" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.624682 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxs87\" (UniqueName: \"kubernetes.io/projected/c7dea7e9-1959-401a-8915-6863b8a3b198-kube-api-access-rxs87\") pod \"nova-api-0\" (UID: \"c7dea7e9-1959-401a-8915-6863b8a3b198\") " pod="openstack/nova-api-0" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.624714 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7dea7e9-1959-401a-8915-6863b8a3b198-logs\") pod \"nova-api-0\" (UID: 
\"c7dea7e9-1959-401a-8915-6863b8a3b198\") " pod="openstack/nova-api-0" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.624783 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7dea7e9-1959-401a-8915-6863b8a3b198-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c7dea7e9-1959-401a-8915-6863b8a3b198\") " pod="openstack/nova-api-0" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.625216 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7dea7e9-1959-401a-8915-6863b8a3b198-logs\") pod \"nova-api-0\" (UID: \"c7dea7e9-1959-401a-8915-6863b8a3b198\") " pod="openstack/nova-api-0" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.628628 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7dea7e9-1959-401a-8915-6863b8a3b198-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c7dea7e9-1959-401a-8915-6863b8a3b198\") " pod="openstack/nova-api-0" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.628744 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7dea7e9-1959-401a-8915-6863b8a3b198-public-tls-certs\") pod \"nova-api-0\" (UID: \"c7dea7e9-1959-401a-8915-6863b8a3b198\") " pod="openstack/nova-api-0" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.633786 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7dea7e9-1959-401a-8915-6863b8a3b198-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c7dea7e9-1959-401a-8915-6863b8a3b198\") " pod="openstack/nova-api-0" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.635302 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7dea7e9-1959-401a-8915-6863b8a3b198-config-data\") pod \"nova-api-0\" (UID: \"c7dea7e9-1959-401a-8915-6863b8a3b198\") " pod="openstack/nova-api-0" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.640046 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxs87\" (UniqueName: \"kubernetes.io/projected/c7dea7e9-1959-401a-8915-6863b8a3b198-kube-api-access-rxs87\") pod \"nova-api-0\" (UID: \"c7dea7e9-1959-401a-8915-6863b8a3b198\") " pod="openstack/nova-api-0" Nov 23 07:02:22 crc kubenswrapper[4681]: I1123 07:02:22.701424 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:02:23 crc kubenswrapper[4681]: I1123 07:02:23.155928 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:02:23 crc kubenswrapper[4681]: I1123 07:02:23.267031 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4359f19c-c1cc-4539-8010-40b4941cddbc" path="/var/lib/kubelet/pods/4359f19c-c1cc-4539-8010-40b4941cddbc/volumes" Nov 23 07:02:23 crc kubenswrapper[4681]: I1123 07:02:23.319220 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7dea7e9-1959-401a-8915-6863b8a3b198","Type":"ContainerStarted","Data":"b0bfbef9752465d84138519f27a21cb7e80c6c47d1eb44c5f8a4969735691036"} Nov 23 07:02:23 crc kubenswrapper[4681]: I1123 07:02:23.319586 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7dea7e9-1959-401a-8915-6863b8a3b198","Type":"ContainerStarted","Data":"eeba0c0008eb61ce0ba6fbd3150c565f03617af912f594164e49492b81b1991d"} Nov 23 07:02:23 crc kubenswrapper[4681]: I1123 07:02:23.866782 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:02:23 crc kubenswrapper[4681]: I1123 07:02:23.957598 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-ceilometer-tls-certs\") pod \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\" (UID: \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\") " Nov 23 07:02:23 crc kubenswrapper[4681]: I1123 07:02:23.957663 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-run-httpd\") pod \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\" (UID: \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\") " Nov 23 07:02:23 crc kubenswrapper[4681]: I1123 07:02:23.957715 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-combined-ca-bundle\") pod \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\" (UID: \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\") " Nov 23 07:02:23 crc kubenswrapper[4681]: I1123 07:02:23.957824 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-scripts\") pod \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\" (UID: \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\") " Nov 23 07:02:23 crc kubenswrapper[4681]: I1123 07:02:23.957905 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqs69\" (UniqueName: \"kubernetes.io/projected/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-kube-api-access-rqs69\") pod \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\" (UID: \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\") " Nov 23 07:02:23 crc kubenswrapper[4681]: I1123 07:02:23.957927 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-log-httpd\") pod \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\" (UID: \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\") " Nov 23 07:02:23 crc kubenswrapper[4681]: I1123 07:02:23.957988 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-config-data\") pod \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\" (UID: \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\") " Nov 23 07:02:23 crc kubenswrapper[4681]: I1123 07:02:23.958009 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-sg-core-conf-yaml\") pod \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\" (UID: \"cdf1650a-d637-4140-93dd-f50e7f4bf9d3\") " Nov 23 07:02:23 crc kubenswrapper[4681]: I1123 07:02:23.958242 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cdf1650a-d637-4140-93dd-f50e7f4bf9d3" (UID: "cdf1650a-d637-4140-93dd-f50e7f4bf9d3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:02:23 crc kubenswrapper[4681]: I1123 07:02:23.958589 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cdf1650a-d637-4140-93dd-f50e7f4bf9d3" (UID: "cdf1650a-d637-4140-93dd-f50e7f4bf9d3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:02:23 crc kubenswrapper[4681]: I1123 07:02:23.958609 4681 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:23 crc kubenswrapper[4681]: I1123 07:02:23.979245 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-scripts" (OuterVolumeSpecName: "scripts") pod "cdf1650a-d637-4140-93dd-f50e7f4bf9d3" (UID: "cdf1650a-d637-4140-93dd-f50e7f4bf9d3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:02:23 crc kubenswrapper[4681]: I1123 07:02:23.979291 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-kube-api-access-rqs69" (OuterVolumeSpecName: "kube-api-access-rqs69") pod "cdf1650a-d637-4140-93dd-f50e7f4bf9d3" (UID: "cdf1650a-d637-4140-93dd-f50e7f4bf9d3"). InnerVolumeSpecName "kube-api-access-rqs69". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:02:23 crc kubenswrapper[4681]: I1123 07:02:23.988749 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cdf1650a-d637-4140-93dd-f50e7f4bf9d3" (UID: "cdf1650a-d637-4140-93dd-f50e7f4bf9d3"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.009862 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "cdf1650a-d637-4140-93dd-f50e7f4bf9d3" (UID: "cdf1650a-d637-4140-93dd-f50e7f4bf9d3"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.031039 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cdf1650a-d637-4140-93dd-f50e7f4bf9d3" (UID: "cdf1650a-d637-4140-93dd-f50e7f4bf9d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.039959 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-config-data" (OuterVolumeSpecName: "config-data") pod "cdf1650a-d637-4140-93dd-f50e7f4bf9d3" (UID: "cdf1650a-d637-4140-93dd-f50e7f4bf9d3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.060139 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.060173 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.060183 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqs69\" (UniqueName: \"kubernetes.io/projected/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-kube-api-access-rqs69\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.060196 4681 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.060206 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.060216 4681 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.060226 4681 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdf1650a-d637-4140-93dd-f50e7f4bf9d3-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.345375 4681 generic.go:334] "Generic (PLEG): container finished" podID="cdf1650a-d637-4140-93dd-f50e7f4bf9d3" containerID="07b28656439a6476f2d71676ed1948f778d4d820623a32c599c7aa113f9dc98a" exitCode=0 Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.345490 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdf1650a-d637-4140-93dd-f50e7f4bf9d3","Type":"ContainerDied","Data":"07b28656439a6476f2d71676ed1948f778d4d820623a32c599c7aa113f9dc98a"} Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.345530 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"cdf1650a-d637-4140-93dd-f50e7f4bf9d3","Type":"ContainerDied","Data":"e51e39a3459fcdd1cb0839bbc4aaa212c6bb862a63c9b33fe403921e68dc6f73"} Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.345549 4681 scope.go:117] "RemoveContainer" containerID="64a10f73deb6644383c7764e2f2604c74c0b538a56bdf550ea4fb26fbd18933f" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.346566 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.347836 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7dea7e9-1959-401a-8915-6863b8a3b198","Type":"ContainerStarted","Data":"02713589a30cad7b7c87ab2325de600de1421ff9efeb6ba8f0aeae42e8440abb"} Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.372052 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.372032422 podStartE2EDuration="2.372032422s" podCreationTimestamp="2025-11-23 07:02:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:02:24.366493975 +0000 UTC m=+1081.436003213" watchObservedRunningTime="2025-11-23 07:02:24.372032422 +0000 UTC m=+1081.441541660" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.372186 4681 scope.go:117] "RemoveContainer" containerID="898a5b8e3cd5f46dd817a835ad0454323b159638dec12c6fddd4add76f815e97" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.391647 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.398014 4681 scope.go:117] "RemoveContainer" containerID="07b28656439a6476f2d71676ed1948f778d4d820623a32c599c7aa113f9dc98a" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.398121 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.414610 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:02:24 crc kubenswrapper[4681]: E1123 07:02:24.415271 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdf1650a-d637-4140-93dd-f50e7f4bf9d3" containerName="proxy-httpd" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.415354 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdf1650a-d637-4140-93dd-f50e7f4bf9d3" containerName="proxy-httpd" Nov 23 07:02:24 crc kubenswrapper[4681]: E1123 07:02:24.415405 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdf1650a-d637-4140-93dd-f50e7f4bf9d3" containerName="ceilometer-central-agent" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.415472 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdf1650a-d637-4140-93dd-f50e7f4bf9d3" containerName="ceilometer-central-agent" Nov 23 07:02:24 crc kubenswrapper[4681]: E1123 07:02:24.415525 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdf1650a-d637-4140-93dd-f50e7f4bf9d3" containerName="ceilometer-notification-agent" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.415566 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdf1650a-d637-4140-93dd-f50e7f4bf9d3" containerName="ceilometer-notification-agent" Nov 23 07:02:24 crc kubenswrapper[4681]: E1123 07:02:24.415609 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdf1650a-d637-4140-93dd-f50e7f4bf9d3" 
containerName="sg-core" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.415657 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdf1650a-d637-4140-93dd-f50e7f4bf9d3" containerName="sg-core" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.416298 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdf1650a-d637-4140-93dd-f50e7f4bf9d3" containerName="proxy-httpd" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.416374 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdf1650a-d637-4140-93dd-f50e7f4bf9d3" containerName="sg-core" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.416433 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdf1650a-d637-4140-93dd-f50e7f4bf9d3" containerName="ceilometer-notification-agent" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.416565 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdf1650a-d637-4140-93dd-f50e7f4bf9d3" containerName="ceilometer-central-agent" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.420522 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.425411 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.425947 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.425937 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.427433 4681 scope.go:117] "RemoveContainer" containerID="508dff3d39f0fc246ebdbdb7b48b38276e3b76047aa22368062c2e4840af6ee3" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.433257 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.463507 4681 scope.go:117] "RemoveContainer" containerID="64a10f73deb6644383c7764e2f2604c74c0b538a56bdf550ea4fb26fbd18933f" Nov 23 07:02:24 crc kubenswrapper[4681]: E1123 07:02:24.464031 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64a10f73deb6644383c7764e2f2604c74c0b538a56bdf550ea4fb26fbd18933f\": container with ID starting with 64a10f73deb6644383c7764e2f2604c74c0b538a56bdf550ea4fb26fbd18933f not found: ID does not exist" containerID="64a10f73deb6644383c7764e2f2604c74c0b538a56bdf550ea4fb26fbd18933f" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.464084 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64a10f73deb6644383c7764e2f2604c74c0b538a56bdf550ea4fb26fbd18933f"} err="failed to get container status \"64a10f73deb6644383c7764e2f2604c74c0b538a56bdf550ea4fb26fbd18933f\": rpc error: code = NotFound desc = could not find container \"64a10f73deb6644383c7764e2f2604c74c0b538a56bdf550ea4fb26fbd18933f\": container with ID starting with 64a10f73deb6644383c7764e2f2604c74c0b538a56bdf550ea4fb26fbd18933f not found: ID does not exist" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.464122 4681 scope.go:117] "RemoveContainer" containerID="898a5b8e3cd5f46dd817a835ad0454323b159638dec12c6fddd4add76f815e97" Nov 23 07:02:24 crc kubenswrapper[4681]: E1123 07:02:24.465616 4681 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"898a5b8e3cd5f46dd817a835ad0454323b159638dec12c6fddd4add76f815e97\": container with ID starting with 898a5b8e3cd5f46dd817a835ad0454323b159638dec12c6fddd4add76f815e97 not found: ID does not exist" containerID="898a5b8e3cd5f46dd817a835ad0454323b159638dec12c6fddd4add76f815e97" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.465645 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"898a5b8e3cd5f46dd817a835ad0454323b159638dec12c6fddd4add76f815e97"} err="failed to get container status \"898a5b8e3cd5f46dd817a835ad0454323b159638dec12c6fddd4add76f815e97\": rpc error: code = NotFound desc = could not find container \"898a5b8e3cd5f46dd817a835ad0454323b159638dec12c6fddd4add76f815e97\": container with ID starting with 898a5b8e3cd5f46dd817a835ad0454323b159638dec12c6fddd4add76f815e97 not found: ID does not exist" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.465667 4681 scope.go:117] "RemoveContainer" containerID="07b28656439a6476f2d71676ed1948f778d4d820623a32c599c7aa113f9dc98a" Nov 23 07:02:24 crc kubenswrapper[4681]: E1123 07:02:24.465975 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07b28656439a6476f2d71676ed1948f778d4d820623a32c599c7aa113f9dc98a\": container with ID starting with 07b28656439a6476f2d71676ed1948f778d4d820623a32c599c7aa113f9dc98a not found: ID does not exist" containerID="07b28656439a6476f2d71676ed1948f778d4d820623a32c599c7aa113f9dc98a" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.466019 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07b28656439a6476f2d71676ed1948f778d4d820623a32c599c7aa113f9dc98a"} err="failed to get container status \"07b28656439a6476f2d71676ed1948f778d4d820623a32c599c7aa113f9dc98a\": rpc error: code = NotFound desc = could not find container \"07b28656439a6476f2d71676ed1948f778d4d820623a32c599c7aa113f9dc98a\": container with ID starting with 07b28656439a6476f2d71676ed1948f778d4d820623a32c599c7aa113f9dc98a not found: ID does not exist" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.466051 4681 scope.go:117] "RemoveContainer" containerID="508dff3d39f0fc246ebdbdb7b48b38276e3b76047aa22368062c2e4840af6ee3" Nov 23 07:02:24 crc kubenswrapper[4681]: E1123 07:02:24.466426 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"508dff3d39f0fc246ebdbdb7b48b38276e3b76047aa22368062c2e4840af6ee3\": container with ID starting with 508dff3d39f0fc246ebdbdb7b48b38276e3b76047aa22368062c2e4840af6ee3 not found: ID does not exist" containerID="508dff3d39f0fc246ebdbdb7b48b38276e3b76047aa22368062c2e4840af6ee3" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.466482 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"508dff3d39f0fc246ebdbdb7b48b38276e3b76047aa22368062c2e4840af6ee3"} err="failed to get container status \"508dff3d39f0fc246ebdbdb7b48b38276e3b76047aa22368062c2e4840af6ee3\": rpc error: code = NotFound desc = could not find container \"508dff3d39f0fc246ebdbdb7b48b38276e3b76047aa22368062c2e4840af6ee3\": container with ID starting with 508dff3d39f0fc246ebdbdb7b48b38276e3b76047aa22368062c2e4840af6ee3 not found: ID does not exist" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.570879 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f6mw\" (UniqueName: \"kubernetes.io/projected/2a145e0f-8702-45d4-a0f5-76f1d7b13a4a-kube-api-access-9f6mw\") pod \"ceilometer-0\" (UID: \"2a145e0f-8702-45d4-a0f5-76f1d7b13a4a\") " pod="openstack/ceilometer-0" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.570970 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2a145e0f-8702-45d4-a0f5-76f1d7b13a4a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2a145e0f-8702-45d4-a0f5-76f1d7b13a4a\") " pod="openstack/ceilometer-0" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.571018 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a145e0f-8702-45d4-a0f5-76f1d7b13a4a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2a145e0f-8702-45d4-a0f5-76f1d7b13a4a\") " pod="openstack/ceilometer-0" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.571075 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a145e0f-8702-45d4-a0f5-76f1d7b13a4a-scripts\") pod \"ceilometer-0\" (UID: \"2a145e0f-8702-45d4-a0f5-76f1d7b13a4a\") " pod="openstack/ceilometer-0" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.571129 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a145e0f-8702-45d4-a0f5-76f1d7b13a4a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2a145e0f-8702-45d4-a0f5-76f1d7b13a4a\") " pod="openstack/ceilometer-0" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.571155 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a145e0f-8702-45d4-a0f5-76f1d7b13a4a-log-httpd\") pod \"ceilometer-0\" (UID: \"2a145e0f-8702-45d4-a0f5-76f1d7b13a4a\") " pod="openstack/ceilometer-0" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.571242 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a145e0f-8702-45d4-a0f5-76f1d7b13a4a-config-data\") pod \"ceilometer-0\" (UID: \"2a145e0f-8702-45d4-a0f5-76f1d7b13a4a\") " pod="openstack/ceilometer-0" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.571519 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a145e0f-8702-45d4-a0f5-76f1d7b13a4a-run-httpd\") pod \"ceilometer-0\" (UID: \"2a145e0f-8702-45d4-a0f5-76f1d7b13a4a\") " pod="openstack/ceilometer-0" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.673953 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a145e0f-8702-45d4-a0f5-76f1d7b13a4a-config-data\") pod \"ceilometer-0\" (UID: \"2a145e0f-8702-45d4-a0f5-76f1d7b13a4a\") " pod="openstack/ceilometer-0" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.674310 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a145e0f-8702-45d4-a0f5-76f1d7b13a4a-run-httpd\") pod \"ceilometer-0\" (UID: 
\"2a145e0f-8702-45d4-a0f5-76f1d7b13a4a\") " pod="openstack/ceilometer-0" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.674422 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f6mw\" (UniqueName: \"kubernetes.io/projected/2a145e0f-8702-45d4-a0f5-76f1d7b13a4a-kube-api-access-9f6mw\") pod \"ceilometer-0\" (UID: \"2a145e0f-8702-45d4-a0f5-76f1d7b13a4a\") " pod="openstack/ceilometer-0" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.674449 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2a145e0f-8702-45d4-a0f5-76f1d7b13a4a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2a145e0f-8702-45d4-a0f5-76f1d7b13a4a\") " pod="openstack/ceilometer-0" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.674494 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a145e0f-8702-45d4-a0f5-76f1d7b13a4a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2a145e0f-8702-45d4-a0f5-76f1d7b13a4a\") " pod="openstack/ceilometer-0" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.674527 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a145e0f-8702-45d4-a0f5-76f1d7b13a4a-scripts\") pod \"ceilometer-0\" (UID: \"2a145e0f-8702-45d4-a0f5-76f1d7b13a4a\") " pod="openstack/ceilometer-0" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.674560 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a145e0f-8702-45d4-a0f5-76f1d7b13a4a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2a145e0f-8702-45d4-a0f5-76f1d7b13a4a\") " pod="openstack/ceilometer-0" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.674576 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a145e0f-8702-45d4-a0f5-76f1d7b13a4a-log-httpd\") pod \"ceilometer-0\" (UID: \"2a145e0f-8702-45d4-a0f5-76f1d7b13a4a\") " pod="openstack/ceilometer-0" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.674807 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a145e0f-8702-45d4-a0f5-76f1d7b13a4a-run-httpd\") pod \"ceilometer-0\" (UID: \"2a145e0f-8702-45d4-a0f5-76f1d7b13a4a\") " pod="openstack/ceilometer-0" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.674918 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a145e0f-8702-45d4-a0f5-76f1d7b13a4a-log-httpd\") pod \"ceilometer-0\" (UID: \"2a145e0f-8702-45d4-a0f5-76f1d7b13a4a\") " pod="openstack/ceilometer-0" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.677870 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a145e0f-8702-45d4-a0f5-76f1d7b13a4a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2a145e0f-8702-45d4-a0f5-76f1d7b13a4a\") " pod="openstack/ceilometer-0" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.677882 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a145e0f-8702-45d4-a0f5-76f1d7b13a4a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"2a145e0f-8702-45d4-a0f5-76f1d7b13a4a\") " pod="openstack/ceilometer-0" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.678296 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2a145e0f-8702-45d4-a0f5-76f1d7b13a4a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2a145e0f-8702-45d4-a0f5-76f1d7b13a4a\") " pod="openstack/ceilometer-0" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.678935 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a145e0f-8702-45d4-a0f5-76f1d7b13a4a-config-data\") pod \"ceilometer-0\" (UID: \"2a145e0f-8702-45d4-a0f5-76f1d7b13a4a\") " pod="openstack/ceilometer-0" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.683105 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a145e0f-8702-45d4-a0f5-76f1d7b13a4a-scripts\") pod \"ceilometer-0\" (UID: \"2a145e0f-8702-45d4-a0f5-76f1d7b13a4a\") " pod="openstack/ceilometer-0" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.688536 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f6mw\" (UniqueName: \"kubernetes.io/projected/2a145e0f-8702-45d4-a0f5-76f1d7b13a4a-kube-api-access-9f6mw\") pod \"ceilometer-0\" (UID: \"2a145e0f-8702-45d4-a0f5-76f1d7b13a4a\") " pod="openstack/ceilometer-0" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.737615 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.879854 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:02:24 crc kubenswrapper[4681]: I1123 07:02:24.907384 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:02:25 crc kubenswrapper[4681]: I1123 07:02:25.147252 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:02:25 crc kubenswrapper[4681]: W1123 07:02:25.148858 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a145e0f_8702_45d4_a0f5_76f1d7b13a4a.slice/crio-575243e15ec08d4b23eac088af5d4611b5c259026b6a7f305fbfd253088ad9c1 WatchSource:0}: Error finding container 575243e15ec08d4b23eac088af5d4611b5c259026b6a7f305fbfd253088ad9c1: Status 404 returned error can't find the container with id 575243e15ec08d4b23eac088af5d4611b5c259026b6a7f305fbfd253088ad9c1 Nov 23 07:02:25 crc kubenswrapper[4681]: I1123 07:02:25.262561 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdf1650a-d637-4140-93dd-f50e7f4bf9d3" path="/var/lib/kubelet/pods/cdf1650a-d637-4140-93dd-f50e7f4bf9d3/volumes" Nov 23 07:02:25 crc kubenswrapper[4681]: I1123 07:02:25.363849 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a145e0f-8702-45d4-a0f5-76f1d7b13a4a","Type":"ContainerStarted","Data":"575243e15ec08d4b23eac088af5d4611b5c259026b6a7f305fbfd253088ad9c1"} Nov 23 07:02:25 crc kubenswrapper[4681]: I1123 07:02:25.390267 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:02:25 crc kubenswrapper[4681]: I1123 07:02:25.541613 4681 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell1-cell-mapping-px75c"] Nov 23 07:02:25 crc kubenswrapper[4681]: I1123 07:02:25.543499 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-px75c" Nov 23 07:02:25 crc kubenswrapper[4681]: I1123 07:02:25.545383 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 23 07:02:25 crc kubenswrapper[4681]: I1123 07:02:25.546445 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 23 07:02:25 crc kubenswrapper[4681]: I1123 07:02:25.551699 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-px75c"] Nov 23 07:02:25 crc kubenswrapper[4681]: I1123 07:02:25.596668 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5461efc5-e9c2-4a64-a74d-8db6df47c452-scripts\") pod \"nova-cell1-cell-mapping-px75c\" (UID: \"5461efc5-e9c2-4a64-a74d-8db6df47c452\") " pod="openstack/nova-cell1-cell-mapping-px75c" Nov 23 07:02:25 crc kubenswrapper[4681]: I1123 07:02:25.596726 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5461efc5-e9c2-4a64-a74d-8db6df47c452-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-px75c\" (UID: \"5461efc5-e9c2-4a64-a74d-8db6df47c452\") " pod="openstack/nova-cell1-cell-mapping-px75c" Nov 23 07:02:25 crc kubenswrapper[4681]: I1123 07:02:25.596753 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnthj\" (UniqueName: \"kubernetes.io/projected/5461efc5-e9c2-4a64-a74d-8db6df47c452-kube-api-access-rnthj\") pod \"nova-cell1-cell-mapping-px75c\" (UID: \"5461efc5-e9c2-4a64-a74d-8db6df47c452\") " pod="openstack/nova-cell1-cell-mapping-px75c" Nov 23 07:02:25 crc kubenswrapper[4681]: I1123 07:02:25.596787 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5461efc5-e9c2-4a64-a74d-8db6df47c452-config-data\") pod \"nova-cell1-cell-mapping-px75c\" (UID: \"5461efc5-e9c2-4a64-a74d-8db6df47c452\") " pod="openstack/nova-cell1-cell-mapping-px75c" Nov 23 07:02:25 crc kubenswrapper[4681]: I1123 07:02:25.700557 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5461efc5-e9c2-4a64-a74d-8db6df47c452-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-px75c\" (UID: \"5461efc5-e9c2-4a64-a74d-8db6df47c452\") " pod="openstack/nova-cell1-cell-mapping-px75c" Nov 23 07:02:25 crc kubenswrapper[4681]: I1123 07:02:25.701491 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnthj\" (UniqueName: \"kubernetes.io/projected/5461efc5-e9c2-4a64-a74d-8db6df47c452-kube-api-access-rnthj\") pod \"nova-cell1-cell-mapping-px75c\" (UID: \"5461efc5-e9c2-4a64-a74d-8db6df47c452\") " pod="openstack/nova-cell1-cell-mapping-px75c" Nov 23 07:02:25 crc kubenswrapper[4681]: I1123 07:02:25.701753 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5461efc5-e9c2-4a64-a74d-8db6df47c452-scripts\") pod \"nova-cell1-cell-mapping-px75c\" (UID: \"5461efc5-e9c2-4a64-a74d-8db6df47c452\") " pod="openstack/nova-cell1-cell-mapping-px75c" Nov 
23 07:02:25 crc kubenswrapper[4681]: I1123 07:02:25.701871 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5461efc5-e9c2-4a64-a74d-8db6df47c452-config-data\") pod \"nova-cell1-cell-mapping-px75c\" (UID: \"5461efc5-e9c2-4a64-a74d-8db6df47c452\") " pod="openstack/nova-cell1-cell-mapping-px75c" Nov 23 07:02:25 crc kubenswrapper[4681]: I1123 07:02:25.710141 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5461efc5-e9c2-4a64-a74d-8db6df47c452-scripts\") pod \"nova-cell1-cell-mapping-px75c\" (UID: \"5461efc5-e9c2-4a64-a74d-8db6df47c452\") " pod="openstack/nova-cell1-cell-mapping-px75c" Nov 23 07:02:25 crc kubenswrapper[4681]: I1123 07:02:25.710230 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5461efc5-e9c2-4a64-a74d-8db6df47c452-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-px75c\" (UID: \"5461efc5-e9c2-4a64-a74d-8db6df47c452\") " pod="openstack/nova-cell1-cell-mapping-px75c" Nov 23 07:02:25 crc kubenswrapper[4681]: I1123 07:02:25.713207 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5461efc5-e9c2-4a64-a74d-8db6df47c452-config-data\") pod \"nova-cell1-cell-mapping-px75c\" (UID: \"5461efc5-e9c2-4a64-a74d-8db6df47c452\") " pod="openstack/nova-cell1-cell-mapping-px75c" Nov 23 07:02:25 crc kubenswrapper[4681]: I1123 07:02:25.726044 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnthj\" (UniqueName: \"kubernetes.io/projected/5461efc5-e9c2-4a64-a74d-8db6df47c452-kube-api-access-rnthj\") pod \"nova-cell1-cell-mapping-px75c\" (UID: \"5461efc5-e9c2-4a64-a74d-8db6df47c452\") " pod="openstack/nova-cell1-cell-mapping-px75c" Nov 23 07:02:25 crc kubenswrapper[4681]: I1123 07:02:25.862548 4681 util.go:30] "No sandbox for pod can be found. 
Nov 23 07:02:26 crc kubenswrapper[4681]: I1123 07:02:26.150618 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7447b889c5-q9wld"
Nov 23 07:02:26 crc kubenswrapper[4681]: I1123 07:02:26.223400 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ff89994d9-cs2z8"]
Nov 23 07:02:26 crc kubenswrapper[4681]: I1123 07:02:26.224526 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6ff89994d9-cs2z8" podUID="759adc42-6c4c-4c47-b7d7-ec5eef16623a" containerName="dnsmasq-dns" containerID="cri-o://2d0a328ee937bb96b94c26952a1e44237421a7c7f3b82fa554bde87ac7408a75" gracePeriod=10
Nov 23 07:02:26 crc kubenswrapper[4681]: I1123 07:02:26.309569 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-px75c"]
Nov 23 07:02:26 crc kubenswrapper[4681]: I1123 07:02:26.408652 4681 generic.go:334] "Generic (PLEG): container finished" podID="759adc42-6c4c-4c47-b7d7-ec5eef16623a" containerID="2d0a328ee937bb96b94c26952a1e44237421a7c7f3b82fa554bde87ac7408a75" exitCode=0
Nov 23 07:02:26 crc kubenswrapper[4681]: I1123 07:02:26.408798 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff89994d9-cs2z8" event={"ID":"759adc42-6c4c-4c47-b7d7-ec5eef16623a","Type":"ContainerDied","Data":"2d0a328ee937bb96b94c26952a1e44237421a7c7f3b82fa554bde87ac7408a75"}
Nov 23 07:02:26 crc kubenswrapper[4681]: I1123 07:02:26.415852 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a145e0f-8702-45d4-a0f5-76f1d7b13a4a","Type":"ContainerStarted","Data":"8288b55a5e31a608fe302062fc63f0668479b127c7c84a0b4152dd48dac5093f"}
Nov 23 07:02:26 crc kubenswrapper[4681]: I1123 07:02:26.426147 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-px75c" event={"ID":"5461efc5-e9c2-4a64-a74d-8db6df47c452","Type":"ContainerStarted","Data":"8905226672cb4873b31350dc1eec3d32ab56b98222b0f44422c8bf49250615e0"}
Nov 23 07:02:26 crc kubenswrapper[4681]: I1123 07:02:26.796674 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ff89994d9-cs2z8"
Nov 23 07:02:26 crc kubenswrapper[4681]: I1123 07:02:26.934810 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/759adc42-6c4c-4c47-b7d7-ec5eef16623a-config\") pod \"759adc42-6c4c-4c47-b7d7-ec5eef16623a\" (UID: \"759adc42-6c4c-4c47-b7d7-ec5eef16623a\") "
Nov 23 07:02:26 crc kubenswrapper[4681]: I1123 07:02:26.934930 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/759adc42-6c4c-4c47-b7d7-ec5eef16623a-ovsdbserver-sb\") pod \"759adc42-6c4c-4c47-b7d7-ec5eef16623a\" (UID: \"759adc42-6c4c-4c47-b7d7-ec5eef16623a\") "
Nov 23 07:02:26 crc kubenswrapper[4681]: I1123 07:02:26.935019 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/759adc42-6c4c-4c47-b7d7-ec5eef16623a-dns-swift-storage-0\") pod \"759adc42-6c4c-4c47-b7d7-ec5eef16623a\" (UID: \"759adc42-6c4c-4c47-b7d7-ec5eef16623a\") "
Nov 23 07:02:26 crc kubenswrapper[4681]: I1123 07:02:26.935075 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/759adc42-6c4c-4c47-b7d7-ec5eef16623a-ovsdbserver-nb\") pod \"759adc42-6c4c-4c47-b7d7-ec5eef16623a\" (UID: \"759adc42-6c4c-4c47-b7d7-ec5eef16623a\") "
Nov 23 07:02:26 crc kubenswrapper[4681]: I1123 07:02:26.935287 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/759adc42-6c4c-4c47-b7d7-ec5eef16623a-dns-svc\") pod \"759adc42-6c4c-4c47-b7d7-ec5eef16623a\" (UID: \"759adc42-6c4c-4c47-b7d7-ec5eef16623a\") "
Nov 23 07:02:26 crc kubenswrapper[4681]: I1123 07:02:26.935318 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lb9p\" (UniqueName: \"kubernetes.io/projected/759adc42-6c4c-4c47-b7d7-ec5eef16623a-kube-api-access-9lb9p\") pod \"759adc42-6c4c-4c47-b7d7-ec5eef16623a\" (UID: \"759adc42-6c4c-4c47-b7d7-ec5eef16623a\") "
Nov 23 07:02:26 crc kubenswrapper[4681]: I1123 07:02:26.946170 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/759adc42-6c4c-4c47-b7d7-ec5eef16623a-kube-api-access-9lb9p" (OuterVolumeSpecName: "kube-api-access-9lb9p") pod "759adc42-6c4c-4c47-b7d7-ec5eef16623a" (UID: "759adc42-6c4c-4c47-b7d7-ec5eef16623a"). InnerVolumeSpecName "kube-api-access-9lb9p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:02:26 crc kubenswrapper[4681]: I1123 07:02:26.993182 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/759adc42-6c4c-4c47-b7d7-ec5eef16623a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "759adc42-6c4c-4c47-b7d7-ec5eef16623a" (UID: "759adc42-6c4c-4c47-b7d7-ec5eef16623a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:02:27 crc kubenswrapper[4681]: I1123 07:02:27.013274 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/759adc42-6c4c-4c47-b7d7-ec5eef16623a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "759adc42-6c4c-4c47-b7d7-ec5eef16623a" (UID: "759adc42-6c4c-4c47-b7d7-ec5eef16623a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:02:27 crc kubenswrapper[4681]: I1123 07:02:27.017009 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/759adc42-6c4c-4c47-b7d7-ec5eef16623a-config" (OuterVolumeSpecName: "config") pod "759adc42-6c4c-4c47-b7d7-ec5eef16623a" (UID: "759adc42-6c4c-4c47-b7d7-ec5eef16623a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:02:27 crc kubenswrapper[4681]: I1123 07:02:27.038926 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/759adc42-6c4c-4c47-b7d7-ec5eef16623a-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:27 crc kubenswrapper[4681]: I1123 07:02:27.039069 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/759adc42-6c4c-4c47-b7d7-ec5eef16623a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:27 crc kubenswrapper[4681]: I1123 07:02:27.039142 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/759adc42-6c4c-4c47-b7d7-ec5eef16623a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:27 crc kubenswrapper[4681]: I1123 07:02:27.039205 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lb9p\" (UniqueName: \"kubernetes.io/projected/759adc42-6c4c-4c47-b7d7-ec5eef16623a-kube-api-access-9lb9p\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:27 crc kubenswrapper[4681]: I1123 07:02:27.039501 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/759adc42-6c4c-4c47-b7d7-ec5eef16623a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "759adc42-6c4c-4c47-b7d7-ec5eef16623a" (UID: "759adc42-6c4c-4c47-b7d7-ec5eef16623a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:02:27 crc kubenswrapper[4681]: I1123 07:02:27.039972 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/759adc42-6c4c-4c47-b7d7-ec5eef16623a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "759adc42-6c4c-4c47-b7d7-ec5eef16623a" (UID: "759adc42-6c4c-4c47-b7d7-ec5eef16623a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:02:27 crc kubenswrapper[4681]: I1123 07:02:27.141867 4681 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/759adc42-6c4c-4c47-b7d7-ec5eef16623a-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:27 crc kubenswrapper[4681]: I1123 07:02:27.141911 4681 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/759adc42-6c4c-4c47-b7d7-ec5eef16623a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:27 crc kubenswrapper[4681]: I1123 07:02:27.444397 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a145e0f-8702-45d4-a0f5-76f1d7b13a4a","Type":"ContainerStarted","Data":"1bb06d064375b1fa1ec9949acd3369213a2e30dadd2132fad1fce690ec1d82a1"} Nov 23 07:02:27 crc kubenswrapper[4681]: I1123 07:02:27.446934 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-px75c" event={"ID":"5461efc5-e9c2-4a64-a74d-8db6df47c452","Type":"ContainerStarted","Data":"d8444ffdf1771f0db5e1c5e8e110496dc663776278c07287adcf74f78a448a9f"} Nov 23 07:02:27 crc kubenswrapper[4681]: I1123 07:02:27.457075 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff89994d9-cs2z8" event={"ID":"759adc42-6c4c-4c47-b7d7-ec5eef16623a","Type":"ContainerDied","Data":"a74699858628156d6d4f8bfff5c36cb95953baf11641317842c447f95c8c5dda"} Nov 23 07:02:27 crc kubenswrapper[4681]: I1123 07:02:27.457223 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ff89994d9-cs2z8" Nov 23 07:02:27 crc kubenswrapper[4681]: I1123 07:02:27.457226 4681 scope.go:117] "RemoveContainer" containerID="2d0a328ee937bb96b94c26952a1e44237421a7c7f3b82fa554bde87ac7408a75" Nov 23 07:02:27 crc kubenswrapper[4681]: I1123 07:02:27.484273 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-px75c" podStartSLOduration=2.484251472 podStartE2EDuration="2.484251472s" podCreationTimestamp="2025-11-23 07:02:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:02:27.467589974 +0000 UTC m=+1084.537099212" watchObservedRunningTime="2025-11-23 07:02:27.484251472 +0000 UTC m=+1084.553760710" Nov 23 07:02:27 crc kubenswrapper[4681]: I1123 07:02:27.499033 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ff89994d9-cs2z8"] Nov 23 07:02:27 crc kubenswrapper[4681]: I1123 07:02:27.503859 4681 scope.go:117] "RemoveContainer" containerID="72063394ebbfc079b71a6c6320edfd1851027c67ff111bae146384a315332d8a" Nov 23 07:02:27 crc kubenswrapper[4681]: I1123 07:02:27.516912 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6ff89994d9-cs2z8"] Nov 23 07:02:28 crc kubenswrapper[4681]: I1123 07:02:28.471113 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a145e0f-8702-45d4-a0f5-76f1d7b13a4a","Type":"ContainerStarted","Data":"289b9faa6eb15dfca021afb7363c359748ad17c47d530ea074811e2d9fb8dfd8"} Nov 23 07:02:29 crc kubenswrapper[4681]: I1123 07:02:29.263826 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="759adc42-6c4c-4c47-b7d7-ec5eef16623a" path="/var/lib/kubelet/pods/759adc42-6c4c-4c47-b7d7-ec5eef16623a/volumes" Nov 23 07:02:29 crc kubenswrapper[4681]: 
Nov 23 07:02:29 crc kubenswrapper[4681]: I1123 07:02:29.484842 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 23 07:02:29 crc kubenswrapper[4681]: I1123 07:02:29.505969 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.016981295 podStartE2EDuration="5.505943564s" podCreationTimestamp="2025-11-23 07:02:24 +0000 UTC" firstStartedPulling="2025-11-23 07:02:25.151533265 +0000 UTC m=+1082.221042502" lastFinishedPulling="2025-11-23 07:02:28.640495534 +0000 UTC m=+1085.710004771" observedRunningTime="2025-11-23 07:02:29.502306901 +0000 UTC m=+1086.571816128" watchObservedRunningTime="2025-11-23 07:02:29.505943564 +0000 UTC m=+1086.575452801"
Nov 23 07:02:31 crc kubenswrapper[4681]: I1123 07:02:31.522546 4681 generic.go:334] "Generic (PLEG): container finished" podID="5461efc5-e9c2-4a64-a74d-8db6df47c452" containerID="d8444ffdf1771f0db5e1c5e8e110496dc663776278c07287adcf74f78a448a9f" exitCode=0
Nov 23 07:02:31 crc kubenswrapper[4681]: I1123 07:02:31.523041 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-px75c" event={"ID":"5461efc5-e9c2-4a64-a74d-8db6df47c452","Type":"ContainerDied","Data":"d8444ffdf1771f0db5e1c5e8e110496dc663776278c07287adcf74f78a448a9f"}
Nov 23 07:02:32 crc kubenswrapper[4681]: I1123 07:02:32.702366 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 23 07:02:32 crc kubenswrapper[4681]: I1123 07:02:32.705796 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 23 07:02:32 crc kubenswrapper[4681]: I1123 07:02:32.855098 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-px75c"
Nov 23 07:02:32 crc kubenswrapper[4681]: I1123 07:02:32.971195 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5461efc5-e9c2-4a64-a74d-8db6df47c452-config-data\") pod \"5461efc5-e9c2-4a64-a74d-8db6df47c452\" (UID: \"5461efc5-e9c2-4a64-a74d-8db6df47c452\") "
Nov 23 07:02:32 crc kubenswrapper[4681]: I1123 07:02:32.971328 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5461efc5-e9c2-4a64-a74d-8db6df47c452-scripts\") pod \"5461efc5-e9c2-4a64-a74d-8db6df47c452\" (UID: \"5461efc5-e9c2-4a64-a74d-8db6df47c452\") "
Nov 23 07:02:32 crc kubenswrapper[4681]: I1123 07:02:32.971374 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnthj\" (UniqueName: \"kubernetes.io/projected/5461efc5-e9c2-4a64-a74d-8db6df47c452-kube-api-access-rnthj\") pod \"5461efc5-e9c2-4a64-a74d-8db6df47c452\" (UID: \"5461efc5-e9c2-4a64-a74d-8db6df47c452\") "
Nov 23 07:02:32 crc kubenswrapper[4681]: I1123 07:02:32.971400 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5461efc5-e9c2-4a64-a74d-8db6df47c452-combined-ca-bundle\") pod \"5461efc5-e9c2-4a64-a74d-8db6df47c452\" (UID: \"5461efc5-e9c2-4a64-a74d-8db6df47c452\") "
Nov 23 07:02:32 crc kubenswrapper[4681]: I1123 07:02:32.979888 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5461efc5-e9c2-4a64-a74d-8db6df47c452-kube-api-access-rnthj" (OuterVolumeSpecName: "kube-api-access-rnthj") pod "5461efc5-e9c2-4a64-a74d-8db6df47c452" (UID: "5461efc5-e9c2-4a64-a74d-8db6df47c452"). InnerVolumeSpecName "kube-api-access-rnthj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:02:32 crc kubenswrapper[4681]: I1123 07:02:32.987599 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5461efc5-e9c2-4a64-a74d-8db6df47c452-scripts" (OuterVolumeSpecName: "scripts") pod "5461efc5-e9c2-4a64-a74d-8db6df47c452" (UID: "5461efc5-e9c2-4a64-a74d-8db6df47c452"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:02:33 crc kubenswrapper[4681]: I1123 07:02:33.007270 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5461efc5-e9c2-4a64-a74d-8db6df47c452-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5461efc5-e9c2-4a64-a74d-8db6df47c452" (UID: "5461efc5-e9c2-4a64-a74d-8db6df47c452"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:02:33 crc kubenswrapper[4681]: I1123 07:02:33.009349 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5461efc5-e9c2-4a64-a74d-8db6df47c452-config-data" (OuterVolumeSpecName: "config-data") pod "5461efc5-e9c2-4a64-a74d-8db6df47c452" (UID: "5461efc5-e9c2-4a64-a74d-8db6df47c452"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:02:33 crc kubenswrapper[4681]: I1123 07:02:33.074425 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5461efc5-e9c2-4a64-a74d-8db6df47c452-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:33 crc kubenswrapper[4681]: I1123 07:02:33.074455 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnthj\" (UniqueName: \"kubernetes.io/projected/5461efc5-e9c2-4a64-a74d-8db6df47c452-kube-api-access-rnthj\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:33 crc kubenswrapper[4681]: I1123 07:02:33.074480 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5461efc5-e9c2-4a64-a74d-8db6df47c452-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:33 crc kubenswrapper[4681]: I1123 07:02:33.074488 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5461efc5-e9c2-4a64-a74d-8db6df47c452-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:33 crc kubenswrapper[4681]: I1123 07:02:33.542624 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-px75c" Nov 23 07:02:33 crc kubenswrapper[4681]: I1123 07:02:33.543089 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-px75c" event={"ID":"5461efc5-e9c2-4a64-a74d-8db6df47c452","Type":"ContainerDied","Data":"8905226672cb4873b31350dc1eec3d32ab56b98222b0f44422c8bf49250615e0"} Nov 23 07:02:33 crc kubenswrapper[4681]: I1123 07:02:33.543116 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8905226672cb4873b31350dc1eec3d32ab56b98222b0f44422c8bf49250615e0" Nov 23 07:02:33 crc kubenswrapper[4681]: I1123 07:02:33.718602 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c7dea7e9-1959-401a-8915-6863b8a3b198" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.218:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 23 07:02:33 crc kubenswrapper[4681]: I1123 07:02:33.718650 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c7dea7e9-1959-401a-8915-6863b8a3b198" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.218:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 23 07:02:33 crc kubenswrapper[4681]: I1123 07:02:33.726907 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:02:33 crc kubenswrapper[4681]: I1123 07:02:33.732757 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:02:33 crc kubenswrapper[4681]: I1123 07:02:33.739564 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="baf0217f-0783-4d59-81bf-a745d255e69b" containerName="nova-scheduler-scheduler" containerID="cri-o://d2e64aacc312c9ee77a9a4736a8b06c74b4e07be2ffa0b03067a6ef1c99485ec" gracePeriod=30 Nov 23 07:02:33 crc kubenswrapper[4681]: I1123 07:02:33.769098 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:02:33 crc kubenswrapper[4681]: I1123 07:02:33.769330 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" 
podUID="4892cf49-6ef2-4d78-893a-fa0995817fb9" containerName="nova-metadata-log" containerID="cri-o://786fac5f4a787db447c7f95567133c2d6dfaf25b3b9c4541890385554a0c1d80" gracePeriod=30 Nov 23 07:02:33 crc kubenswrapper[4681]: I1123 07:02:33.769759 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4892cf49-6ef2-4d78-893a-fa0995817fb9" containerName="nova-metadata-metadata" containerID="cri-o://97f8dd3b8af4925d7749059d643561b37bda8710fc203183ebfddd7aeb2ea887" gracePeriod=30 Nov 23 07:02:34 crc kubenswrapper[4681]: I1123 07:02:34.551928 4681 generic.go:334] "Generic (PLEG): container finished" podID="4892cf49-6ef2-4d78-893a-fa0995817fb9" containerID="786fac5f4a787db447c7f95567133c2d6dfaf25b3b9c4541890385554a0c1d80" exitCode=143 Nov 23 07:02:34 crc kubenswrapper[4681]: I1123 07:02:34.552013 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4892cf49-6ef2-4d78-893a-fa0995817fb9","Type":"ContainerDied","Data":"786fac5f4a787db447c7f95567133c2d6dfaf25b3b9c4541890385554a0c1d80"} Nov 23 07:02:34 crc kubenswrapper[4681]: I1123 07:02:34.552385 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c7dea7e9-1959-401a-8915-6863b8a3b198" containerName="nova-api-log" containerID="cri-o://b0bfbef9752465d84138519f27a21cb7e80c6c47d1eb44c5f8a4969735691036" gracePeriod=30 Nov 23 07:02:34 crc kubenswrapper[4681]: I1123 07:02:34.552451 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c7dea7e9-1959-401a-8915-6863b8a3b198" containerName="nova-api-api" containerID="cri-o://02713589a30cad7b7c87ab2325de600de1421ff9efeb6ba8f0aeae42e8440abb" gracePeriod=30 Nov 23 07:02:35 crc kubenswrapper[4681]: E1123 07:02:35.327657 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d2e64aacc312c9ee77a9a4736a8b06c74b4e07be2ffa0b03067a6ef1c99485ec" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 07:02:35 crc kubenswrapper[4681]: E1123 07:02:35.331107 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d2e64aacc312c9ee77a9a4736a8b06c74b4e07be2ffa0b03067a6ef1c99485ec" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 07:02:35 crc kubenswrapper[4681]: E1123 07:02:35.334352 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d2e64aacc312c9ee77a9a4736a8b06c74b4e07be2ffa0b03067a6ef1c99485ec" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 07:02:35 crc kubenswrapper[4681]: E1123 07:02:35.334407 4681 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="baf0217f-0783-4d59-81bf-a745d255e69b" containerName="nova-scheduler-scheduler" Nov 23 07:02:35 crc kubenswrapper[4681]: I1123 07:02:35.563505 4681 generic.go:334] "Generic (PLEG): container finished" podID="c7dea7e9-1959-401a-8915-6863b8a3b198" 
containerID="b0bfbef9752465d84138519f27a21cb7e80c6c47d1eb44c5f8a4969735691036" exitCode=143 Nov 23 07:02:35 crc kubenswrapper[4681]: I1123 07:02:35.563560 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7dea7e9-1959-401a-8915-6863b8a3b198","Type":"ContainerDied","Data":"b0bfbef9752465d84138519f27a21cb7e80c6c47d1eb44c5f8a4969735691036"} Nov 23 07:02:36 crc kubenswrapper[4681]: I1123 07:02:36.940035 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="4892cf49-6ef2-4d78-893a-fa0995817fb9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.211:8775/\": read tcp 10.217.0.2:50222->10.217.0.211:8775: read: connection reset by peer" Nov 23 07:02:36 crc kubenswrapper[4681]: I1123 07:02:36.940083 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="4892cf49-6ef2-4d78-893a-fa0995817fb9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.211:8775/\": read tcp 10.217.0.2:50238->10.217.0.211:8775: read: connection reset by peer" Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.324341 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.379747 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4892cf49-6ef2-4d78-893a-fa0995817fb9-logs\") pod \"4892cf49-6ef2-4d78-893a-fa0995817fb9\" (UID: \"4892cf49-6ef2-4d78-893a-fa0995817fb9\") " Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.379802 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vzcz\" (UniqueName: \"kubernetes.io/projected/4892cf49-6ef2-4d78-893a-fa0995817fb9-kube-api-access-2vzcz\") pod \"4892cf49-6ef2-4d78-893a-fa0995817fb9\" (UID: \"4892cf49-6ef2-4d78-893a-fa0995817fb9\") " Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.379869 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4892cf49-6ef2-4d78-893a-fa0995817fb9-combined-ca-bundle\") pod \"4892cf49-6ef2-4d78-893a-fa0995817fb9\" (UID: \"4892cf49-6ef2-4d78-893a-fa0995817fb9\") " Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.381506 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4892cf49-6ef2-4d78-893a-fa0995817fb9-logs" (OuterVolumeSpecName: "logs") pod "4892cf49-6ef2-4d78-893a-fa0995817fb9" (UID: "4892cf49-6ef2-4d78-893a-fa0995817fb9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.406776 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4892cf49-6ef2-4d78-893a-fa0995817fb9-kube-api-access-2vzcz" (OuterVolumeSpecName: "kube-api-access-2vzcz") pod "4892cf49-6ef2-4d78-893a-fa0995817fb9" (UID: "4892cf49-6ef2-4d78-893a-fa0995817fb9"). InnerVolumeSpecName "kube-api-access-2vzcz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.475631 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4892cf49-6ef2-4d78-893a-fa0995817fb9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4892cf49-6ef2-4d78-893a-fa0995817fb9" (UID: "4892cf49-6ef2-4d78-893a-fa0995817fb9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.487101 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4892cf49-6ef2-4d78-893a-fa0995817fb9-nova-metadata-tls-certs\") pod \"4892cf49-6ef2-4d78-893a-fa0995817fb9\" (UID: \"4892cf49-6ef2-4d78-893a-fa0995817fb9\") " Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.487784 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4892cf49-6ef2-4d78-893a-fa0995817fb9-config-data\") pod \"4892cf49-6ef2-4d78-893a-fa0995817fb9\" (UID: \"4892cf49-6ef2-4d78-893a-fa0995817fb9\") " Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.488380 4681 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4892cf49-6ef2-4d78-893a-fa0995817fb9-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.488409 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vzcz\" (UniqueName: \"kubernetes.io/projected/4892cf49-6ef2-4d78-893a-fa0995817fb9-kube-api-access-2vzcz\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.488424 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4892cf49-6ef2-4d78-893a-fa0995817fb9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.532591 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4892cf49-6ef2-4d78-893a-fa0995817fb9-config-data" (OuterVolumeSpecName: "config-data") pod "4892cf49-6ef2-4d78-893a-fa0995817fb9" (UID: "4892cf49-6ef2-4d78-893a-fa0995817fb9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.589344 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4892cf49-6ef2-4d78-893a-fa0995817fb9-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.607627 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4892cf49-6ef2-4d78-893a-fa0995817fb9-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "4892cf49-6ef2-4d78-893a-fa0995817fb9" (UID: "4892cf49-6ef2-4d78-893a-fa0995817fb9"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.608062 4681 generic.go:334] "Generic (PLEG): container finished" podID="4892cf49-6ef2-4d78-893a-fa0995817fb9" containerID="97f8dd3b8af4925d7749059d643561b37bda8710fc203183ebfddd7aeb2ea887" exitCode=0 Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.608121 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4892cf49-6ef2-4d78-893a-fa0995817fb9","Type":"ContainerDied","Data":"97f8dd3b8af4925d7749059d643561b37bda8710fc203183ebfddd7aeb2ea887"} Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.608161 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4892cf49-6ef2-4d78-893a-fa0995817fb9","Type":"ContainerDied","Data":"fbca7a8a18e12d6eb72f7e00240b8c55371f540d5ae842c419201fbb0790293d"} Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.608184 4681 scope.go:117] "RemoveContainer" containerID="97f8dd3b8af4925d7749059d643561b37bda8710fc203183ebfddd7aeb2ea887" Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.608362 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.655528 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.669861 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.674224 4681 scope.go:117] "RemoveContainer" containerID="786fac5f4a787db447c7f95567133c2d6dfaf25b3b9c4541890385554a0c1d80" Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.692122 4681 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4892cf49-6ef2-4d78-893a-fa0995817fb9-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.710143 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:02:37 crc kubenswrapper[4681]: E1123 07:02:37.710646 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4892cf49-6ef2-4d78-893a-fa0995817fb9" containerName="nova-metadata-metadata" Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.710662 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4892cf49-6ef2-4d78-893a-fa0995817fb9" containerName="nova-metadata-metadata" Nov 23 07:02:37 crc kubenswrapper[4681]: E1123 07:02:37.710680 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="759adc42-6c4c-4c47-b7d7-ec5eef16623a" containerName="init" Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.710686 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="759adc42-6c4c-4c47-b7d7-ec5eef16623a" containerName="init" Nov 23 07:02:37 crc kubenswrapper[4681]: E1123 07:02:37.710700 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="759adc42-6c4c-4c47-b7d7-ec5eef16623a" containerName="dnsmasq-dns" Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.710706 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="759adc42-6c4c-4c47-b7d7-ec5eef16623a" containerName="dnsmasq-dns" Nov 23 07:02:37 crc kubenswrapper[4681]: E1123 07:02:37.710723 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5461efc5-e9c2-4a64-a74d-8db6df47c452" containerName="nova-manage" Nov 
23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.710730 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="5461efc5-e9c2-4a64-a74d-8db6df47c452" containerName="nova-manage" Nov 23 07:02:37 crc kubenswrapper[4681]: E1123 07:02:37.710761 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4892cf49-6ef2-4d78-893a-fa0995817fb9" containerName="nova-metadata-log" Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.710767 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4892cf49-6ef2-4d78-893a-fa0995817fb9" containerName="nova-metadata-log" Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.711014 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="4892cf49-6ef2-4d78-893a-fa0995817fb9" containerName="nova-metadata-log" Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.711028 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="5461efc5-e9c2-4a64-a74d-8db6df47c452" containerName="nova-manage" Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.711043 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="759adc42-6c4c-4c47-b7d7-ec5eef16623a" containerName="dnsmasq-dns" Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.711051 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="4892cf49-6ef2-4d78-893a-fa0995817fb9" containerName="nova-metadata-metadata" Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.712922 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.718081 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.718247 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.737012 4681 scope.go:117] "RemoveContainer" containerID="97f8dd3b8af4925d7749059d643561b37bda8710fc203183ebfddd7aeb2ea887" Nov 23 07:02:37 crc kubenswrapper[4681]: E1123 07:02:37.737869 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97f8dd3b8af4925d7749059d643561b37bda8710fc203183ebfddd7aeb2ea887\": container with ID starting with 97f8dd3b8af4925d7749059d643561b37bda8710fc203183ebfddd7aeb2ea887 not found: ID does not exist" containerID="97f8dd3b8af4925d7749059d643561b37bda8710fc203183ebfddd7aeb2ea887" Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.737910 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97f8dd3b8af4925d7749059d643561b37bda8710fc203183ebfddd7aeb2ea887"} err="failed to get container status \"97f8dd3b8af4925d7749059d643561b37bda8710fc203183ebfddd7aeb2ea887\": rpc error: code = NotFound desc = could not find container \"97f8dd3b8af4925d7749059d643561b37bda8710fc203183ebfddd7aeb2ea887\": container with ID starting with 97f8dd3b8af4925d7749059d643561b37bda8710fc203183ebfddd7aeb2ea887 not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.737934 4681 scope.go:117] "RemoveContainer" containerID="786fac5f4a787db447c7f95567133c2d6dfaf25b3b9c4541890385554a0c1d80" Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.738671 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:02:37 crc kubenswrapper[4681]: E1123 07:02:37.742719 4681 
Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.742756 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"786fac5f4a787db447c7f95567133c2d6dfaf25b3b9c4541890385554a0c1d80"} err="failed to get container status \"786fac5f4a787db447c7f95567133c2d6dfaf25b3b9c4541890385554a0c1d80\": rpc error: code = NotFound desc = could not find container \"786fac5f4a787db447c7f95567133c2d6dfaf25b3b9c4541890385554a0c1d80\": container with ID starting with 786fac5f4a787db447c7f95567133c2d6dfaf25b3b9c4541890385554a0c1d80 not found: ID does not exist"
Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.793447 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a643428-58d9-4480-97ae-945959d1be83-logs\") pod \"nova-metadata-0\" (UID: \"6a643428-58d9-4480-97ae-945959d1be83\") " pod="openstack/nova-metadata-0"
Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.793533 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a643428-58d9-4480-97ae-945959d1be83-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6a643428-58d9-4480-97ae-945959d1be83\") " pod="openstack/nova-metadata-0"
Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.793628 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22nmg\" (UniqueName: \"kubernetes.io/projected/6a643428-58d9-4480-97ae-945959d1be83-kube-api-access-22nmg\") pod \"nova-metadata-0\" (UID: \"6a643428-58d9-4480-97ae-945959d1be83\") " pod="openstack/nova-metadata-0"
Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.793706 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a643428-58d9-4480-97ae-945959d1be83-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6a643428-58d9-4480-97ae-945959d1be83\") " pod="openstack/nova-metadata-0"
Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.793743 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a643428-58d9-4480-97ae-945959d1be83-config-data\") pod \"nova-metadata-0\" (UID: \"6a643428-58d9-4480-97ae-945959d1be83\") " pod="openstack/nova-metadata-0"
Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.894701 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a643428-58d9-4480-97ae-945959d1be83-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6a643428-58d9-4480-97ae-945959d1be83\") " pod="openstack/nova-metadata-0"
Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.894757 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a643428-58d9-4480-97ae-945959d1be83-config-data\") pod \"nova-metadata-0\" (UID: \"6a643428-58d9-4480-97ae-945959d1be83\") " pod="openstack/nova-metadata-0"
Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.894842 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a643428-58d9-4480-97ae-945959d1be83-logs\") pod \"nova-metadata-0\" (UID: \"6a643428-58d9-4480-97ae-945959d1be83\") " pod="openstack/nova-metadata-0"
Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.894894 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a643428-58d9-4480-97ae-945959d1be83-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6a643428-58d9-4480-97ae-945959d1be83\") " pod="openstack/nova-metadata-0"
Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.894988 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22nmg\" (UniqueName: \"kubernetes.io/projected/6a643428-58d9-4480-97ae-945959d1be83-kube-api-access-22nmg\") pod \"nova-metadata-0\" (UID: \"6a643428-58d9-4480-97ae-945959d1be83\") " pod="openstack/nova-metadata-0"
Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.895428 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a643428-58d9-4480-97ae-945959d1be83-logs\") pod \"nova-metadata-0\" (UID: \"6a643428-58d9-4480-97ae-945959d1be83\") " pod="openstack/nova-metadata-0"
Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.901148 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a643428-58d9-4480-97ae-945959d1be83-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6a643428-58d9-4480-97ae-945959d1be83\") " pod="openstack/nova-metadata-0"
Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.901188 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a643428-58d9-4480-97ae-945959d1be83-config-data\") pod \"nova-metadata-0\" (UID: \"6a643428-58d9-4480-97ae-945959d1be83\") " pod="openstack/nova-metadata-0"
Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.901908 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a643428-58d9-4480-97ae-945959d1be83-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6a643428-58d9-4480-97ae-945959d1be83\") " pod="openstack/nova-metadata-0"
Nov 23 07:02:37 crc kubenswrapper[4681]: I1123 07:02:37.910104 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22nmg\" (UniqueName: \"kubernetes.io/projected/6a643428-58d9-4480-97ae-945959d1be83-kube-api-access-22nmg\") pod \"nova-metadata-0\" (UID: \"6a643428-58d9-4480-97ae-945959d1be83\") " pod="openstack/nova-metadata-0"
Nov 23 07:02:38 crc kubenswrapper[4681]: I1123 07:02:38.038002 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 23 07:02:38 crc kubenswrapper[4681]: I1123 07:02:38.481137 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 23 07:02:38 crc kubenswrapper[4681]: I1123 07:02:38.631624 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a643428-58d9-4480-97ae-945959d1be83","Type":"ContainerStarted","Data":"f4ef61e55c00134d8614d8d37dcecf340a953ea5451411b8dabec91db06e2e87"}
Nov 23 07:02:38 crc kubenswrapper[4681]: I1123 07:02:38.936989 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.120709 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baf0217f-0783-4d59-81bf-a745d255e69b-config-data\") pod \"baf0217f-0783-4d59-81bf-a745d255e69b\" (UID: \"baf0217f-0783-4d59-81bf-a745d255e69b\") "
Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.121099 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2tgv\" (UniqueName: \"kubernetes.io/projected/baf0217f-0783-4d59-81bf-a745d255e69b-kube-api-access-g2tgv\") pod \"baf0217f-0783-4d59-81bf-a745d255e69b\" (UID: \"baf0217f-0783-4d59-81bf-a745d255e69b\") "
Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.121372 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baf0217f-0783-4d59-81bf-a745d255e69b-combined-ca-bundle\") pod \"baf0217f-0783-4d59-81bf-a745d255e69b\" (UID: \"baf0217f-0783-4d59-81bf-a745d255e69b\") "
Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.130554 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baf0217f-0783-4d59-81bf-a745d255e69b-kube-api-access-g2tgv" (OuterVolumeSpecName: "kube-api-access-g2tgv") pod "baf0217f-0783-4d59-81bf-a745d255e69b" (UID: "baf0217f-0783-4d59-81bf-a745d255e69b"). InnerVolumeSpecName "kube-api-access-g2tgv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.172189 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baf0217f-0783-4d59-81bf-a745d255e69b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "baf0217f-0783-4d59-81bf-a745d255e69b" (UID: "baf0217f-0783-4d59-81bf-a745d255e69b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.177127 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baf0217f-0783-4d59-81bf-a745d255e69b-config-data" (OuterVolumeSpecName: "config-data") pod "baf0217f-0783-4d59-81bf-a745d255e69b" (UID: "baf0217f-0783-4d59-81bf-a745d255e69b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.225527 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baf0217f-0783-4d59-81bf-a745d255e69b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.225570 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baf0217f-0783-4d59-81bf-a745d255e69b-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.225583 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2tgv\" (UniqueName: \"kubernetes.io/projected/baf0217f-0783-4d59-81bf-a745d255e69b-kube-api-access-g2tgv\") on node \"crc\" DevicePath \"\""
Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.264104 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4892cf49-6ef2-4d78-893a-fa0995817fb9" path="/var/lib/kubelet/pods/4892cf49-6ef2-4d78-893a-fa0995817fb9/volumes"
Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.366816 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.530871 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7dea7e9-1959-401a-8915-6863b8a3b198-public-tls-certs\") pod \"c7dea7e9-1959-401a-8915-6863b8a3b198\" (UID: \"c7dea7e9-1959-401a-8915-6863b8a3b198\") "
Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.530984 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7dea7e9-1959-401a-8915-6863b8a3b198-logs\") pod \"c7dea7e9-1959-401a-8915-6863b8a3b198\" (UID: \"c7dea7e9-1959-401a-8915-6863b8a3b198\") "
Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.531130 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7dea7e9-1959-401a-8915-6863b8a3b198-combined-ca-bundle\") pod \"c7dea7e9-1959-401a-8915-6863b8a3b198\" (UID: \"c7dea7e9-1959-401a-8915-6863b8a3b198\") "
Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.531313 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxs87\" (UniqueName: \"kubernetes.io/projected/c7dea7e9-1959-401a-8915-6863b8a3b198-kube-api-access-rxs87\") pod \"c7dea7e9-1959-401a-8915-6863b8a3b198\" (UID: \"c7dea7e9-1959-401a-8915-6863b8a3b198\") "
Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.531432 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7dea7e9-1959-401a-8915-6863b8a3b198-config-data\") pod \"c7dea7e9-1959-401a-8915-6863b8a3b198\" (UID: \"c7dea7e9-1959-401a-8915-6863b8a3b198\") "
Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.531509 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7dea7e9-1959-401a-8915-6863b8a3b198-internal-tls-certs\") pod \"c7dea7e9-1959-401a-8915-6863b8a3b198\" (UID: \"c7dea7e9-1959-401a-8915-6863b8a3b198\") "
Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.531620 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7dea7e9-1959-401a-8915-6863b8a3b198-logs" (OuterVolumeSpecName: "logs") pod "c7dea7e9-1959-401a-8915-6863b8a3b198" (UID: "c7dea7e9-1959-401a-8915-6863b8a3b198"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
"kubernetes.io/empty-dir/c7dea7e9-1959-401a-8915-6863b8a3b198-logs" (OuterVolumeSpecName: "logs") pod "c7dea7e9-1959-401a-8915-6863b8a3b198" (UID: "c7dea7e9-1959-401a-8915-6863b8a3b198"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.531940 4681 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7dea7e9-1959-401a-8915-6863b8a3b198-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.535688 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7dea7e9-1959-401a-8915-6863b8a3b198-kube-api-access-rxs87" (OuterVolumeSpecName: "kube-api-access-rxs87") pod "c7dea7e9-1959-401a-8915-6863b8a3b198" (UID: "c7dea7e9-1959-401a-8915-6863b8a3b198"). InnerVolumeSpecName "kube-api-access-rxs87". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.557324 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7dea7e9-1959-401a-8915-6863b8a3b198-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7dea7e9-1959-401a-8915-6863b8a3b198" (UID: "c7dea7e9-1959-401a-8915-6863b8a3b198"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.560014 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7dea7e9-1959-401a-8915-6863b8a3b198-config-data" (OuterVolumeSpecName: "config-data") pod "c7dea7e9-1959-401a-8915-6863b8a3b198" (UID: "c7dea7e9-1959-401a-8915-6863b8a3b198"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.582233 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7dea7e9-1959-401a-8915-6863b8a3b198-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c7dea7e9-1959-401a-8915-6863b8a3b198" (UID: "c7dea7e9-1959-401a-8915-6863b8a3b198"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.591661 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7dea7e9-1959-401a-8915-6863b8a3b198-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c7dea7e9-1959-401a-8915-6863b8a3b198" (UID: "c7dea7e9-1959-401a-8915-6863b8a3b198"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.635357 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7dea7e9-1959-401a-8915-6863b8a3b198-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.635397 4681 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7dea7e9-1959-401a-8915-6863b8a3b198-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.635412 4681 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7dea7e9-1959-401a-8915-6863b8a3b198-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.635422 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7dea7e9-1959-401a-8915-6863b8a3b198-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.635432 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxs87\" (UniqueName: \"kubernetes.io/projected/c7dea7e9-1959-401a-8915-6863b8a3b198-kube-api-access-rxs87\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.647824 4681 generic.go:334] "Generic (PLEG): container finished" podID="baf0217f-0783-4d59-81bf-a745d255e69b" containerID="d2e64aacc312c9ee77a9a4736a8b06c74b4e07be2ffa0b03067a6ef1c99485ec" exitCode=0 Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.647927 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"baf0217f-0783-4d59-81bf-a745d255e69b","Type":"ContainerDied","Data":"d2e64aacc312c9ee77a9a4736a8b06c74b4e07be2ffa0b03067a6ef1c99485ec"} Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.647972 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"baf0217f-0783-4d59-81bf-a745d255e69b","Type":"ContainerDied","Data":"c11ce482233dc601c122e48b395bcebf9a943de403db72a8e2e55199f920416e"} Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.647967 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.647996 4681 scope.go:117] "RemoveContainer" containerID="d2e64aacc312c9ee77a9a4736a8b06c74b4e07be2ffa0b03067a6ef1c99485ec" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.662446 4681 generic.go:334] "Generic (PLEG): container finished" podID="c7dea7e9-1959-401a-8915-6863b8a3b198" containerID="02713589a30cad7b7c87ab2325de600de1421ff9efeb6ba8f0aeae42e8440abb" exitCode=0 Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.662632 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7dea7e9-1959-401a-8915-6863b8a3b198","Type":"ContainerDied","Data":"02713589a30cad7b7c87ab2325de600de1421ff9efeb6ba8f0aeae42e8440abb"} Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.662745 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7dea7e9-1959-401a-8915-6863b8a3b198","Type":"ContainerDied","Data":"eeba0c0008eb61ce0ba6fbd3150c565f03617af912f594164e49492b81b1991d"} Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.663759 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.683279 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a643428-58d9-4480-97ae-945959d1be83","Type":"ContainerStarted","Data":"3fe273de6ef3d0722c31af83fb11eb7c2f52e53dbcf6d9d7e27b461430574e3c"} Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.683324 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a643428-58d9-4480-97ae-945959d1be83","Type":"ContainerStarted","Data":"6b6b9301f9fc08cc7ca10fe0a433257daebb1f7004b886a64660473069843aac"} Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.686354 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.686475 4681 scope.go:117] "RemoveContainer" containerID="d2e64aacc312c9ee77a9a4736a8b06c74b4e07be2ffa0b03067a6ef1c99485ec" Nov 23 07:02:39 crc kubenswrapper[4681]: E1123 07:02:39.686794 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2e64aacc312c9ee77a9a4736a8b06c74b4e07be2ffa0b03067a6ef1c99485ec\": container with ID starting with d2e64aacc312c9ee77a9a4736a8b06c74b4e07be2ffa0b03067a6ef1c99485ec not found: ID does not exist" containerID="d2e64aacc312c9ee77a9a4736a8b06c74b4e07be2ffa0b03067a6ef1c99485ec" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.686818 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2e64aacc312c9ee77a9a4736a8b06c74b4e07be2ffa0b03067a6ef1c99485ec"} err="failed to get container status \"d2e64aacc312c9ee77a9a4736a8b06c74b4e07be2ffa0b03067a6ef1c99485ec\": rpc error: code = NotFound desc = could not find container \"d2e64aacc312c9ee77a9a4736a8b06c74b4e07be2ffa0b03067a6ef1c99485ec\": container with ID starting with d2e64aacc312c9ee77a9a4736a8b06c74b4e07be2ffa0b03067a6ef1c99485ec not found: ID does not exist" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.686852 4681 scope.go:117] "RemoveContainer" containerID="02713589a30cad7b7c87ab2325de600de1421ff9efeb6ba8f0aeae42e8440abb" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.693055 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-scheduler-0"] Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.719277 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.719256826 podStartE2EDuration="2.719256826s" podCreationTimestamp="2025-11-23 07:02:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:02:39.708785792 +0000 UTC m=+1096.778295029" watchObservedRunningTime="2025-11-23 07:02:39.719256826 +0000 UTC m=+1096.788766052" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.719637 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:02:39 crc kubenswrapper[4681]: E1123 07:02:39.720154 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7dea7e9-1959-401a-8915-6863b8a3b198" containerName="nova-api-api" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.720166 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7dea7e9-1959-401a-8915-6863b8a3b198" containerName="nova-api-api" Nov 23 07:02:39 crc kubenswrapper[4681]: E1123 07:02:39.720174 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baf0217f-0783-4d59-81bf-a745d255e69b" containerName="nova-scheduler-scheduler" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.720180 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="baf0217f-0783-4d59-81bf-a745d255e69b" containerName="nova-scheduler-scheduler" Nov 23 07:02:39 crc kubenswrapper[4681]: E1123 07:02:39.720212 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7dea7e9-1959-401a-8915-6863b8a3b198" containerName="nova-api-log" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.720217 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7dea7e9-1959-401a-8915-6863b8a3b198" containerName="nova-api-log" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.720535 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7dea7e9-1959-401a-8915-6863b8a3b198" containerName="nova-api-api" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.720591 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="baf0217f-0783-4d59-81bf-a745d255e69b" containerName="nova-scheduler-scheduler" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.720602 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7dea7e9-1959-401a-8915-6863b8a3b198" containerName="nova-api-log" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.722278 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.729297 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.737629 4681 scope.go:117] "RemoveContainer" containerID="b0bfbef9752465d84138519f27a21cb7e80c6c47d1eb44c5f8a4969735691036" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.739174 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qccp\" (UniqueName: \"kubernetes.io/projected/be49a488-083e-49fb-9e4a-551e1973ca53-kube-api-access-5qccp\") pod \"nova-scheduler-0\" (UID: \"be49a488-083e-49fb-9e4a-551e1973ca53\") " pod="openstack/nova-scheduler-0" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.739322 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be49a488-083e-49fb-9e4a-551e1973ca53-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"be49a488-083e-49fb-9e4a-551e1973ca53\") " pod="openstack/nova-scheduler-0" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.739377 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be49a488-083e-49fb-9e4a-551e1973ca53-config-data\") pod \"nova-scheduler-0\" (UID: \"be49a488-083e-49fb-9e4a-551e1973ca53\") " pod="openstack/nova-scheduler-0" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.762183 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.778940 4681 scope.go:117] "RemoveContainer" containerID="02713589a30cad7b7c87ab2325de600de1421ff9efeb6ba8f0aeae42e8440abb" Nov 23 07:02:39 crc kubenswrapper[4681]: E1123 07:02:39.780002 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02713589a30cad7b7c87ab2325de600de1421ff9efeb6ba8f0aeae42e8440abb\": container with ID starting with 02713589a30cad7b7c87ab2325de600de1421ff9efeb6ba8f0aeae42e8440abb not found: ID does not exist" containerID="02713589a30cad7b7c87ab2325de600de1421ff9efeb6ba8f0aeae42e8440abb" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.780035 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02713589a30cad7b7c87ab2325de600de1421ff9efeb6ba8f0aeae42e8440abb"} err="failed to get container status \"02713589a30cad7b7c87ab2325de600de1421ff9efeb6ba8f0aeae42e8440abb\": rpc error: code = NotFound desc = could not find container \"02713589a30cad7b7c87ab2325de600de1421ff9efeb6ba8f0aeae42e8440abb\": container with ID starting with 02713589a30cad7b7c87ab2325de600de1421ff9efeb6ba8f0aeae42e8440abb not found: ID does not exist" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.780064 4681 scope.go:117] "RemoveContainer" containerID="b0bfbef9752465d84138519f27a21cb7e80c6c47d1eb44c5f8a4969735691036" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.780143 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:02:39 crc kubenswrapper[4681]: E1123 07:02:39.780794 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0bfbef9752465d84138519f27a21cb7e80c6c47d1eb44c5f8a4969735691036\": container 
with ID starting with b0bfbef9752465d84138519f27a21cb7e80c6c47d1eb44c5f8a4969735691036 not found: ID does not exist" containerID="b0bfbef9752465d84138519f27a21cb7e80c6c47d1eb44c5f8a4969735691036" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.780858 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0bfbef9752465d84138519f27a21cb7e80c6c47d1eb44c5f8a4969735691036"} err="failed to get container status \"b0bfbef9752465d84138519f27a21cb7e80c6c47d1eb44c5f8a4969735691036\": rpc error: code = NotFound desc = could not find container \"b0bfbef9752465d84138519f27a21cb7e80c6c47d1eb44c5f8a4969735691036\": container with ID starting with b0bfbef9752465d84138519f27a21cb7e80c6c47d1eb44c5f8a4969735691036 not found: ID does not exist" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.790419 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.808392 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.810272 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.813788 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.814000 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.814135 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.815234 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.841453 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be49a488-083e-49fb-9e4a-551e1973ca53-config-data\") pod \"nova-scheduler-0\" (UID: \"be49a488-083e-49fb-9e4a-551e1973ca53\") " pod="openstack/nova-scheduler-0" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.841589 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92-logs\") pod \"nova-api-0\" (UID: \"f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92\") " pod="openstack/nova-api-0" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.841697 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92-public-tls-certs\") pod \"nova-api-0\" (UID: \"f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92\") " pod="openstack/nova-api-0" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.841757 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qccp\" (UniqueName: \"kubernetes.io/projected/be49a488-083e-49fb-9e4a-551e1973ca53-kube-api-access-5qccp\") pod \"nova-scheduler-0\" (UID: \"be49a488-083e-49fb-9e4a-551e1973ca53\") " pod="openstack/nova-scheduler-0" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.841868 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck494\" (UniqueName: 
\"kubernetes.io/projected/f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92-kube-api-access-ck494\") pod \"nova-api-0\" (UID: \"f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92\") " pod="openstack/nova-api-0" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.842007 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92\") " pod="openstack/nova-api-0" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.842082 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92\") " pod="openstack/nova-api-0" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.842284 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92-config-data\") pod \"nova-api-0\" (UID: \"f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92\") " pod="openstack/nova-api-0" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.842351 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be49a488-083e-49fb-9e4a-551e1973ca53-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"be49a488-083e-49fb-9e4a-551e1973ca53\") " pod="openstack/nova-scheduler-0" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.850144 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be49a488-083e-49fb-9e4a-551e1973ca53-config-data\") pod \"nova-scheduler-0\" (UID: \"be49a488-083e-49fb-9e4a-551e1973ca53\") " pod="openstack/nova-scheduler-0" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.850154 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be49a488-083e-49fb-9e4a-551e1973ca53-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"be49a488-083e-49fb-9e4a-551e1973ca53\") " pod="openstack/nova-scheduler-0" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.855024 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qccp\" (UniqueName: \"kubernetes.io/projected/be49a488-083e-49fb-9e4a-551e1973ca53-kube-api-access-5qccp\") pod \"nova-scheduler-0\" (UID: \"be49a488-083e-49fb-9e4a-551e1973ca53\") " pod="openstack/nova-scheduler-0" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.945051 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck494\" (UniqueName: \"kubernetes.io/projected/f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92-kube-api-access-ck494\") pod \"nova-api-0\" (UID: \"f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92\") " pod="openstack/nova-api-0" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.945119 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92\") " pod="openstack/nova-api-0" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.945150 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92\") " pod="openstack/nova-api-0" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.945201 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92-config-data\") pod \"nova-api-0\" (UID: \"f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92\") " pod="openstack/nova-api-0" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.945285 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92-logs\") pod \"nova-api-0\" (UID: \"f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92\") " pod="openstack/nova-api-0" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.945332 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92-public-tls-certs\") pod \"nova-api-0\" (UID: \"f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92\") " pod="openstack/nova-api-0" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.946352 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92-logs\") pod \"nova-api-0\" (UID: \"f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92\") " pod="openstack/nova-api-0" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.950168 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92\") " pod="openstack/nova-api-0" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.950409 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92\") " pod="openstack/nova-api-0" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.951323 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92-public-tls-certs\") pod \"nova-api-0\" (UID: \"f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92\") " pod="openstack/nova-api-0" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.951436 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92-config-data\") pod \"nova-api-0\" (UID: \"f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92\") " pod="openstack/nova-api-0" Nov 23 07:02:39 crc kubenswrapper[4681]: I1123 07:02:39.964237 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck494\" (UniqueName: \"kubernetes.io/projected/f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92-kube-api-access-ck494\") pod \"nova-api-0\" (UID: \"f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92\") " pod="openstack/nova-api-0" Nov 23 07:02:40 crc kubenswrapper[4681]: I1123 07:02:40.046453 4681 util.go:30] "No sandbox for pod can be found. 
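
The VerifyControllerAttachedVolume, "MountVolume started", and "MountVolume.SetUp succeeded" progression above is the kubelet volume reconciler working through the new nova-api-0 pod. Reconstructed as a Go sketch using the k8s.io/api types; the volume names are from the log, the secret names for config-data and the TLS certs come from the *v1.Secret cache lines a little earlier, and the combined-ca-bundle secret name is an assumption:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // novaAPIVolumes approximates nova-api-0's volume list as implied by
    // the MountVolume entries above.
    func novaAPIVolumes() []corev1.Volume {
        secret := func(name string) corev1.VolumeSource {
            return corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{SecretName: name}}
        }
        return []corev1.Volume{
            {Name: "logs", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
            {Name: "config-data", VolumeSource: secret("nova-api-config-data")},
            {Name: "internal-tls-certs", VolumeSource: secret("cert-nova-internal-svc")},
            {Name: "public-tls-certs", VolumeSource: secret("cert-nova-public-svc")},
            {Name: "combined-ca-bundle", VolumeSource: secret("combined-ca-bundle")}, // assumed name
            // kube-api-access-ck494 is the projected service-account token
            // volume the kubelet injects itself; it is not declared in the spec.
        }
    }
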
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 07:02:40 crc kubenswrapper[4681]: I1123 07:02:40.132682 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:02:40 crc kubenswrapper[4681]: I1123 07:02:40.480804 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:02:40 crc kubenswrapper[4681]: W1123 07:02:40.481845 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe49a488_083e_49fb_9e4a_551e1973ca53.slice/crio-75690a3aa415e9c68dd9f0d1d8e185b6712ba1d06c272ece8d15d1306c902b08 WatchSource:0}: Error finding container 75690a3aa415e9c68dd9f0d1d8e185b6712ba1d06c272ece8d15d1306c902b08: Status 404 returned error can't find the container with id 75690a3aa415e9c68dd9f0d1d8e185b6712ba1d06c272ece8d15d1306c902b08 Nov 23 07:02:40 crc kubenswrapper[4681]: I1123 07:02:40.639511 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:02:40 crc kubenswrapper[4681]: I1123 07:02:40.705644 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"be49a488-083e-49fb-9e4a-551e1973ca53","Type":"ContainerStarted","Data":"fab832d9b9e4f9169755be1702fec170f1f30b286727dc7ad6687ff855a25cea"} Nov 23 07:02:40 crc kubenswrapper[4681]: I1123 07:02:40.705705 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"be49a488-083e-49fb-9e4a-551e1973ca53","Type":"ContainerStarted","Data":"75690a3aa415e9c68dd9f0d1d8e185b6712ba1d06c272ece8d15d1306c902b08"} Nov 23 07:02:40 crc kubenswrapper[4681]: I1123 07:02:40.708967 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92","Type":"ContainerStarted","Data":"228aa301991a19ce9b0656f3ac17bb8d0f5244572cfff90f1114cbfbef7a6559"} Nov 23 07:02:40 crc kubenswrapper[4681]: I1123 07:02:40.730727 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.730686776 podStartE2EDuration="1.730686776s" podCreationTimestamp="2025-11-23 07:02:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:02:40.725247015 +0000 UTC m=+1097.794756252" watchObservedRunningTime="2025-11-23 07:02:40.730686776 +0000 UTC m=+1097.800196003" Nov 23 07:02:41 crc kubenswrapper[4681]: I1123 07:02:41.262262 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baf0217f-0783-4d59-81bf-a745d255e69b" path="/var/lib/kubelet/pods/baf0217f-0783-4d59-81bf-a745d255e69b/volumes" Nov 23 07:02:41 crc kubenswrapper[4681]: I1123 07:02:41.263215 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7dea7e9-1959-401a-8915-6863b8a3b198" path="/var/lib/kubelet/pods/c7dea7e9-1959-401a-8915-6863b8a3b198/volumes" Nov 23 07:02:41 crc kubenswrapper[4681]: I1123 07:02:41.719841 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92","Type":"ContainerStarted","Data":"31128537abf13b5451850139f7fbc794ac5412c26c397dcd9b45dcab848a0592"} Nov 23 07:02:41 crc kubenswrapper[4681]: I1123 07:02:41.719898 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92","Type":"ContainerStarted","Data":"df53dae5498eacb837335bfd942481f56aaef4234136b7d1899da18e76ea5e14"} Nov 23 07:02:41 crc kubenswrapper[4681]: I1123 07:02:41.741212 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.7411966530000003 podStartE2EDuration="2.741196653s" podCreationTimestamp="2025-11-23 07:02:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:02:41.734508319 +0000 UTC m=+1098.804017556" watchObservedRunningTime="2025-11-23 07:02:41.741196653 +0000 UTC m=+1098.810705890" Nov 23 07:02:42 crc kubenswrapper[4681]: I1123 07:02:42.296203 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:02:42 crc kubenswrapper[4681]: I1123 07:02:42.296278 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:02:43 crc kubenswrapper[4681]: I1123 07:02:43.038596 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 23 07:02:43 crc kubenswrapper[4681]: I1123 07:02:43.038997 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 23 07:02:45 crc kubenswrapper[4681]: I1123 07:02:45.046945 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 23 07:02:48 crc kubenswrapper[4681]: I1123 07:02:48.038966 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 23 07:02:48 crc kubenswrapper[4681]: I1123 07:02:48.039593 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 23 07:02:49 crc kubenswrapper[4681]: I1123 07:02:49.057603 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6a643428-58d9-4480-97ae-945959d1be83" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.221:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 23 07:02:49 crc kubenswrapper[4681]: I1123 07:02:49.057696 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6a643428-58d9-4480-97ae-945959d1be83" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.221:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 23 07:02:50 crc kubenswrapper[4681]: I1123 07:02:50.046770 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 23 07:02:50 crc kubenswrapper[4681]: I1123 07:02:50.075425 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 23 07:02:50 crc kubenswrapper[4681]: I1123 07:02:50.133508 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" 
Nov 23 07:02:50 crc kubenswrapper[4681]: I1123 07:02:50.133630 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 07:02:50 crc kubenswrapper[4681]: I1123 07:02:50.845763 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 23 07:02:51 crc kubenswrapper[4681]: I1123 07:02:51.147586 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.223:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 23 07:02:51 crc kubenswrapper[4681]: I1123 07:02:51.147586 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f2e5ac0f-ae8d-4814-b3b9-2dc5b2614f92" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.223:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 23 07:02:54 crc kubenswrapper[4681]: I1123 07:02:54.752346 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 23 07:02:58 crc kubenswrapper[4681]: I1123 07:02:58.045378 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 23 07:02:58 crc kubenswrapper[4681]: I1123 07:02:58.046948 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 23 07:02:58 crc kubenswrapper[4681]: I1123 07:02:58.051111 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 23 07:02:58 crc kubenswrapper[4681]: I1123 07:02:58.053025 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 23 07:03:00 crc kubenswrapper[4681]: I1123 07:03:00.140933 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 23 07:03:00 crc kubenswrapper[4681]: I1123 07:03:00.141241 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 23 07:03:00 crc kubenswrapper[4681]: I1123 07:03:00.141617 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 23 07:03:00 crc kubenswrapper[4681]: I1123 07:03:00.145790 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 23 07:03:00 crc kubenswrapper[4681]: I1123 07:03:00.923182 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 23 07:03:00 crc kubenswrapper[4681]: I1123 07:03:00.929559 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 23 07:03:07 crc kubenswrapper[4681]: I1123 07:03:07.447534 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 07:03:08 crc kubenswrapper[4681]: I1123 07:03:08.369918 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 07:03:12 crc kubenswrapper[4681]: I1123 07:03:12.296100 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 
07:03:12 crc kubenswrapper[4681]: I1123 07:03:12.296887 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 07:03:12 crc kubenswrapper[4681]: I1123 07:03:12.450552 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="7e93be3c-dcb6-4105-868c-645d5c8c7bd0" containerName="rabbitmq" containerID="cri-o://81de7e7395ab8b3c753cb319772266a2f7aa9cd6d297a5e0aecfe387311d1ce2" gracePeriod=604795
Nov 23 07:03:12 crc kubenswrapper[4681]: I1123 07:03:12.869961 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="6e2ff794-284c-406f-a815-9efec112c044" containerName="rabbitmq" containerID="cri-o://f971ee3492a2f5eef0fd4413c18e2866d5d8f50d0960e4292e7667e8ea5ec95e" gracePeriod=604796
Nov 23 07:03:14 crc kubenswrapper[4681]: I1123 07:03:14.013014 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="7e93be3c-dcb6-4105-868c-645d5c8c7bd0" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.96:5671: connect: connection refused"
Nov 23 07:03:14 crc kubenswrapper[4681]: I1123 07:03:14.088799 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="6e2ff794-284c-406f-a815-9efec112c044" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.97:5671: connect: connection refused"
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.119947 4681 generic.go:334] "Generic (PLEG): container finished" podID="6e2ff794-284c-406f-a815-9efec112c044" containerID="f971ee3492a2f5eef0fd4413c18e2866d5d8f50d0960e4292e7667e8ea5ec95e" exitCode=0
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.120049 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6e2ff794-284c-406f-a815-9efec112c044","Type":"ContainerDied","Data":"f971ee3492a2f5eef0fd4413c18e2866d5d8f50d0960e4292e7667e8ea5ec95e"}
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.139781 4681 generic.go:334] "Generic (PLEG): container finished" podID="7e93be3c-dcb6-4105-868c-645d5c8c7bd0" containerID="81de7e7395ab8b3c753cb319772266a2f7aa9cd6d297a5e0aecfe387311d1ce2" exitCode=0
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.139827 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7e93be3c-dcb6-4105-868c-645d5c8c7bd0","Type":"ContainerDied","Data":"81de7e7395ab8b3c753cb319772266a2f7aa9cd6d297a5e0aecfe387311d1ce2"}
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.139865 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7e93be3c-dcb6-4105-868c-645d5c8c7bd0","Type":"ContainerDied","Data":"0c309c3a2fb20d9e6cd6dfa22c3e9f499bf0c80807912b6c62c79c16fb7e09b6"}
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.139878 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c309c3a2fb20d9e6cd6dfa22c3e9f499bf0c80807912b6c62c79c16fb7e09b6"
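
gracePeriod=604795 and gracePeriod=604796 above are remainders of a 604800-second (7-day) termination grace period; the figure is inferred from the two remainders logged a few seconds after the DELETEs at 07:03:07 and 07:03:08. Long windows like this are typical for RabbitMQ so queues can drain before the broker is killed. In pod-spec terms (Go sketch, not taken from the actual manifest):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // rabbitmqShutdownSpec shows the field behind "Killing container with
    // a grace period ... gracePeriod=604795": a 7-day grace period, of
    // which ~5s had already elapsed when the CRI stop request was issued.
    func rabbitmqShutdownSpec() corev1.PodSpec {
        grace := int64(604800) // 7 days; inferred from the 604795/604796 remainders
        return corev1.PodSpec{TerminationGracePeriodSeconds: &grace}
    }
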
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.185327 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-pod-info\") pod \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.185409 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.185476 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-config-data\") pod \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.185522 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dh2bt\" (UniqueName: \"kubernetes.io/projected/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-kube-api-access-dh2bt\") pod \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.185596 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-plugins-conf\") pod \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.185659 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-rabbitmq-confd\") pod \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.185736 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-rabbitmq-erlang-cookie\") pod \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.185763 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-server-conf\") pod \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.185796 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-erlang-cookie-secret\") pod \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.185817 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-rabbitmq-tls\") pod \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\" (UID: 
\"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.185840 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-rabbitmq-plugins\") pod \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\" (UID: \"7e93be3c-dcb6-4105-868c-645d5c8c7bd0\") " Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.188025 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "7e93be3c-dcb6-4105-868c-645d5c8c7bd0" (UID: "7e93be3c-dcb6-4105-868c-645d5c8c7bd0"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.188487 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "7e93be3c-dcb6-4105-868c-645d5c8c7bd0" (UID: "7e93be3c-dcb6-4105-868c-645d5c8c7bd0"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.199580 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "7e93be3c-dcb6-4105-868c-645d5c8c7bd0" (UID: "7e93be3c-dcb6-4105-868c-645d5c8c7bd0"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.237346 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "7e93be3c-dcb6-4105-868c-645d5c8c7bd0" (UID: "7e93be3c-dcb6-4105-868c-645d5c8c7bd0"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.239383 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "persistence") pod "7e93be3c-dcb6-4105-868c-645d5c8c7bd0" (UID: "7e93be3c-dcb6-4105-868c-645d5c8c7bd0"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.242207 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-kube-api-access-dh2bt" (OuterVolumeSpecName: "kube-api-access-dh2bt") pod "7e93be3c-dcb6-4105-868c-645d5c8c7bd0" (UID: "7e93be3c-dcb6-4105-868c-645d5c8c7bd0"). InnerVolumeSpecName "kube-api-access-dh2bt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.242827 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-pod-info" (OuterVolumeSpecName: "pod-info") pod "7e93be3c-dcb6-4105-868c-645d5c8c7bd0" (UID: "7e93be3c-dcb6-4105-868c-645d5c8c7bd0"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.257696 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "7e93be3c-dcb6-4105-868c-645d5c8c7bd0" (UID: "7e93be3c-dcb6-4105-868c-645d5c8c7bd0"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.295787 4681 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.295823 4681 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.295837 4681 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.295852 4681 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.295863 4681 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.295872 4681 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-pod-info\") on node \"crc\" DevicePath \"\"" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.295901 4681 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.295911 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dh2bt\" (UniqueName: \"kubernetes.io/projected/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-kube-api-access-dh2bt\") on node \"crc\" DevicePath \"\"" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.315501 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-server-conf" (OuterVolumeSpecName: "server-conf") pod "7e93be3c-dcb6-4105-868c-645d5c8c7bd0" (UID: "7e93be3c-dcb6-4105-868c-645d5c8c7bd0"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.355701 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-config-data" (OuterVolumeSpecName: "config-data") pod "7e93be3c-dcb6-4105-868c-645d5c8c7bd0" (UID: "7e93be3c-dcb6-4105-868c-645d5c8c7bd0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.355833 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f84b549df-l8cnk"] Nov 23 07:03:19 crc kubenswrapper[4681]: E1123 07:03:19.356274 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e93be3c-dcb6-4105-868c-645d5c8c7bd0" containerName="rabbitmq" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.356294 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e93be3c-dcb6-4105-868c-645d5c8c7bd0" containerName="rabbitmq" Nov 23 07:03:19 crc kubenswrapper[4681]: E1123 07:03:19.356336 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e93be3c-dcb6-4105-868c-645d5c8c7bd0" containerName="setup-container" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.356343 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e93be3c-dcb6-4105-868c-645d5c8c7bd0" containerName="setup-container" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.356584 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e93be3c-dcb6-4105-868c-645d5c8c7bd0" containerName="rabbitmq" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.357786 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f84b549df-l8cnk"] Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.357890 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f84b549df-l8cnk" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.364413 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.365730 4681 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.398154 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-openstack-edpm-ipam\") pod \"dnsmasq-dns-7f84b549df-l8cnk\" (UID: \"5dcb31c3-e43b-46a9-85c1-531c6a4866ba\") " pod="openstack/dnsmasq-dns-7f84b549df-l8cnk" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.398262 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-ovsdbserver-nb\") pod \"dnsmasq-dns-7f84b549df-l8cnk\" (UID: \"5dcb31c3-e43b-46a9-85c1-531c6a4866ba\") " pod="openstack/dnsmasq-dns-7f84b549df-l8cnk" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.398385 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-dns-swift-storage-0\") pod \"dnsmasq-dns-7f84b549df-l8cnk\" (UID: \"5dcb31c3-e43b-46a9-85c1-531c6a4866ba\") " pod="openstack/dnsmasq-dns-7f84b549df-l8cnk" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.398454 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-ovsdbserver-sb\") pod \"dnsmasq-dns-7f84b549df-l8cnk\" (UID: \"5dcb31c3-e43b-46a9-85c1-531c6a4866ba\") " 
pod="openstack/dnsmasq-dns-7f84b549df-l8cnk" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.398514 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-config\") pod \"dnsmasq-dns-7f84b549df-l8cnk\" (UID: \"5dcb31c3-e43b-46a9-85c1-531c6a4866ba\") " pod="openstack/dnsmasq-dns-7f84b549df-l8cnk" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.398555 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-dns-svc\") pod \"dnsmasq-dns-7f84b549df-l8cnk\" (UID: \"5dcb31c3-e43b-46a9-85c1-531c6a4866ba\") " pod="openstack/dnsmasq-dns-7f84b549df-l8cnk" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.398613 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lczc8\" (UniqueName: \"kubernetes.io/projected/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-kube-api-access-lczc8\") pod \"dnsmasq-dns-7f84b549df-l8cnk\" (UID: \"5dcb31c3-e43b-46a9-85c1-531c6a4866ba\") " pod="openstack/dnsmasq-dns-7f84b549df-l8cnk" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.398690 4681 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-server-conf\") on node \"crc\" DevicePath \"\"" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.398703 4681 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.398712 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.482178 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f84b549df-l8cnk"] Nov 23 07:03:19 crc kubenswrapper[4681]: E1123 07:03:19.483362 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc dns-swift-storage-0 kube-api-access-lczc8 openstack-edpm-ipam ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-7f84b549df-l8cnk" podUID="5dcb31c3-e43b-46a9-85c1-531c6a4866ba" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.499156 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "7e93be3c-dcb6-4105-868c-645d5c8c7bd0" (UID: "7e93be3c-dcb6-4105-868c-645d5c8c7bd0"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.500844 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lczc8\" (UniqueName: \"kubernetes.io/projected/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-kube-api-access-lczc8\") pod \"dnsmasq-dns-7f84b549df-l8cnk\" (UID: \"5dcb31c3-e43b-46a9-85c1-531c6a4866ba\") " pod="openstack/dnsmasq-dns-7f84b549df-l8cnk" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.501046 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-openstack-edpm-ipam\") pod \"dnsmasq-dns-7f84b549df-l8cnk\" (UID: \"5dcb31c3-e43b-46a9-85c1-531c6a4866ba\") " pod="openstack/dnsmasq-dns-7f84b549df-l8cnk" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.501163 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-ovsdbserver-nb\") pod \"dnsmasq-dns-7f84b549df-l8cnk\" (UID: \"5dcb31c3-e43b-46a9-85c1-531c6a4866ba\") " pod="openstack/dnsmasq-dns-7f84b549df-l8cnk" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.501292 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-dns-swift-storage-0\") pod \"dnsmasq-dns-7f84b549df-l8cnk\" (UID: \"5dcb31c3-e43b-46a9-85c1-531c6a4866ba\") " pod="openstack/dnsmasq-dns-7f84b549df-l8cnk" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.501398 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-ovsdbserver-sb\") pod \"dnsmasq-dns-7f84b549df-l8cnk\" (UID: \"5dcb31c3-e43b-46a9-85c1-531c6a4866ba\") " pod="openstack/dnsmasq-dns-7f84b549df-l8cnk" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.501496 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-config\") pod \"dnsmasq-dns-7f84b549df-l8cnk\" (UID: \"5dcb31c3-e43b-46a9-85c1-531c6a4866ba\") " pod="openstack/dnsmasq-dns-7f84b549df-l8cnk" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.501598 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-dns-svc\") pod \"dnsmasq-dns-7f84b549df-l8cnk\" (UID: \"5dcb31c3-e43b-46a9-85c1-531c6a4866ba\") " pod="openstack/dnsmasq-dns-7f84b549df-l8cnk" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.501715 4681 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7e93be3c-dcb6-4105-868c-645d5c8c7bd0-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.502544 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-ovsdbserver-sb\") pod \"dnsmasq-dns-7f84b549df-l8cnk\" (UID: \"5dcb31c3-e43b-46a9-85c1-531c6a4866ba\") " pod="openstack/dnsmasq-dns-7f84b549df-l8cnk" Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.502834 4681 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-config\") pod \"dnsmasq-dns-7f84b549df-l8cnk\" (UID: \"5dcb31c3-e43b-46a9-85c1-531c6a4866ba\") " pod="openstack/dnsmasq-dns-7f84b549df-l8cnk"
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.503413 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-openstack-edpm-ipam\") pod \"dnsmasq-dns-7f84b549df-l8cnk\" (UID: \"5dcb31c3-e43b-46a9-85c1-531c6a4866ba\") " pod="openstack/dnsmasq-dns-7f84b549df-l8cnk"
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.505765 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-dns-swift-storage-0\") pod \"dnsmasq-dns-7f84b549df-l8cnk\" (UID: \"5dcb31c3-e43b-46a9-85c1-531c6a4866ba\") " pod="openstack/dnsmasq-dns-7f84b549df-l8cnk"
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.506321 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-ovsdbserver-nb\") pod \"dnsmasq-dns-7f84b549df-l8cnk\" (UID: \"5dcb31c3-e43b-46a9-85c1-531c6a4866ba\") " pod="openstack/dnsmasq-dns-7f84b549df-l8cnk"
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.508286 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-dns-svc\") pod \"dnsmasq-dns-7f84b549df-l8cnk\" (UID: \"5dcb31c3-e43b-46a9-85c1-531c6a4866ba\") " pod="openstack/dnsmasq-dns-7f84b549df-l8cnk"
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.509086 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-597f78bb47-rlqf8"]
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.511054 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-597f78bb47-rlqf8"
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.520837 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lczc8\" (UniqueName: \"kubernetes.io/projected/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-kube-api-access-lczc8\") pod \"dnsmasq-dns-7f84b549df-l8cnk\" (UID: \"5dcb31c3-e43b-46a9-85c1-531c6a4866ba\") " pod="openstack/dnsmasq-dns-7f84b549df-l8cnk"
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.526026 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-597f78bb47-rlqf8"]
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.648520 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.729167 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"6e2ff794-284c-406f-a815-9efec112c044\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") "
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.729276 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24tjj\" (UniqueName: \"kubernetes.io/projected/6e2ff794-284c-406f-a815-9efec112c044-kube-api-access-24tjj\") pod \"6e2ff794-284c-406f-a815-9efec112c044\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") "
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.729418 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6e2ff794-284c-406f-a815-9efec112c044-rabbitmq-plugins\") pod \"6e2ff794-284c-406f-a815-9efec112c044\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") "
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.729486 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6e2ff794-284c-406f-a815-9efec112c044-erlang-cookie-secret\") pod \"6e2ff794-284c-406f-a815-9efec112c044\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") "
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.729612 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6e2ff794-284c-406f-a815-9efec112c044-rabbitmq-erlang-cookie\") pod \"6e2ff794-284c-406f-a815-9efec112c044\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") "
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.729662 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6e2ff794-284c-406f-a815-9efec112c044-rabbitmq-tls\") pod \"6e2ff794-284c-406f-a815-9efec112c044\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") "
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.729724 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6e2ff794-284c-406f-a815-9efec112c044-plugins-conf\") pod \"6e2ff794-284c-406f-a815-9efec112c044\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") "
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.729785 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6e2ff794-284c-406f-a815-9efec112c044-pod-info\") pod \"6e2ff794-284c-406f-a815-9efec112c044\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") "
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.729876 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6e2ff794-284c-406f-a815-9efec112c044-server-conf\") pod \"6e2ff794-284c-406f-a815-9efec112c044\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") "
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.729917 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6e2ff794-284c-406f-a815-9efec112c044-rabbitmq-confd\") pod \"6e2ff794-284c-406f-a815-9efec112c044\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") "
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.730111 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6e2ff794-284c-406f-a815-9efec112c044-config-data\") pod \"6e2ff794-284c-406f-a815-9efec112c044\" (UID: \"6e2ff794-284c-406f-a815-9efec112c044\") "
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.730738 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svbjl\" (UniqueName: \"kubernetes.io/projected/9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0-kube-api-access-svbjl\") pod \"dnsmasq-dns-597f78bb47-rlqf8\" (UID: \"9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0\") " pod="openstack/dnsmasq-dns-597f78bb47-rlqf8"
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.730891 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0-dns-svc\") pod \"dnsmasq-dns-597f78bb47-rlqf8\" (UID: \"9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0\") " pod="openstack/dnsmasq-dns-597f78bb47-rlqf8"
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.730949 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0-openstack-edpm-ipam\") pod \"dnsmasq-dns-597f78bb47-rlqf8\" (UID: \"9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0\") " pod="openstack/dnsmasq-dns-597f78bb47-rlqf8"
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.731022 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0-ovsdbserver-nb\") pod \"dnsmasq-dns-597f78bb47-rlqf8\" (UID: \"9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0\") " pod="openstack/dnsmasq-dns-597f78bb47-rlqf8"
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.731055 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0-dns-swift-storage-0\") pod \"dnsmasq-dns-597f78bb47-rlqf8\" (UID: \"9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0\") " pod="openstack/dnsmasq-dns-597f78bb47-rlqf8"
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.731079 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0-config\") pod \"dnsmasq-dns-597f78bb47-rlqf8\" (UID: \"9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0\") " pod="openstack/dnsmasq-dns-597f78bb47-rlqf8"
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.731106 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0-ovsdbserver-sb\") pod \"dnsmasq-dns-597f78bb47-rlqf8\" (UID: \"9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0\") " pod="openstack/dnsmasq-dns-597f78bb47-rlqf8"
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.732687 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e2ff794-284c-406f-a815-9efec112c044-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "6e2ff794-284c-406f-a815-9efec112c044" (UID: "6e2ff794-284c-406f-a815-9efec112c044"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.734665 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e2ff794-284c-406f-a815-9efec112c044-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "6e2ff794-284c-406f-a815-9efec112c044" (UID: "6e2ff794-284c-406f-a815-9efec112c044"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.742614 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e2ff794-284c-406f-a815-9efec112c044-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "6e2ff794-284c-406f-a815-9efec112c044" (UID: "6e2ff794-284c-406f-a815-9efec112c044"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.748792 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e2ff794-284c-406f-a815-9efec112c044-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "6e2ff794-284c-406f-a815-9efec112c044" (UID: "6e2ff794-284c-406f-a815-9efec112c044"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.756322 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "6e2ff794-284c-406f-a815-9efec112c044" (UID: "6e2ff794-284c-406f-a815-9efec112c044"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.756471 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/6e2ff794-284c-406f-a815-9efec112c044-pod-info" (OuterVolumeSpecName: "pod-info") pod "6e2ff794-284c-406f-a815-9efec112c044" (UID: "6e2ff794-284c-406f-a815-9efec112c044"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.756616 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e2ff794-284c-406f-a815-9efec112c044-kube-api-access-24tjj" (OuterVolumeSpecName: "kube-api-access-24tjj") pod "6e2ff794-284c-406f-a815-9efec112c044" (UID: "6e2ff794-284c-406f-a815-9efec112c044"). InnerVolumeSpecName "kube-api-access-24tjj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.761131 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e2ff794-284c-406f-a815-9efec112c044-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "6e2ff794-284c-406f-a815-9efec112c044" (UID: "6e2ff794-284c-406f-a815-9efec112c044"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.800474 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e2ff794-284c-406f-a815-9efec112c044-config-data" (OuterVolumeSpecName: "config-data") pod "6e2ff794-284c-406f-a815-9efec112c044" (UID: "6e2ff794-284c-406f-a815-9efec112c044"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.835484 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0-ovsdbserver-nb\") pod \"dnsmasq-dns-597f78bb47-rlqf8\" (UID: \"9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0\") " pod="openstack/dnsmasq-dns-597f78bb47-rlqf8"
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.835563 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0-dns-swift-storage-0\") pod \"dnsmasq-dns-597f78bb47-rlqf8\" (UID: \"9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0\") " pod="openstack/dnsmasq-dns-597f78bb47-rlqf8"
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.835610 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0-config\") pod \"dnsmasq-dns-597f78bb47-rlqf8\" (UID: \"9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0\") " pod="openstack/dnsmasq-dns-597f78bb47-rlqf8"
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.835641 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0-ovsdbserver-sb\") pod \"dnsmasq-dns-597f78bb47-rlqf8\" (UID: \"9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0\") " pod="openstack/dnsmasq-dns-597f78bb47-rlqf8"
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.835685 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svbjl\" (UniqueName: \"kubernetes.io/projected/9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0-kube-api-access-svbjl\") pod \"dnsmasq-dns-597f78bb47-rlqf8\" (UID: \"9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0\") " pod="openstack/dnsmasq-dns-597f78bb47-rlqf8"
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.835784 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0-dns-svc\") pod \"dnsmasq-dns-597f78bb47-rlqf8\" (UID: \"9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0\") " pod="openstack/dnsmasq-dns-597f78bb47-rlqf8"
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.835831 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0-openstack-edpm-ipam\") pod \"dnsmasq-dns-597f78bb47-rlqf8\" (UID: \"9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0\") " pod="openstack/dnsmasq-dns-597f78bb47-rlqf8"
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.835927 4681 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" "
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.835950 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24tjj\" (UniqueName: \"kubernetes.io/projected/6e2ff794-284c-406f-a815-9efec112c044-kube-api-access-24tjj\") on node \"crc\" DevicePath \"\""
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.835968 4681 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6e2ff794-284c-406f-a815-9efec112c044-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.835982 4681 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6e2ff794-284c-406f-a815-9efec112c044-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.835994 4681 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6e2ff794-284c-406f-a815-9efec112c044-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.836003 4681 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6e2ff794-284c-406f-a815-9efec112c044-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.836011 4681 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6e2ff794-284c-406f-a815-9efec112c044-plugins-conf\") on node \"crc\" DevicePath \"\""
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.836021 4681 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6e2ff794-284c-406f-a815-9efec112c044-pod-info\") on node \"crc\" DevicePath \"\""
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.836031 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6e2ff794-284c-406f-a815-9efec112c044-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.836622 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0-dns-swift-storage-0\") pod \"dnsmasq-dns-597f78bb47-rlqf8\" (UID: \"9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0\") " pod="openstack/dnsmasq-dns-597f78bb47-rlqf8"
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.837220 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0-ovsdbserver-nb\") pod \"dnsmasq-dns-597f78bb47-rlqf8\" (UID: \"9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0\") " pod="openstack/dnsmasq-dns-597f78bb47-rlqf8"
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.838095 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0-config\") pod \"dnsmasq-dns-597f78bb47-rlqf8\" (UID: \"9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0\") " pod="openstack/dnsmasq-dns-597f78bb47-rlqf8"
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.838429 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0-ovsdbserver-sb\") pod \"dnsmasq-dns-597f78bb47-rlqf8\" (UID: \"9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0\") " pod="openstack/dnsmasq-dns-597f78bb47-rlqf8"
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.838693 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0-dns-svc\") pod \"dnsmasq-dns-597f78bb47-rlqf8\" (UID: \"9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0\") " pod="openstack/dnsmasq-dns-597f78bb47-rlqf8"
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.839093 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0-openstack-edpm-ipam\") pod \"dnsmasq-dns-597f78bb47-rlqf8\" (UID: \"9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0\") " pod="openstack/dnsmasq-dns-597f78bb47-rlqf8"
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.852648 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e2ff794-284c-406f-a815-9efec112c044-server-conf" (OuterVolumeSpecName: "server-conf") pod "6e2ff794-284c-406f-a815-9efec112c044" (UID: "6e2ff794-284c-406f-a815-9efec112c044"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.858894 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svbjl\" (UniqueName: \"kubernetes.io/projected/9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0-kube-api-access-svbjl\") pod \"dnsmasq-dns-597f78bb47-rlqf8\" (UID: \"9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0\") " pod="openstack/dnsmasq-dns-597f78bb47-rlqf8"
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.884679 4681 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc"
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.937933 4681 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\""
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.938178 4681 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6e2ff794-284c-406f-a815-9efec112c044-server-conf\") on node \"crc\" DevicePath \"\""
Nov 23 07:03:19 crc kubenswrapper[4681]: I1123 07:03:19.991505 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e2ff794-284c-406f-a815-9efec112c044-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "6e2ff794-284c-406f-a815-9efec112c044" (UID: "6e2ff794-284c-406f-a815-9efec112c044"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.041274 4681 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6e2ff794-284c-406f-a815-9efec112c044-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.142322 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-597f78bb47-rlqf8"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.156058 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.156578 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.156703 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f84b549df-l8cnk"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.156978 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6e2ff794-284c-406f-a815-9efec112c044","Type":"ContainerDied","Data":"19a56aa27c4b87fe8c83bd0ac06d5484b097b4e704d9fab5a438039e97e589c2"}
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.157032 4681 scope.go:117] "RemoveContainer" containerID="f971ee3492a2f5eef0fd4413c18e2866d5d8f50d0960e4292e7667e8ea5ec95e"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.181141 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f84b549df-l8cnk"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.200353 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.208224 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.220971 4681 scope.go:117] "RemoveContainer" containerID="64896ff51779c881bb9362fcb20885bfa0830579b3c1525ff8fb8d8cb254da13"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.222042 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.235228 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.241559 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 23 07:03:20 crc kubenswrapper[4681]: E1123 07:03:20.242019 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e2ff794-284c-406f-a815-9efec112c044" containerName="rabbitmq"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.242040 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e2ff794-284c-406f-a815-9efec112c044" containerName="rabbitmq"
Nov 23 07:03:20 crc kubenswrapper[4681]: E1123 07:03:20.242055 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e2ff794-284c-406f-a815-9efec112c044" containerName="setup-container"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.242061 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e2ff794-284c-406f-a815-9efec112c044" containerName="setup-container"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.242274 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e2ff794-284c-406f-a815-9efec112c044" containerName="rabbitmq"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.243251 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.247647 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-openstack-edpm-ipam\") pod \"5dcb31c3-e43b-46a9-85c1-531c6a4866ba\" (UID: \"5dcb31c3-e43b-46a9-85c1-531c6a4866ba\") "
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.247702 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-ovsdbserver-sb\") pod \"5dcb31c3-e43b-46a9-85c1-531c6a4866ba\" (UID: \"5dcb31c3-e43b-46a9-85c1-531c6a4866ba\") "
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.247764 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-dns-swift-storage-0\") pod \"5dcb31c3-e43b-46a9-85c1-531c6a4866ba\" (UID: \"5dcb31c3-e43b-46a9-85c1-531c6a4866ba\") "
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.247815 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lczc8\" (UniqueName: \"kubernetes.io/projected/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-kube-api-access-lczc8\") pod \"5dcb31c3-e43b-46a9-85c1-531c6a4866ba\" (UID: \"5dcb31c3-e43b-46a9-85c1-531c6a4866ba\") "
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.247889 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-config\") pod \"5dcb31c3-e43b-46a9-85c1-531c6a4866ba\" (UID: \"5dcb31c3-e43b-46a9-85c1-531c6a4866ba\") "
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.248060 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-dns-svc\") pod \"5dcb31c3-e43b-46a9-85c1-531c6a4866ba\" (UID: \"5dcb31c3-e43b-46a9-85c1-531c6a4866ba\") "
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.249313 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-xqkwc"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.249514 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.249635 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.249763 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.250081 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.250345 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5dcb31c3-e43b-46a9-85c1-531c6a4866ba" (UID: "5dcb31c3-e43b-46a9-85c1-531c6a4866ba"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.250805 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "5dcb31c3-e43b-46a9-85c1-531c6a4866ba" (UID: "5dcb31c3-e43b-46a9-85c1-531c6a4866ba"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.251604 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5dcb31c3-e43b-46a9-85c1-531c6a4866ba" (UID: "5dcb31c3-e43b-46a9-85c1-531c6a4866ba"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.251935 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-config" (OuterVolumeSpecName: "config") pod "5dcb31c3-e43b-46a9-85c1-531c6a4866ba" (UID: "5dcb31c3-e43b-46a9-85c1-531c6a4866ba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.253892 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5dcb31c3-e43b-46a9-85c1-531c6a4866ba" (UID: "5dcb31c3-e43b-46a9-85c1-531c6a4866ba"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.254059 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-ovsdbserver-nb\") pod \"5dcb31c3-e43b-46a9-85c1-531c6a4866ba\" (UID: \"5dcb31c3-e43b-46a9-85c1-531c6a4866ba\") "
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.256437 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5dcb31c3-e43b-46a9-85c1-531c6a4866ba" (UID: "5dcb31c3-e43b-46a9-85c1-531c6a4866ba"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.258698 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-kube-api-access-lczc8" (OuterVolumeSpecName: "kube-api-access-lczc8") pod "5dcb31c3-e43b-46a9-85c1-531c6a4866ba" (UID: "5dcb31c3-e43b-46a9-85c1-531c6a4866ba"). InnerVolumeSpecName "kube-api-access-lczc8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.259002 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-config\") on node \"crc\" DevicePath \"\""
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.259022 4681 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.259031 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.259040 4681 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.259051 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.259058 4681 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.268264 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.268487 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.283746 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.290964 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.294318 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.294569 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-n52w4"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.294686 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.294801 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.294914 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.300413 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.300498 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.303403 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.315307 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.360916 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/04d51566-75c4-4ebe-a907-2941703a952e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"04d51566-75c4-4ebe-a907-2941703a952e\") " pod="openstack/rabbitmq-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.360992 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/696ebc04-4784-4c41-afa2-5ed315cd25e7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"696ebc04-4784-4c41-afa2-5ed315cd25e7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.361031 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/696ebc04-4784-4c41-afa2-5ed315cd25e7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"696ebc04-4784-4c41-afa2-5ed315cd25e7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.361066 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/696ebc04-4784-4c41-afa2-5ed315cd25e7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"696ebc04-4784-4c41-afa2-5ed315cd25e7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.361098 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/04d51566-75c4-4ebe-a907-2941703a952e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"04d51566-75c4-4ebe-a907-2941703a952e\") " pod="openstack/rabbitmq-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.361223 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/696ebc04-4784-4c41-afa2-5ed315cd25e7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"696ebc04-4784-4c41-afa2-5ed315cd25e7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.361287 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/04d51566-75c4-4ebe-a907-2941703a952e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"04d51566-75c4-4ebe-a907-2941703a952e\") " pod="openstack/rabbitmq-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.361327 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/696ebc04-4784-4c41-afa2-5ed315cd25e7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"696ebc04-4784-4c41-afa2-5ed315cd25e7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.361347 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/04d51566-75c4-4ebe-a907-2941703a952e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"04d51566-75c4-4ebe-a907-2941703a952e\") " pod="openstack/rabbitmq-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.361396 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/04d51566-75c4-4ebe-a907-2941703a952e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"04d51566-75c4-4ebe-a907-2941703a952e\") " pod="openstack/rabbitmq-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.361432 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/696ebc04-4784-4c41-afa2-5ed315cd25e7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"696ebc04-4784-4c41-afa2-5ed315cd25e7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.361448 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sb7j\" (UniqueName: \"kubernetes.io/projected/696ebc04-4784-4c41-afa2-5ed315cd25e7-kube-api-access-2sb7j\") pod \"rabbitmq-cell1-server-0\" (UID: \"696ebc04-4784-4c41-afa2-5ed315cd25e7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.361485 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/04d51566-75c4-4ebe-a907-2941703a952e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"04d51566-75c4-4ebe-a907-2941703a952e\") " pod="openstack/rabbitmq-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.361512 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/04d51566-75c4-4ebe-a907-2941703a952e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"04d51566-75c4-4ebe-a907-2941703a952e\") " pod="openstack/rabbitmq-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.361613 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/696ebc04-4784-4c41-afa2-5ed315cd25e7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"696ebc04-4784-4c41-afa2-5ed315cd25e7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.361650 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/696ebc04-4784-4c41-afa2-5ed315cd25e7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"696ebc04-4784-4c41-afa2-5ed315cd25e7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.361686 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/696ebc04-4784-4c41-afa2-5ed315cd25e7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"696ebc04-4784-4c41-afa2-5ed315cd25e7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.361716 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"04d51566-75c4-4ebe-a907-2941703a952e\") " pod="openstack/rabbitmq-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.361767 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/04d51566-75c4-4ebe-a907-2941703a952e-config-data\") pod \"rabbitmq-server-0\" (UID: \"04d51566-75c4-4ebe-a907-2941703a952e\") " pod="openstack/rabbitmq-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.361864 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7lqp\" (UniqueName: \"kubernetes.io/projected/04d51566-75c4-4ebe-a907-2941703a952e-kube-api-access-c7lqp\") pod \"rabbitmq-server-0\" (UID: \"04d51566-75c4-4ebe-a907-2941703a952e\") " pod="openstack/rabbitmq-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.361889 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"696ebc04-4784-4c41-afa2-5ed315cd25e7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.361919 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/04d51566-75c4-4ebe-a907-2941703a952e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"04d51566-75c4-4ebe-a907-2941703a952e\") " pod="openstack/rabbitmq-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.361998 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lczc8\" (UniqueName: \"kubernetes.io/projected/5dcb31c3-e43b-46a9-85c1-531c6a4866ba-kube-api-access-lczc8\") on node \"crc\" DevicePath \"\""
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.465114 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/696ebc04-4784-4c41-afa2-5ed315cd25e7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"696ebc04-4784-4c41-afa2-5ed315cd25e7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.465171 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/696ebc04-4784-4c41-afa2-5ed315cd25e7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"696ebc04-4784-4c41-afa2-5ed315cd25e7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.465207 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/696ebc04-4784-4c41-afa2-5ed315cd25e7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"696ebc04-4784-4c41-afa2-5ed315cd25e7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.465235 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"04d51566-75c4-4ebe-a907-2941703a952e\") " pod="openstack/rabbitmq-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.465262 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/04d51566-75c4-4ebe-a907-2941703a952e-config-data\") pod \"rabbitmq-server-0\" (UID: \"04d51566-75c4-4ebe-a907-2941703a952e\") " pod="openstack/rabbitmq-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.465293 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7lqp\" (UniqueName: \"kubernetes.io/projected/04d51566-75c4-4ebe-a907-2941703a952e-kube-api-access-c7lqp\") pod \"rabbitmq-server-0\" (UID: \"04d51566-75c4-4ebe-a907-2941703a952e\") " pod="openstack/rabbitmq-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.465313 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"696ebc04-4784-4c41-afa2-5ed315cd25e7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.465338 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/04d51566-75c4-4ebe-a907-2941703a952e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"04d51566-75c4-4ebe-a907-2941703a952e\") " pod="openstack/rabbitmq-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.465363 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/04d51566-75c4-4ebe-a907-2941703a952e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"04d51566-75c4-4ebe-a907-2941703a952e\") " pod="openstack/rabbitmq-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.465399 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/696ebc04-4784-4c41-afa2-5ed315cd25e7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"696ebc04-4784-4c41-afa2-5ed315cd25e7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.465427 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/696ebc04-4784-4c41-afa2-5ed315cd25e7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"696ebc04-4784-4c41-afa2-5ed315cd25e7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.465454 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/696ebc04-4784-4c41-afa2-5ed315cd25e7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"696ebc04-4784-4c41-afa2-5ed315cd25e7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.465939 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/696ebc04-4784-4c41-afa2-5ed315cd25e7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"696ebc04-4784-4c41-afa2-5ed315cd25e7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.466268 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/04d51566-75c4-4ebe-a907-2941703a952e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"04d51566-75c4-4ebe-a907-2941703a952e\") " pod="openstack/rabbitmq-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.466302 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/696ebc04-4784-4c41-afa2-5ed315cd25e7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"696ebc04-4784-4c41-afa2-5ed315cd25e7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.466333 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/04d51566-75c4-4ebe-a907-2941703a952e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"04d51566-75c4-4ebe-a907-2941703a952e\") " pod="openstack/rabbitmq-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.466353 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/696ebc04-4784-4c41-afa2-5ed315cd25e7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"696ebc04-4784-4c41-afa2-5ed315cd25e7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.466372 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/04d51566-75c4-4ebe-a907-2941703a952e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"04d51566-75c4-4ebe-a907-2941703a952e\") " pod="openstack/rabbitmq-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.466393 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/04d51566-75c4-4ebe-a907-2941703a952e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"04d51566-75c4-4ebe-a907-2941703a952e\") " pod="openstack/rabbitmq-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.466417 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/696ebc04-4784-4c41-afa2-5ed315cd25e7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"696ebc04-4784-4c41-afa2-5ed315cd25e7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.466435 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sb7j\" (UniqueName: \"kubernetes.io/projected/696ebc04-4784-4c41-afa2-5ed315cd25e7-kube-api-access-2sb7j\") pod \"rabbitmq-cell1-server-0\" (UID: \"696ebc04-4784-4c41-afa2-5ed315cd25e7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.466472 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/04d51566-75c4-4ebe-a907-2941703a952e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"04d51566-75c4-4ebe-a907-2941703a952e\") " pod="openstack/rabbitmq-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.466507 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/04d51566-75c4-4ebe-a907-2941703a952e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"04d51566-75c4-4ebe-a907-2941703a952e\") " pod="openstack/rabbitmq-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.467226 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/04d51566-75c4-4ebe-a907-2941703a952e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"04d51566-75c4-4ebe-a907-2941703a952e\") " pod="openstack/rabbitmq-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.473713 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/696ebc04-4784-4c41-afa2-5ed315cd25e7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"696ebc04-4784-4c41-afa2-5ed315cd25e7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.476152 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/696ebc04-4784-4c41-afa2-5ed315cd25e7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"696ebc04-4784-4c41-afa2-5ed315cd25e7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.477689 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/04d51566-75c4-4ebe-a907-2941703a952e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"04d51566-75c4-4ebe-a907-2941703a952e\") " pod="openstack/rabbitmq-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.485183 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/696ebc04-4784-4c41-afa2-5ed315cd25e7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"696ebc04-4784-4c41-afa2-5ed315cd25e7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.485250 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"04d51566-75c4-4ebe-a907-2941703a952e\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.485327 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"696ebc04-4784-4c41-afa2-5ed315cd25e7\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.486849 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/696ebc04-4784-4c41-afa2-5ed315cd25e7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"696ebc04-4784-4c41-afa2-5ed315cd25e7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.487387 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/04d51566-75c4-4ebe-a907-2941703a952e-config-data\") pod \"rabbitmq-server-0\" (UID: \"04d51566-75c4-4ebe-a907-2941703a952e\") " pod="openstack/rabbitmq-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.487689 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/04d51566-75c4-4ebe-a907-2941703a952e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"04d51566-75c4-4ebe-a907-2941703a952e\") " pod="openstack/rabbitmq-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.497018 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/04d51566-75c4-4ebe-a907-2941703a952e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"04d51566-75c4-4ebe-a907-2941703a952e\") " pod="openstack/rabbitmq-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.498285 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/04d51566-75c4-4ebe-a907-2941703a952e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"04d51566-75c4-4ebe-a907-2941703a952e\") " pod="openstack/rabbitmq-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.498644 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/696ebc04-4784-4c41-afa2-5ed315cd25e7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"696ebc04-4784-4c41-afa2-5ed315cd25e7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.502051 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sb7j\" (UniqueName: \"kubernetes.io/projected/696ebc04-4784-4c41-afa2-5ed315cd25e7-kube-api-access-2sb7j\") pod \"rabbitmq-cell1-server-0\" (UID: \"696ebc04-4784-4c41-afa2-5ed315cd25e7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.504886 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/04d51566-75c4-4ebe-a907-2941703a952e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"04d51566-75c4-4ebe-a907-2941703a952e\") " pod="openstack/rabbitmq-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.508035 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/04d51566-75c4-4ebe-a907-2941703a952e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"04d51566-75c4-4ebe-a907-2941703a952e\") " pod="openstack/rabbitmq-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.525242 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/696ebc04-4784-4c41-afa2-5ed315cd25e7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"696ebc04-4784-4c41-afa2-5ed315cd25e7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.527131 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/696ebc04-4784-4c41-afa2-5ed315cd25e7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"696ebc04-4784-4c41-afa2-5ed315cd25e7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.530478 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/696ebc04-4784-4c41-afa2-5ed315cd25e7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"696ebc04-4784-4c41-afa2-5ed315cd25e7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.543156 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7lqp\" (UniqueName: \"kubernetes.io/projected/04d51566-75c4-4ebe-a907-2941703a952e-kube-api-access-c7lqp\") pod \"rabbitmq-server-0\" (UID: \"04d51566-75c4-4ebe-a907-2941703a952e\") " pod="openstack/rabbitmq-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.543547 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/04d51566-75c4-4ebe-a907-2941703a952e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"04d51566-75c4-4ebe-a907-2941703a952e\") " pod="openstack/rabbitmq-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.580385 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"696ebc04-4784-4c41-afa2-5ed315cd25e7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.597361 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"04d51566-75c4-4ebe-a907-2941703a952e\") " pod="openstack/rabbitmq-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.615736 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.741109 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-597f78bb47-rlqf8"]
Nov 23 07:03:20 crc kubenswrapper[4681]: I1123 07:03:20.869690 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:03:21 crc kubenswrapper[4681]: I1123 07:03:21.107907 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 23 07:03:21 crc kubenswrapper[4681]: W1123 07:03:21.135065 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04d51566_75c4_4ebe_a907_2941703a952e.slice/crio-c1cb95aff0c727a45bfd57e8d9bd6e381789bdb84cdfcc17750a68be9d5fb813 WatchSource:0}: Error finding container c1cb95aff0c727a45bfd57e8d9bd6e381789bdb84cdfcc17750a68be9d5fb813: Status 404 returned error can't find the container with id c1cb95aff0c727a45bfd57e8d9bd6e381789bdb84cdfcc17750a68be9d5fb813
Nov 23 07:03:21 crc kubenswrapper[4681]: I1123 07:03:21.186398 4681 generic.go:334] "Generic (PLEG): container finished" podID="9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0" containerID="3a2b8bc64257c2f7edaa781f9637381198b8c63084cbf80da7cd012435302786" exitCode=0
Nov 23 07:03:21 crc kubenswrapper[4681]: I1123 07:03:21.186482 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-597f78bb47-rlqf8" event={"ID":"9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0","Type":"ContainerDied","Data":"3a2b8bc64257c2f7edaa781f9637381198b8c63084cbf80da7cd012435302786"}
Nov 23 07:03:21 crc kubenswrapper[4681]: I1123 07:03:21.186512 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-597f78bb47-rlqf8" event={"ID":"9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0","Type":"ContainerStarted","Data":"250a1c7f204773d6e965c6e45a8d12e0033fc9a9fa2efe70e777db37693e4dfd"}
Nov 23 07:03:21 crc kubenswrapper[4681]: I1123 07:03:21.190417 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f84b549df-l8cnk"
Nov 23 07:03:21 crc kubenswrapper[4681]: I1123 07:03:21.192811 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"04d51566-75c4-4ebe-a907-2941703a952e","Type":"ContainerStarted","Data":"c1cb95aff0c727a45bfd57e8d9bd6e381789bdb84cdfcc17750a68be9d5fb813"}
Nov 23 07:03:21 crc kubenswrapper[4681]: I1123 07:03:21.273497 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e2ff794-284c-406f-a815-9efec112c044" path="/var/lib/kubelet/pods/6e2ff794-284c-406f-a815-9efec112c044/volumes"
Nov 23 07:03:21 crc kubenswrapper[4681]: I1123 07:03:21.275018 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e93be3c-dcb6-4105-868c-645d5c8c7bd0" path="/var/lib/kubelet/pods/7e93be3c-dcb6-4105-868c-645d5c8c7bd0/volumes"
Nov 23 07:03:21 crc kubenswrapper[4681]: I1123 07:03:21.276523 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f84b549df-l8cnk"]
Nov 23 07:03:21 crc kubenswrapper[4681]: I1123 07:03:21.276555 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f84b549df-l8cnk"]
Nov 23 07:03:21 crc kubenswrapper[4681]: I1123 07:03:21.424837 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 23 07:03:21 crc kubenswrapper[4681]: W1123 07:03:21.428765 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod696ebc04_4784_4c41_afa2_5ed315cd25e7.slice/crio-8f2faab434e475988fd2fb5eef15103c2e1188a87410662ba09797a357d52661 WatchSource:0}: Error finding container 8f2faab434e475988fd2fb5eef15103c2e1188a87410662ba09797a357d52661: Status 404 returned error can't find the container with id 8f2faab434e475988fd2fb5eef15103c2e1188a87410662ba09797a357d52661
Nov 23 07:03:22 crc kubenswrapper[4681]: I1123 07:03:22.204084 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-597f78bb47-rlqf8" event={"ID":"9d0567c6-b6e7-4b54-9f6c-84a32dcc39e0","Type":"ContainerStarted","Data":"72d16711e9005ac049a8313fc1babcc83e9a794e221cd4a1d399ffb37c38a6a3"}
Nov 23 07:03:22 crc kubenswrapper[4681]: I1123 07:03:22.204543 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-597f78bb47-rlqf8"
Nov 23 07:03:22 crc kubenswrapper[4681]: I1123 07:03:22.206643 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"696ebc04-4784-4c41-afa2-5ed315cd25e7","Type":"ContainerStarted","Data":"8f2faab434e475988fd2fb5eef15103c2e1188a87410662ba09797a357d52661"}
Nov 23 07:03:22 crc kubenswrapper[4681]: I1123 07:03:22.230339 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-597f78bb47-rlqf8" podStartSLOduration=3.230323642 podStartE2EDuration="3.230323642s" podCreationTimestamp="2025-11-23 07:03:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:03:22.225131547 +0000 UTC m=+1139.294640784" watchObservedRunningTime="2025-11-23 07:03:22.230323642 +0000 UTC m=+1139.299832878"
Nov 23 07:03:23 crc kubenswrapper[4681]: I1123 07:03:23.218756 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"04d51566-75c4-4ebe-a907-2941703a952e","Type":"ContainerStarted","Data":"ce122af2053ebdffd32c9a54be913f821785ca1b6a6377130c34639929ec60b9"}
Nov 23 07:03:23 crc kubenswrapper[4681]: I1123 07:03:23.222145 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"696ebc04-4784-4c41-afa2-5ed315cd25e7","Type":"ContainerStarted","Data":"fc371f532998f69c3b5d7c93763d5207b34e01f23192884f3589a5a74b1618e6"}
Nov 23 07:03:23 crc kubenswrapper[4681]: I1123 07:03:23.300692 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dcb31c3-e43b-46a9-85c1-531c6a4866ba" path="/var/lib/kubelet/pods/5dcb31c3-e43b-46a9-85c1-531c6a4866ba/volumes"
Nov 23 07:03:30 crc kubenswrapper[4681]: I1123 07:03:30.144436 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-597f78bb47-rlqf8"
Nov 23 07:03:30 crc kubenswrapper[4681]: I1123 07:03:30.205182 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7447b889c5-q9wld"]
Nov 23 07:03:30 crc kubenswrapper[4681]: I1123 07:03:30.205442 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7447b889c5-q9wld" podUID="2c2819f3-3efa-41bf-8168-4958cf2bcd15" containerName="dnsmasq-dns" containerID="cri-o://ce2f3a65421e406710ad7d2f138de92205da051cdef6e405d14f7a9c5d9abfb2" gracePeriod=10
Nov 23 07:03:30 crc kubenswrapper[4681]: I1123 07:03:30.623726 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7447b889c5-q9wld" Nov 23 07:03:30 crc kubenswrapper[4681]: I1123 07:03:30.729133 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c2819f3-3efa-41bf-8168-4958cf2bcd15-ovsdbserver-nb\") pod \"2c2819f3-3efa-41bf-8168-4958cf2bcd15\" (UID: \"2c2819f3-3efa-41bf-8168-4958cf2bcd15\") " Nov 23 07:03:30 crc kubenswrapper[4681]: I1123 07:03:30.729319 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c2819f3-3efa-41bf-8168-4958cf2bcd15-ovsdbserver-sb\") pod \"2c2819f3-3efa-41bf-8168-4958cf2bcd15\" (UID: \"2c2819f3-3efa-41bf-8168-4958cf2bcd15\") " Nov 23 07:03:30 crc kubenswrapper[4681]: I1123 07:03:30.729408 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c2819f3-3efa-41bf-8168-4958cf2bcd15-dns-svc\") pod \"2c2819f3-3efa-41bf-8168-4958cf2bcd15\" (UID: \"2c2819f3-3efa-41bf-8168-4958cf2bcd15\") " Nov 23 07:03:30 crc kubenswrapper[4681]: I1123 07:03:30.729477 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzzx5\" (UniqueName: \"kubernetes.io/projected/2c2819f3-3efa-41bf-8168-4958cf2bcd15-kube-api-access-nzzx5\") pod \"2c2819f3-3efa-41bf-8168-4958cf2bcd15\" (UID: \"2c2819f3-3efa-41bf-8168-4958cf2bcd15\") " Nov 23 07:03:30 crc kubenswrapper[4681]: I1123 07:03:30.729512 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c2819f3-3efa-41bf-8168-4958cf2bcd15-dns-swift-storage-0\") pod \"2c2819f3-3efa-41bf-8168-4958cf2bcd15\" (UID: \"2c2819f3-3efa-41bf-8168-4958cf2bcd15\") " Nov 23 07:03:30 crc kubenswrapper[4681]: I1123 07:03:30.729639 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c2819f3-3efa-41bf-8168-4958cf2bcd15-config\") pod \"2c2819f3-3efa-41bf-8168-4958cf2bcd15\" (UID: \"2c2819f3-3efa-41bf-8168-4958cf2bcd15\") " Nov 23 07:03:30 crc kubenswrapper[4681]: I1123 07:03:30.740814 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c2819f3-3efa-41bf-8168-4958cf2bcd15-kube-api-access-nzzx5" (OuterVolumeSpecName: "kube-api-access-nzzx5") pod "2c2819f3-3efa-41bf-8168-4958cf2bcd15" (UID: "2c2819f3-3efa-41bf-8168-4958cf2bcd15"). InnerVolumeSpecName "kube-api-access-nzzx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:03:30 crc kubenswrapper[4681]: I1123 07:03:30.777198 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c2819f3-3efa-41bf-8168-4958cf2bcd15-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2c2819f3-3efa-41bf-8168-4958cf2bcd15" (UID: "2c2819f3-3efa-41bf-8168-4958cf2bcd15"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:03:30 crc kubenswrapper[4681]: I1123 07:03:30.778796 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c2819f3-3efa-41bf-8168-4958cf2bcd15-config" (OuterVolumeSpecName: "config") pod "2c2819f3-3efa-41bf-8168-4958cf2bcd15" (UID: "2c2819f3-3efa-41bf-8168-4958cf2bcd15"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:03:30 crc kubenswrapper[4681]: I1123 07:03:30.792909 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c2819f3-3efa-41bf-8168-4958cf2bcd15-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2c2819f3-3efa-41bf-8168-4958cf2bcd15" (UID: "2c2819f3-3efa-41bf-8168-4958cf2bcd15"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:03:30 crc kubenswrapper[4681]: I1123 07:03:30.801675 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c2819f3-3efa-41bf-8168-4958cf2bcd15-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2c2819f3-3efa-41bf-8168-4958cf2bcd15" (UID: "2c2819f3-3efa-41bf-8168-4958cf2bcd15"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:03:30 crc kubenswrapper[4681]: I1123 07:03:30.807914 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c2819f3-3efa-41bf-8168-4958cf2bcd15-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2c2819f3-3efa-41bf-8168-4958cf2bcd15" (UID: "2c2819f3-3efa-41bf-8168-4958cf2bcd15"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:03:30 crc kubenswrapper[4681]: I1123 07:03:30.833443 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c2819f3-3efa-41bf-8168-4958cf2bcd15-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 07:03:30 crc kubenswrapper[4681]: I1123 07:03:30.833486 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c2819f3-3efa-41bf-8168-4958cf2bcd15-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 07:03:30 crc kubenswrapper[4681]: I1123 07:03:30.833498 4681 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c2819f3-3efa-41bf-8168-4958cf2bcd15-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 07:03:30 crc kubenswrapper[4681]: I1123 07:03:30.833514 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzzx5\" (UniqueName: \"kubernetes.io/projected/2c2819f3-3efa-41bf-8168-4958cf2bcd15-kube-api-access-nzzx5\") on node \"crc\" DevicePath \"\"" Nov 23 07:03:30 crc kubenswrapper[4681]: I1123 07:03:30.833528 4681 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c2819f3-3efa-41bf-8168-4958cf2bcd15-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 07:03:30 crc kubenswrapper[4681]: I1123 07:03:30.833539 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c2819f3-3efa-41bf-8168-4958cf2bcd15-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:03:31 crc kubenswrapper[4681]: I1123 07:03:31.301987 4681 generic.go:334] "Generic (PLEG): container finished" podID="2c2819f3-3efa-41bf-8168-4958cf2bcd15" containerID="ce2f3a65421e406710ad7d2f138de92205da051cdef6e405d14f7a9c5d9abfb2" exitCode=0 Nov 23 07:03:31 crc kubenswrapper[4681]: I1123 07:03:31.302080 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7447b889c5-q9wld" event={"ID":"2c2819f3-3efa-41bf-8168-4958cf2bcd15","Type":"ContainerDied","Data":"ce2f3a65421e406710ad7d2f138de92205da051cdef6e405d14f7a9c5d9abfb2"} Nov 23 
07:03:31 crc kubenswrapper[4681]: I1123 07:03:31.302412 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7447b889c5-q9wld" event={"ID":"2c2819f3-3efa-41bf-8168-4958cf2bcd15","Type":"ContainerDied","Data":"727cd8d1121ba2b2004a1a6ff61a1790851b23649e88e6990e60d33178b78f6e"} Nov 23 07:03:31 crc kubenswrapper[4681]: I1123 07:03:31.302443 4681 scope.go:117] "RemoveContainer" containerID="ce2f3a65421e406710ad7d2f138de92205da051cdef6e405d14f7a9c5d9abfb2" Nov 23 07:03:31 crc kubenswrapper[4681]: I1123 07:03:31.302109 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7447b889c5-q9wld" Nov 23 07:03:31 crc kubenswrapper[4681]: I1123 07:03:31.333612 4681 scope.go:117] "RemoveContainer" containerID="3d76f96e0a0ba0b1ec4bd32cbf785edd18e7f79bffefd324e832bdeddb2feb12" Nov 23 07:03:31 crc kubenswrapper[4681]: I1123 07:03:31.339425 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7447b889c5-q9wld"] Nov 23 07:03:31 crc kubenswrapper[4681]: I1123 07:03:31.348833 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7447b889c5-q9wld"] Nov 23 07:03:31 crc kubenswrapper[4681]: I1123 07:03:31.367842 4681 scope.go:117] "RemoveContainer" containerID="ce2f3a65421e406710ad7d2f138de92205da051cdef6e405d14f7a9c5d9abfb2" Nov 23 07:03:31 crc kubenswrapper[4681]: E1123 07:03:31.368344 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce2f3a65421e406710ad7d2f138de92205da051cdef6e405d14f7a9c5d9abfb2\": container with ID starting with ce2f3a65421e406710ad7d2f138de92205da051cdef6e405d14f7a9c5d9abfb2 not found: ID does not exist" containerID="ce2f3a65421e406710ad7d2f138de92205da051cdef6e405d14f7a9c5d9abfb2" Nov 23 07:03:31 crc kubenswrapper[4681]: I1123 07:03:31.368382 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce2f3a65421e406710ad7d2f138de92205da051cdef6e405d14f7a9c5d9abfb2"} err="failed to get container status \"ce2f3a65421e406710ad7d2f138de92205da051cdef6e405d14f7a9c5d9abfb2\": rpc error: code = NotFound desc = could not find container \"ce2f3a65421e406710ad7d2f138de92205da051cdef6e405d14f7a9c5d9abfb2\": container with ID starting with ce2f3a65421e406710ad7d2f138de92205da051cdef6e405d14f7a9c5d9abfb2 not found: ID does not exist" Nov 23 07:03:31 crc kubenswrapper[4681]: I1123 07:03:31.368411 4681 scope.go:117] "RemoveContainer" containerID="3d76f96e0a0ba0b1ec4bd32cbf785edd18e7f79bffefd324e832bdeddb2feb12" Nov 23 07:03:31 crc kubenswrapper[4681]: E1123 07:03:31.368854 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d76f96e0a0ba0b1ec4bd32cbf785edd18e7f79bffefd324e832bdeddb2feb12\": container with ID starting with 3d76f96e0a0ba0b1ec4bd32cbf785edd18e7f79bffefd324e832bdeddb2feb12 not found: ID does not exist" containerID="3d76f96e0a0ba0b1ec4bd32cbf785edd18e7f79bffefd324e832bdeddb2feb12" Nov 23 07:03:31 crc kubenswrapper[4681]: I1123 07:03:31.368888 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d76f96e0a0ba0b1ec4bd32cbf785edd18e7f79bffefd324e832bdeddb2feb12"} err="failed to get container status \"3d76f96e0a0ba0b1ec4bd32cbf785edd18e7f79bffefd324e832bdeddb2feb12\": rpc error: code = NotFound desc = could not find container \"3d76f96e0a0ba0b1ec4bd32cbf785edd18e7f79bffefd324e832bdeddb2feb12\": container with ID starting 
with 3d76f96e0a0ba0b1ec4bd32cbf785edd18e7f79bffefd324e832bdeddb2feb12 not found: ID does not exist" Nov 23 07:03:33 crc kubenswrapper[4681]: I1123 07:03:33.262605 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c2819f3-3efa-41bf-8168-4958cf2bcd15" path="/var/lib/kubelet/pods/2c2819f3-3efa-41bf-8168-4958cf2bcd15/volumes" Nov 23 07:03:42 crc kubenswrapper[4681]: I1123 07:03:42.296014 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:03:42 crc kubenswrapper[4681]: I1123 07:03:42.296778 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:03:42 crc kubenswrapper[4681]: I1123 07:03:42.296853 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" Nov 23 07:03:42 crc kubenswrapper[4681]: I1123 07:03:42.297443 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6b6565a11ae3d1b82169df41e725361a82bf48f3c4a16c6cf3c1e136bf571ba8"} pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 07:03:42 crc kubenswrapper[4681]: I1123 07:03:42.297525 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" containerID="cri-o://6b6565a11ae3d1b82169df41e725361a82bf48f3c4a16c6cf3c1e136bf571ba8" gracePeriod=600 Nov 23 07:03:42 crc kubenswrapper[4681]: I1123 07:03:42.428289 4681 generic.go:334] "Generic (PLEG): container finished" podID="539dc58c-e752-43c8-bdef-af87528b76f3" containerID="6b6565a11ae3d1b82169df41e725361a82bf48f3c4a16c6cf3c1e136bf571ba8" exitCode=0 Nov 23 07:03:42 crc kubenswrapper[4681]: I1123 07:03:42.428550 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerDied","Data":"6b6565a11ae3d1b82169df41e725361a82bf48f3c4a16c6cf3c1e136bf571ba8"} Nov 23 07:03:42 crc kubenswrapper[4681]: I1123 07:03:42.428588 4681 scope.go:117] "RemoveContainer" containerID="caed7cef552031860d421f500f9694e60cb9adcf543f62d9378ea4360e6a8866" Nov 23 07:03:43 crc kubenswrapper[4681]: I1123 07:03:43.394245 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mbnzd"] Nov 23 07:03:43 crc kubenswrapper[4681]: E1123 07:03:43.394943 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c2819f3-3efa-41bf-8168-4958cf2bcd15" containerName="dnsmasq-dns" Nov 23 07:03:43 crc kubenswrapper[4681]: I1123 07:03:43.394958 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c2819f3-3efa-41bf-8168-4958cf2bcd15" containerName="dnsmasq-dns" Nov 23 07:03:43 crc kubenswrapper[4681]: E1123 07:03:43.394978 4681 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c2819f3-3efa-41bf-8168-4958cf2bcd15" containerName="init" Nov 23 07:03:43 crc kubenswrapper[4681]: I1123 07:03:43.394985 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c2819f3-3efa-41bf-8168-4958cf2bcd15" containerName="init" Nov 23 07:03:43 crc kubenswrapper[4681]: I1123 07:03:43.395200 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c2819f3-3efa-41bf-8168-4958cf2bcd15" containerName="dnsmasq-dns" Nov 23 07:03:43 crc kubenswrapper[4681]: I1123 07:03:43.395895 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mbnzd" Nov 23 07:03:43 crc kubenswrapper[4681]: I1123 07:03:43.399146 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 07:03:43 crc kubenswrapper[4681]: I1123 07:03:43.399315 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rchgk" Nov 23 07:03:43 crc kubenswrapper[4681]: I1123 07:03:43.400494 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 07:03:43 crc kubenswrapper[4681]: I1123 07:03:43.401326 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 07:03:43 crc kubenswrapper[4681]: I1123 07:03:43.408788 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mbnzd"] Nov 23 07:03:43 crc kubenswrapper[4681]: I1123 07:03:43.434118 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/55cf8c9f-5947-452d-bbfd-346971cdf8ac-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mbnzd\" (UID: \"55cf8c9f-5947-452d-bbfd-346971cdf8ac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mbnzd" Nov 23 07:03:43 crc kubenswrapper[4681]: I1123 07:03:43.434313 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55cf8c9f-5947-452d-bbfd-346971cdf8ac-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mbnzd\" (UID: \"55cf8c9f-5947-452d-bbfd-346971cdf8ac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mbnzd" Nov 23 07:03:43 crc kubenswrapper[4681]: I1123 07:03:43.434364 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcjk4\" (UniqueName: \"kubernetes.io/projected/55cf8c9f-5947-452d-bbfd-346971cdf8ac-kube-api-access-qcjk4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mbnzd\" (UID: \"55cf8c9f-5947-452d-bbfd-346971cdf8ac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mbnzd" Nov 23 07:03:43 crc kubenswrapper[4681]: I1123 07:03:43.434418 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55cf8c9f-5947-452d-bbfd-346971cdf8ac-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mbnzd\" (UID: \"55cf8c9f-5947-452d-bbfd-346971cdf8ac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mbnzd" Nov 23 07:03:43 crc kubenswrapper[4681]: I1123 07:03:43.444494 4681 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerStarted","Data":"411d710baa479cd25651d571408d129f643d8f5da14108264248611d2aa6b0dc"} Nov 23 07:03:43 crc kubenswrapper[4681]: I1123 07:03:43.536202 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55cf8c9f-5947-452d-bbfd-346971cdf8ac-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mbnzd\" (UID: \"55cf8c9f-5947-452d-bbfd-346971cdf8ac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mbnzd" Nov 23 07:03:43 crc kubenswrapper[4681]: I1123 07:03:43.536486 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcjk4\" (UniqueName: \"kubernetes.io/projected/55cf8c9f-5947-452d-bbfd-346971cdf8ac-kube-api-access-qcjk4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mbnzd\" (UID: \"55cf8c9f-5947-452d-bbfd-346971cdf8ac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mbnzd" Nov 23 07:03:43 crc kubenswrapper[4681]: I1123 07:03:43.536646 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55cf8c9f-5947-452d-bbfd-346971cdf8ac-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mbnzd\" (UID: \"55cf8c9f-5947-452d-bbfd-346971cdf8ac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mbnzd" Nov 23 07:03:43 crc kubenswrapper[4681]: I1123 07:03:43.536833 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/55cf8c9f-5947-452d-bbfd-346971cdf8ac-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mbnzd\" (UID: \"55cf8c9f-5947-452d-bbfd-346971cdf8ac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mbnzd" Nov 23 07:03:43 crc kubenswrapper[4681]: I1123 07:03:43.541819 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/55cf8c9f-5947-452d-bbfd-346971cdf8ac-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mbnzd\" (UID: \"55cf8c9f-5947-452d-bbfd-346971cdf8ac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mbnzd" Nov 23 07:03:43 crc kubenswrapper[4681]: I1123 07:03:43.543384 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55cf8c9f-5947-452d-bbfd-346971cdf8ac-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mbnzd\" (UID: \"55cf8c9f-5947-452d-bbfd-346971cdf8ac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mbnzd" Nov 23 07:03:43 crc kubenswrapper[4681]: I1123 07:03:43.547711 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55cf8c9f-5947-452d-bbfd-346971cdf8ac-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mbnzd\" (UID: \"55cf8c9f-5947-452d-bbfd-346971cdf8ac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mbnzd" Nov 23 07:03:43 crc kubenswrapper[4681]: I1123 07:03:43.551899 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcjk4\" (UniqueName: \"kubernetes.io/projected/55cf8c9f-5947-452d-bbfd-346971cdf8ac-kube-api-access-qcjk4\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-mbnzd\" (UID: \"55cf8c9f-5947-452d-bbfd-346971cdf8ac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mbnzd" Nov 23 07:03:43 crc kubenswrapper[4681]: I1123 07:03:43.716247 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mbnzd" Nov 23 07:03:44 crc kubenswrapper[4681]: I1123 07:03:44.359836 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 07:03:44 crc kubenswrapper[4681]: I1123 07:03:44.360920 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mbnzd"] Nov 23 07:03:44 crc kubenswrapper[4681]: I1123 07:03:44.455502 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mbnzd" event={"ID":"55cf8c9f-5947-452d-bbfd-346971cdf8ac","Type":"ContainerStarted","Data":"b07454ac89e39433a49501889a8913054d4aa9b01695f36e2fad1527f949588b"} Nov 23 07:03:54 crc kubenswrapper[4681]: I1123 07:03:54.613880 4681 generic.go:334] "Generic (PLEG): container finished" podID="04d51566-75c4-4ebe-a907-2941703a952e" containerID="ce122af2053ebdffd32c9a54be913f821785ca1b6a6377130c34639929ec60b9" exitCode=0 Nov 23 07:03:54 crc kubenswrapper[4681]: I1123 07:03:54.614495 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"04d51566-75c4-4ebe-a907-2941703a952e","Type":"ContainerDied","Data":"ce122af2053ebdffd32c9a54be913f821785ca1b6a6377130c34639929ec60b9"} Nov 23 07:03:54 crc kubenswrapper[4681]: I1123 07:03:54.618289 4681 generic.go:334] "Generic (PLEG): container finished" podID="696ebc04-4784-4c41-afa2-5ed315cd25e7" containerID="fc371f532998f69c3b5d7c93763d5207b34e01f23192884f3589a5a74b1618e6" exitCode=0 Nov 23 07:03:54 crc kubenswrapper[4681]: I1123 07:03:54.618341 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"696ebc04-4784-4c41-afa2-5ed315cd25e7","Type":"ContainerDied","Data":"fc371f532998f69c3b5d7c93763d5207b34e01f23192884f3589a5a74b1618e6"} Nov 23 07:03:55 crc kubenswrapper[4681]: I1123 07:03:55.626627 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mbnzd" event={"ID":"55cf8c9f-5947-452d-bbfd-346971cdf8ac","Type":"ContainerStarted","Data":"c1d695abf1253728e274b628aa1dd8192dcf75b2baf61819a23d044dfe0cefb1"} Nov 23 07:03:55 crc kubenswrapper[4681]: I1123 07:03:55.629444 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"04d51566-75c4-4ebe-a907-2941703a952e","Type":"ContainerStarted","Data":"2c78ee5a69ccd9488971df580ede3659f9f1568d4781b60c1143343e7ce66ca6"} Nov 23 07:03:55 crc kubenswrapper[4681]: I1123 07:03:55.629914 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 23 07:03:55 crc kubenswrapper[4681]: I1123 07:03:55.631260 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"696ebc04-4784-4c41-afa2-5ed315cd25e7","Type":"ContainerStarted","Data":"0081610f7ee784ef85c373f3a8356d39da2729143311844c920942cc4c9bd80b"} Nov 23 07:03:55 crc kubenswrapper[4681]: I1123 07:03:55.631648 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:03:55 crc kubenswrapper[4681]: I1123 07:03:55.652781 4681 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mbnzd" podStartSLOduration=2.438713185 podStartE2EDuration="12.652769949s" podCreationTimestamp="2025-11-23 07:03:43 +0000 UTC" firstStartedPulling="2025-11-23 07:03:44.359287444 +0000 UTC m=+1161.428796680" lastFinishedPulling="2025-11-23 07:03:54.573344207 +0000 UTC m=+1171.642853444" observedRunningTime="2025-11-23 07:03:55.644150866 +0000 UTC m=+1172.713660093" watchObservedRunningTime="2025-11-23 07:03:55.652769949 +0000 UTC m=+1172.722279186" Nov 23 07:03:55 crc kubenswrapper[4681]: I1123 07:03:55.736609 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=35.736588786 podStartE2EDuration="35.736588786s" podCreationTimestamp="2025-11-23 07:03:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:03:55.703132418 +0000 UTC m=+1172.772641655" watchObservedRunningTime="2025-11-23 07:03:55.736588786 +0000 UTC m=+1172.806098024" Nov 23 07:03:55 crc kubenswrapper[4681]: I1123 07:03:55.738344 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=35.738333004 podStartE2EDuration="35.738333004s" podCreationTimestamp="2025-11-23 07:03:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:03:55.734115305 +0000 UTC m=+1172.803624532" watchObservedRunningTime="2025-11-23 07:03:55.738333004 +0000 UTC m=+1172.807842241" Nov 23 07:04:06 crc kubenswrapper[4681]: I1123 07:04:06.753526 4681 generic.go:334] "Generic (PLEG): container finished" podID="55cf8c9f-5947-452d-bbfd-346971cdf8ac" containerID="c1d695abf1253728e274b628aa1dd8192dcf75b2baf61819a23d044dfe0cefb1" exitCode=0 Nov 23 07:04:06 crc kubenswrapper[4681]: I1123 07:04:06.753640 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mbnzd" event={"ID":"55cf8c9f-5947-452d-bbfd-346971cdf8ac","Type":"ContainerDied","Data":"c1d695abf1253728e274b628aa1dd8192dcf75b2baf61819a23d044dfe0cefb1"} Nov 23 07:04:08 crc kubenswrapper[4681]: I1123 07:04:08.254144 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mbnzd" Nov 23 07:04:08 crc kubenswrapper[4681]: I1123 07:04:08.355700 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcjk4\" (UniqueName: \"kubernetes.io/projected/55cf8c9f-5947-452d-bbfd-346971cdf8ac-kube-api-access-qcjk4\") pod \"55cf8c9f-5947-452d-bbfd-346971cdf8ac\" (UID: \"55cf8c9f-5947-452d-bbfd-346971cdf8ac\") " Nov 23 07:04:08 crc kubenswrapper[4681]: I1123 07:04:08.355862 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55cf8c9f-5947-452d-bbfd-346971cdf8ac-repo-setup-combined-ca-bundle\") pod \"55cf8c9f-5947-452d-bbfd-346971cdf8ac\" (UID: \"55cf8c9f-5947-452d-bbfd-346971cdf8ac\") " Nov 23 07:04:08 crc kubenswrapper[4681]: I1123 07:04:08.355981 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55cf8c9f-5947-452d-bbfd-346971cdf8ac-inventory\") pod \"55cf8c9f-5947-452d-bbfd-346971cdf8ac\" (UID: \"55cf8c9f-5947-452d-bbfd-346971cdf8ac\") " Nov 23 07:04:08 crc kubenswrapper[4681]: I1123 07:04:08.356036 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/55cf8c9f-5947-452d-bbfd-346971cdf8ac-ssh-key\") pod \"55cf8c9f-5947-452d-bbfd-346971cdf8ac\" (UID: \"55cf8c9f-5947-452d-bbfd-346971cdf8ac\") " Nov 23 07:04:08 crc kubenswrapper[4681]: I1123 07:04:08.372604 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55cf8c9f-5947-452d-bbfd-346971cdf8ac-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "55cf8c9f-5947-452d-bbfd-346971cdf8ac" (UID: "55cf8c9f-5947-452d-bbfd-346971cdf8ac"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:04:08 crc kubenswrapper[4681]: I1123 07:04:08.387179 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55cf8c9f-5947-452d-bbfd-346971cdf8ac-kube-api-access-qcjk4" (OuterVolumeSpecName: "kube-api-access-qcjk4") pod "55cf8c9f-5947-452d-bbfd-346971cdf8ac" (UID: "55cf8c9f-5947-452d-bbfd-346971cdf8ac"). InnerVolumeSpecName "kube-api-access-qcjk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:04:08 crc kubenswrapper[4681]: I1123 07:04:08.399266 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55cf8c9f-5947-452d-bbfd-346971cdf8ac-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "55cf8c9f-5947-452d-bbfd-346971cdf8ac" (UID: "55cf8c9f-5947-452d-bbfd-346971cdf8ac"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:04:08 crc kubenswrapper[4681]: I1123 07:04:08.403452 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55cf8c9f-5947-452d-bbfd-346971cdf8ac-inventory" (OuterVolumeSpecName: "inventory") pod "55cf8c9f-5947-452d-bbfd-346971cdf8ac" (UID: "55cf8c9f-5947-452d-bbfd-346971cdf8ac"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:04:08 crc kubenswrapper[4681]: I1123 07:04:08.460237 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcjk4\" (UniqueName: \"kubernetes.io/projected/55cf8c9f-5947-452d-bbfd-346971cdf8ac-kube-api-access-qcjk4\") on node \"crc\" DevicePath \"\"" Nov 23 07:04:08 crc kubenswrapper[4681]: I1123 07:04:08.460266 4681 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55cf8c9f-5947-452d-bbfd-346971cdf8ac-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:04:08 crc kubenswrapper[4681]: I1123 07:04:08.460280 4681 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55cf8c9f-5947-452d-bbfd-346971cdf8ac-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 07:04:08 crc kubenswrapper[4681]: I1123 07:04:08.460291 4681 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/55cf8c9f-5947-452d-bbfd-346971cdf8ac-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 07:04:08 crc kubenswrapper[4681]: I1123 07:04:08.775373 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mbnzd" event={"ID":"55cf8c9f-5947-452d-bbfd-346971cdf8ac","Type":"ContainerDied","Data":"b07454ac89e39433a49501889a8913054d4aa9b01695f36e2fad1527f949588b"} Nov 23 07:04:08 crc kubenswrapper[4681]: I1123 07:04:08.775766 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mbnzd" Nov 23 07:04:08 crc kubenswrapper[4681]: I1123 07:04:08.775782 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b07454ac89e39433a49501889a8913054d4aa9b01695f36e2fad1527f949588b" Nov 23 07:04:08 crc kubenswrapper[4681]: I1123 07:04:08.855488 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-dv7p4"] Nov 23 07:04:08 crc kubenswrapper[4681]: E1123 07:04:08.855885 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55cf8c9f-5947-452d-bbfd-346971cdf8ac" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 23 07:04:08 crc kubenswrapper[4681]: I1123 07:04:08.855904 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="55cf8c9f-5947-452d-bbfd-346971cdf8ac" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 23 07:04:08 crc kubenswrapper[4681]: I1123 07:04:08.856070 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="55cf8c9f-5947-452d-bbfd-346971cdf8ac" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 23 07:04:08 crc kubenswrapper[4681]: I1123 07:04:08.856699 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dv7p4" Nov 23 07:04:08 crc kubenswrapper[4681]: I1123 07:04:08.861045 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 07:04:08 crc kubenswrapper[4681]: I1123 07:04:08.861206 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 07:04:08 crc kubenswrapper[4681]: I1123 07:04:08.861349 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 07:04:08 crc kubenswrapper[4681]: I1123 07:04:08.869415 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rchgk" Nov 23 07:04:08 crc kubenswrapper[4681]: I1123 07:04:08.871021 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-dv7p4"] Nov 23 07:04:08 crc kubenswrapper[4681]: I1123 07:04:08.971111 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/22ff22ad-d733-467f-9c37-c7d3a64ded52-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-dv7p4\" (UID: \"22ff22ad-d733-467f-9c37-c7d3a64ded52\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dv7p4" Nov 23 07:04:08 crc kubenswrapper[4681]: I1123 07:04:08.971219 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dth6v\" (UniqueName: \"kubernetes.io/projected/22ff22ad-d733-467f-9c37-c7d3a64ded52-kube-api-access-dth6v\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-dv7p4\" (UID: \"22ff22ad-d733-467f-9c37-c7d3a64ded52\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dv7p4" Nov 23 07:04:08 crc kubenswrapper[4681]: I1123 07:04:08.971268 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/22ff22ad-d733-467f-9c37-c7d3a64ded52-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-dv7p4\" (UID: \"22ff22ad-d733-467f-9c37-c7d3a64ded52\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dv7p4" Nov 23 07:04:09 crc kubenswrapper[4681]: I1123 07:04:09.074495 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dth6v\" (UniqueName: \"kubernetes.io/projected/22ff22ad-d733-467f-9c37-c7d3a64ded52-kube-api-access-dth6v\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-dv7p4\" (UID: \"22ff22ad-d733-467f-9c37-c7d3a64ded52\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dv7p4" Nov 23 07:04:09 crc kubenswrapper[4681]: I1123 07:04:09.074818 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/22ff22ad-d733-467f-9c37-c7d3a64ded52-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-dv7p4\" (UID: \"22ff22ad-d733-467f-9c37-c7d3a64ded52\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dv7p4" Nov 23 07:04:09 crc kubenswrapper[4681]: I1123 07:04:09.075165 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/22ff22ad-d733-467f-9c37-c7d3a64ded52-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-dv7p4\" (UID: \"22ff22ad-d733-467f-9c37-c7d3a64ded52\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dv7p4" Nov 23 07:04:09 crc kubenswrapper[4681]: I1123 07:04:09.079791 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/22ff22ad-d733-467f-9c37-c7d3a64ded52-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-dv7p4\" (UID: \"22ff22ad-d733-467f-9c37-c7d3a64ded52\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dv7p4" Nov 23 07:04:09 crc kubenswrapper[4681]: I1123 07:04:09.083423 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/22ff22ad-d733-467f-9c37-c7d3a64ded52-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-dv7p4\" (UID: \"22ff22ad-d733-467f-9c37-c7d3a64ded52\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dv7p4" Nov 23 07:04:09 crc kubenswrapper[4681]: I1123 07:04:09.089591 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dth6v\" (UniqueName: \"kubernetes.io/projected/22ff22ad-d733-467f-9c37-c7d3a64ded52-kube-api-access-dth6v\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-dv7p4\" (UID: \"22ff22ad-d733-467f-9c37-c7d3a64ded52\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dv7p4" Nov 23 07:04:09 crc kubenswrapper[4681]: I1123 07:04:09.174161 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dv7p4" Nov 23 07:04:09 crc kubenswrapper[4681]: I1123 07:04:09.699037 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-dv7p4"] Nov 23 07:04:09 crc kubenswrapper[4681]: W1123 07:04:09.700613 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22ff22ad_d733_467f_9c37_c7d3a64ded52.slice/crio-ce84b3780dfb37c788033fb56d9215781a0dcd4e2cdbcbc2e4fa24f4916ba0ef WatchSource:0}: Error finding container ce84b3780dfb37c788033fb56d9215781a0dcd4e2cdbcbc2e4fa24f4916ba0ef: Status 404 returned error can't find the container with id ce84b3780dfb37c788033fb56d9215781a0dcd4e2cdbcbc2e4fa24f4916ba0ef Nov 23 07:04:09 crc kubenswrapper[4681]: I1123 07:04:09.787346 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dv7p4" event={"ID":"22ff22ad-d733-467f-9c37-c7d3a64ded52","Type":"ContainerStarted","Data":"ce84b3780dfb37c788033fb56d9215781a0dcd4e2cdbcbc2e4fa24f4916ba0ef"} Nov 23 07:04:10 crc kubenswrapper[4681]: I1123 07:04:10.619610 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 23 07:04:10 crc kubenswrapper[4681]: I1123 07:04:10.797609 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dv7p4" event={"ID":"22ff22ad-d733-467f-9c37-c7d3a64ded52","Type":"ContainerStarted","Data":"a699cc21e87bfdc8983c153fa8417df022b67c9c3b5ce5073f3a25ecd389b80f"} Nov 23 07:04:10 crc kubenswrapper[4681]: I1123 07:04:10.833640 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dv7p4" podStartSLOduration=2.380866256 podStartE2EDuration="2.833620287s" podCreationTimestamp="2025-11-23 07:04:08 +0000 UTC" firstStartedPulling="2025-11-23 07:04:09.703177272 +0000 UTC m=+1186.772686509" lastFinishedPulling="2025-11-23 07:04:10.155931313 +0000 UTC 
m=+1187.225440540" observedRunningTime="2025-11-23 07:04:10.819090485 +0000 UTC m=+1187.888599722" watchObservedRunningTime="2025-11-23 07:04:10.833620287 +0000 UTC m=+1187.903129523" Nov 23 07:04:10 crc kubenswrapper[4681]: I1123 07:04:10.871719 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:04:13 crc kubenswrapper[4681]: I1123 07:04:13.823734 4681 generic.go:334] "Generic (PLEG): container finished" podID="22ff22ad-d733-467f-9c37-c7d3a64ded52" containerID="a699cc21e87bfdc8983c153fa8417df022b67c9c3b5ce5073f3a25ecd389b80f" exitCode=0 Nov 23 07:04:13 crc kubenswrapper[4681]: I1123 07:04:13.823818 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dv7p4" event={"ID":"22ff22ad-d733-467f-9c37-c7d3a64ded52","Type":"ContainerDied","Data":"a699cc21e87bfdc8983c153fa8417df022b67c9c3b5ce5073f3a25ecd389b80f"} Nov 23 07:04:15 crc kubenswrapper[4681]: I1123 07:04:15.308995 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dv7p4" Nov 23 07:04:15 crc kubenswrapper[4681]: I1123 07:04:15.436476 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/22ff22ad-d733-467f-9c37-c7d3a64ded52-ssh-key\") pod \"22ff22ad-d733-467f-9c37-c7d3a64ded52\" (UID: \"22ff22ad-d733-467f-9c37-c7d3a64ded52\") " Nov 23 07:04:15 crc kubenswrapper[4681]: I1123 07:04:15.436526 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dth6v\" (UniqueName: \"kubernetes.io/projected/22ff22ad-d733-467f-9c37-c7d3a64ded52-kube-api-access-dth6v\") pod \"22ff22ad-d733-467f-9c37-c7d3a64ded52\" (UID: \"22ff22ad-d733-467f-9c37-c7d3a64ded52\") " Nov 23 07:04:15 crc kubenswrapper[4681]: I1123 07:04:15.436673 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/22ff22ad-d733-467f-9c37-c7d3a64ded52-inventory\") pod \"22ff22ad-d733-467f-9c37-c7d3a64ded52\" (UID: \"22ff22ad-d733-467f-9c37-c7d3a64ded52\") " Nov 23 07:04:15 crc kubenswrapper[4681]: I1123 07:04:15.472607 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22ff22ad-d733-467f-9c37-c7d3a64ded52-kube-api-access-dth6v" (OuterVolumeSpecName: "kube-api-access-dth6v") pod "22ff22ad-d733-467f-9c37-c7d3a64ded52" (UID: "22ff22ad-d733-467f-9c37-c7d3a64ded52"). InnerVolumeSpecName "kube-api-access-dth6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:04:15 crc kubenswrapper[4681]: I1123 07:04:15.482057 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22ff22ad-d733-467f-9c37-c7d3a64ded52-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "22ff22ad-d733-467f-9c37-c7d3a64ded52" (UID: "22ff22ad-d733-467f-9c37-c7d3a64ded52"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:04:15 crc kubenswrapper[4681]: I1123 07:04:15.484311 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22ff22ad-d733-467f-9c37-c7d3a64ded52-inventory" (OuterVolumeSpecName: "inventory") pod "22ff22ad-d733-467f-9c37-c7d3a64ded52" (UID: "22ff22ad-d733-467f-9c37-c7d3a64ded52"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:04:15 crc kubenswrapper[4681]: I1123 07:04:15.540139 4681 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/22ff22ad-d733-467f-9c37-c7d3a64ded52-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 07:04:15 crc kubenswrapper[4681]: I1123 07:04:15.540178 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dth6v\" (UniqueName: \"kubernetes.io/projected/22ff22ad-d733-467f-9c37-c7d3a64ded52-kube-api-access-dth6v\") on node \"crc\" DevicePath \"\"" Nov 23 07:04:15 crc kubenswrapper[4681]: I1123 07:04:15.540193 4681 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/22ff22ad-d733-467f-9c37-c7d3a64ded52-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 07:04:15 crc kubenswrapper[4681]: I1123 07:04:15.845762 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dv7p4" event={"ID":"22ff22ad-d733-467f-9c37-c7d3a64ded52","Type":"ContainerDied","Data":"ce84b3780dfb37c788033fb56d9215781a0dcd4e2cdbcbc2e4fa24f4916ba0ef"} Nov 23 07:04:15 crc kubenswrapper[4681]: I1123 07:04:15.845813 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce84b3780dfb37c788033fb56d9215781a0dcd4e2cdbcbc2e4fa24f4916ba0ef" Nov 23 07:04:15 crc kubenswrapper[4681]: I1123 07:04:15.845859 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dv7p4" Nov 23 07:04:15 crc kubenswrapper[4681]: I1123 07:04:15.924833 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gn8p8"] Nov 23 07:04:15 crc kubenswrapper[4681]: E1123 07:04:15.925230 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22ff22ad-d733-467f-9c37-c7d3a64ded52" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 23 07:04:15 crc kubenswrapper[4681]: I1123 07:04:15.925247 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="22ff22ad-d733-467f-9c37-c7d3a64ded52" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 23 07:04:15 crc kubenswrapper[4681]: I1123 07:04:15.925438 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="22ff22ad-d733-467f-9c37-c7d3a64ded52" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 23 07:04:15 crc kubenswrapper[4681]: I1123 07:04:15.926062 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gn8p8" Nov 23 07:04:15 crc kubenswrapper[4681]: I1123 07:04:15.930422 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rchgk" Nov 23 07:04:15 crc kubenswrapper[4681]: I1123 07:04:15.930642 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 07:04:15 crc kubenswrapper[4681]: I1123 07:04:15.930748 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 07:04:15 crc kubenswrapper[4681]: I1123 07:04:15.930890 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 07:04:15 crc kubenswrapper[4681]: I1123 07:04:15.933137 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gn8p8"] Nov 23 07:04:16 crc kubenswrapper[4681]: I1123 07:04:16.055448 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z772\" (UniqueName: \"kubernetes.io/projected/875ab6f6-da41-48a2-abc0-f3c890efc616-kube-api-access-6z772\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gn8p8\" (UID: \"875ab6f6-da41-48a2-abc0-f3c890efc616\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gn8p8" Nov 23 07:04:16 crc kubenswrapper[4681]: I1123 07:04:16.055523 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/875ab6f6-da41-48a2-abc0-f3c890efc616-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gn8p8\" (UID: \"875ab6f6-da41-48a2-abc0-f3c890efc616\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gn8p8" Nov 23 07:04:16 crc kubenswrapper[4681]: I1123 07:04:16.055743 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/875ab6f6-da41-48a2-abc0-f3c890efc616-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gn8p8\" (UID: \"875ab6f6-da41-48a2-abc0-f3c890efc616\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gn8p8" Nov 23 07:04:16 crc kubenswrapper[4681]: I1123 07:04:16.055842 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/875ab6f6-da41-48a2-abc0-f3c890efc616-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gn8p8\" (UID: \"875ab6f6-da41-48a2-abc0-f3c890efc616\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gn8p8" Nov 23 07:04:16 crc kubenswrapper[4681]: I1123 07:04:16.157797 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z772\" (UniqueName: \"kubernetes.io/projected/875ab6f6-da41-48a2-abc0-f3c890efc616-kube-api-access-6z772\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gn8p8\" (UID: \"875ab6f6-da41-48a2-abc0-f3c890efc616\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gn8p8" Nov 23 07:04:16 crc kubenswrapper[4681]: I1123 07:04:16.157859 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/875ab6f6-da41-48a2-abc0-f3c890efc616-bootstrap-combined-ca-bundle\") 
pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gn8p8\" (UID: \"875ab6f6-da41-48a2-abc0-f3c890efc616\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gn8p8" Nov 23 07:04:16 crc kubenswrapper[4681]: I1123 07:04:16.158785 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/875ab6f6-da41-48a2-abc0-f3c890efc616-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gn8p8\" (UID: \"875ab6f6-da41-48a2-abc0-f3c890efc616\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gn8p8" Nov 23 07:04:16 crc kubenswrapper[4681]: I1123 07:04:16.159060 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/875ab6f6-da41-48a2-abc0-f3c890efc616-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gn8p8\" (UID: \"875ab6f6-da41-48a2-abc0-f3c890efc616\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gn8p8" Nov 23 07:04:16 crc kubenswrapper[4681]: I1123 07:04:16.163259 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/875ab6f6-da41-48a2-abc0-f3c890efc616-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gn8p8\" (UID: \"875ab6f6-da41-48a2-abc0-f3c890efc616\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gn8p8" Nov 23 07:04:16 crc kubenswrapper[4681]: I1123 07:04:16.163647 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/875ab6f6-da41-48a2-abc0-f3c890efc616-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gn8p8\" (UID: \"875ab6f6-da41-48a2-abc0-f3c890efc616\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gn8p8" Nov 23 07:04:16 crc kubenswrapper[4681]: I1123 07:04:16.166737 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/875ab6f6-da41-48a2-abc0-f3c890efc616-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gn8p8\" (UID: \"875ab6f6-da41-48a2-abc0-f3c890efc616\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gn8p8" Nov 23 07:04:16 crc kubenswrapper[4681]: I1123 07:04:16.178449 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z772\" (UniqueName: \"kubernetes.io/projected/875ab6f6-da41-48a2-abc0-f3c890efc616-kube-api-access-6z772\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gn8p8\" (UID: \"875ab6f6-da41-48a2-abc0-f3c890efc616\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gn8p8" Nov 23 07:04:16 crc kubenswrapper[4681]: I1123 07:04:16.240479 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gn8p8" Nov 23 07:04:16 crc kubenswrapper[4681]: I1123 07:04:16.759188 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gn8p8"] Nov 23 07:04:16 crc kubenswrapper[4681]: I1123 07:04:16.872779 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gn8p8" event={"ID":"875ab6f6-da41-48a2-abc0-f3c890efc616","Type":"ContainerStarted","Data":"b37eb66307f7fc08239ff7c8c23436f501e9816ad6da25231341aa785dfe40f1"} Nov 23 07:04:17 crc kubenswrapper[4681]: I1123 07:04:17.884898 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gn8p8" event={"ID":"875ab6f6-da41-48a2-abc0-f3c890efc616","Type":"ContainerStarted","Data":"b74caa86476a40e469fbecb4a8c2d4cb5d65e557bfa62f42c6ec5a340418ccd4"} Nov 23 07:04:17 crc kubenswrapper[4681]: I1123 07:04:17.903054 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gn8p8" podStartSLOduration=2.34408519 podStartE2EDuration="2.903028827s" podCreationTimestamp="2025-11-23 07:04:15 +0000 UTC" firstStartedPulling="2025-11-23 07:04:16.761076865 +0000 UTC m=+1193.830586102" lastFinishedPulling="2025-11-23 07:04:17.320020502 +0000 UTC m=+1194.389529739" observedRunningTime="2025-11-23 07:04:17.901408224 +0000 UTC m=+1194.970917461" watchObservedRunningTime="2025-11-23 07:04:17.903028827 +0000 UTC m=+1194.972538065" Nov 23 07:04:43 crc kubenswrapper[4681]: I1123 07:04:43.933421 4681 scope.go:117] "RemoveContainer" containerID="412f5c1af48abab5ffe56a1d510d44d16887e4c0ee830d06c120bf8518a1e5b3" Nov 23 07:04:43 crc kubenswrapper[4681]: I1123 07:04:43.966580 4681 scope.go:117] "RemoveContainer" containerID="de548e50d76397ca3e8d6a0475aa03ce4c0604a2cfb9926b39faba6906bdeaa9" Nov 23 07:04:44 crc kubenswrapper[4681]: I1123 07:04:44.004274 4681 scope.go:117] "RemoveContainer" containerID="08a35ee79b05213f1416b333497c5a1ef5bfe516965b49957a48844fbac90304" Nov 23 07:04:44 crc kubenswrapper[4681]: I1123 07:04:44.064112 4681 scope.go:117] "RemoveContainer" containerID="61b3c1c0fa0f2c7f645dc11fdeef5e738073d33f57f29defa00517202c6e368b" Nov 23 07:04:44 crc kubenswrapper[4681]: I1123 07:04:44.095516 4681 scope.go:117] "RemoveContainer" containerID="81de7e7395ab8b3c753cb319772266a2f7aa9cd6d297a5e0aecfe387311d1ce2" Nov 23 07:04:44 crc kubenswrapper[4681]: I1123 07:04:44.126539 4681 scope.go:117] "RemoveContainer" containerID="26d05d10cbbc451df6804f6cc6bf5b505854f245655b61d41a993b45c5b09f20" Nov 23 07:05:42 crc kubenswrapper[4681]: I1123 07:05:42.296244 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:05:42 crc kubenswrapper[4681]: I1123 07:05:42.296873 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:05:44 crc kubenswrapper[4681]: I1123 07:05:44.294350 4681 scope.go:117] "RemoveContainer" 
containerID="8541794a357af6ef37dd29f8d7f2b1469bcb33969532de1eae2fdcae8d1fe8ef" Nov 23 07:05:44 crc kubenswrapper[4681]: I1123 07:05:44.327284 4681 scope.go:117] "RemoveContainer" containerID="5d3b9b18c19d40b4875a0795be196763470210354d7a0ac2916665447d7ced82" Nov 23 07:05:44 crc kubenswrapper[4681]: I1123 07:05:44.372418 4681 scope.go:117] "RemoveContainer" containerID="4eb819562924a436a5ab39f168eacb4ecf88a1bec5f4d4b4f5c623c6df3e83a5" Nov 23 07:06:12 crc kubenswrapper[4681]: I1123 07:06:12.295585 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:06:12 crc kubenswrapper[4681]: I1123 07:06:12.296368 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:06:42 crc kubenswrapper[4681]: I1123 07:06:42.295331 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:06:42 crc kubenswrapper[4681]: I1123 07:06:42.296670 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:06:42 crc kubenswrapper[4681]: I1123 07:06:42.296781 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" Nov 23 07:06:42 crc kubenswrapper[4681]: I1123 07:06:42.297992 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"411d710baa479cd25651d571408d129f643d8f5da14108264248611d2aa6b0dc"} pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 07:06:42 crc kubenswrapper[4681]: I1123 07:06:42.298059 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" containerID="cri-o://411d710baa479cd25651d571408d129f643d8f5da14108264248611d2aa6b0dc" gracePeriod=600 Nov 23 07:06:43 crc kubenswrapper[4681]: I1123 07:06:43.406534 4681 generic.go:334] "Generic (PLEG): container finished" podID="539dc58c-e752-43c8-bdef-af87528b76f3" containerID="411d710baa479cd25651d571408d129f643d8f5da14108264248611d2aa6b0dc" exitCode=0 Nov 23 07:06:43 crc kubenswrapper[4681]: I1123 07:06:43.406622 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" 
event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerDied","Data":"411d710baa479cd25651d571408d129f643d8f5da14108264248611d2aa6b0dc"} Nov 23 07:06:43 crc kubenswrapper[4681]: I1123 07:06:43.407337 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerStarted","Data":"a5380963080b6fe6bf2216624264d97b2ea5554bfe17e9b170d2c2b9f9ced66c"} Nov 23 07:06:43 crc kubenswrapper[4681]: I1123 07:06:43.407376 4681 scope.go:117] "RemoveContainer" containerID="6b6565a11ae3d1b82169df41e725361a82bf48f3c4a16c6cf3c1e136bf571ba8" Nov 23 07:06:44 crc kubenswrapper[4681]: I1123 07:06:44.464049 4681 scope.go:117] "RemoveContainer" containerID="827710c0adbf80e6ed797d938ea567b18186cf021fa5ad71b99e4bbbe741cd60" Nov 23 07:06:44 crc kubenswrapper[4681]: I1123 07:06:44.487746 4681 scope.go:117] "RemoveContainer" containerID="380b6d44c65b94cb8300f25cdc2b9d551d69e23126a4faa107f6a923df8c4287" Nov 23 07:07:06 crc kubenswrapper[4681]: I1123 07:07:06.670613 4681 generic.go:334] "Generic (PLEG): container finished" podID="875ab6f6-da41-48a2-abc0-f3c890efc616" containerID="b74caa86476a40e469fbecb4a8c2d4cb5d65e557bfa62f42c6ec5a340418ccd4" exitCode=0 Nov 23 07:07:06 crc kubenswrapper[4681]: I1123 07:07:06.670678 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gn8p8" event={"ID":"875ab6f6-da41-48a2-abc0-f3c890efc616","Type":"ContainerDied","Data":"b74caa86476a40e469fbecb4a8c2d4cb5d65e557bfa62f42c6ec5a340418ccd4"} Nov 23 07:07:08 crc kubenswrapper[4681]: I1123 07:07:08.007767 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gn8p8" Nov 23 07:07:08 crc kubenswrapper[4681]: I1123 07:07:08.110407 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6z772\" (UniqueName: \"kubernetes.io/projected/875ab6f6-da41-48a2-abc0-f3c890efc616-kube-api-access-6z772\") pod \"875ab6f6-da41-48a2-abc0-f3c890efc616\" (UID: \"875ab6f6-da41-48a2-abc0-f3c890efc616\") " Nov 23 07:07:08 crc kubenswrapper[4681]: I1123 07:07:08.110500 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/875ab6f6-da41-48a2-abc0-f3c890efc616-bootstrap-combined-ca-bundle\") pod \"875ab6f6-da41-48a2-abc0-f3c890efc616\" (UID: \"875ab6f6-da41-48a2-abc0-f3c890efc616\") " Nov 23 07:07:08 crc kubenswrapper[4681]: I1123 07:07:08.110528 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/875ab6f6-da41-48a2-abc0-f3c890efc616-inventory\") pod \"875ab6f6-da41-48a2-abc0-f3c890efc616\" (UID: \"875ab6f6-da41-48a2-abc0-f3c890efc616\") " Nov 23 07:07:08 crc kubenswrapper[4681]: I1123 07:07:08.110734 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/875ab6f6-da41-48a2-abc0-f3c890efc616-ssh-key\") pod \"875ab6f6-da41-48a2-abc0-f3c890efc616\" (UID: \"875ab6f6-da41-48a2-abc0-f3c890efc616\") " Nov 23 07:07:08 crc kubenswrapper[4681]: I1123 07:07:08.117321 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/875ab6f6-da41-48a2-abc0-f3c890efc616-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod 
"875ab6f6-da41-48a2-abc0-f3c890efc616" (UID: "875ab6f6-da41-48a2-abc0-f3c890efc616"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:08 crc kubenswrapper[4681]: I1123 07:07:08.118602 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/875ab6f6-da41-48a2-abc0-f3c890efc616-kube-api-access-6z772" (OuterVolumeSpecName: "kube-api-access-6z772") pod "875ab6f6-da41-48a2-abc0-f3c890efc616" (UID: "875ab6f6-da41-48a2-abc0-f3c890efc616"). InnerVolumeSpecName "kube-api-access-6z772". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:07:08 crc kubenswrapper[4681]: I1123 07:07:08.136441 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/875ab6f6-da41-48a2-abc0-f3c890efc616-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "875ab6f6-da41-48a2-abc0-f3c890efc616" (UID: "875ab6f6-da41-48a2-abc0-f3c890efc616"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:08 crc kubenswrapper[4681]: I1123 07:07:08.141489 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/875ab6f6-da41-48a2-abc0-f3c890efc616-inventory" (OuterVolumeSpecName: "inventory") pod "875ab6f6-da41-48a2-abc0-f3c890efc616" (UID: "875ab6f6-da41-48a2-abc0-f3c890efc616"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:08 crc kubenswrapper[4681]: I1123 07:07:08.214257 4681 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/875ab6f6-da41-48a2-abc0-f3c890efc616-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:08 crc kubenswrapper[4681]: I1123 07:07:08.214354 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6z772\" (UniqueName: \"kubernetes.io/projected/875ab6f6-da41-48a2-abc0-f3c890efc616-kube-api-access-6z772\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:08 crc kubenswrapper[4681]: I1123 07:07:08.214426 4681 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/875ab6f6-da41-48a2-abc0-f3c890efc616-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:08 crc kubenswrapper[4681]: I1123 07:07:08.214500 4681 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/875ab6f6-da41-48a2-abc0-f3c890efc616-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:08 crc kubenswrapper[4681]: I1123 07:07:08.689228 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gn8p8" event={"ID":"875ab6f6-da41-48a2-abc0-f3c890efc616","Type":"ContainerDied","Data":"b37eb66307f7fc08239ff7c8c23436f501e9816ad6da25231341aa785dfe40f1"} Nov 23 07:07:08 crc kubenswrapper[4681]: I1123 07:07:08.689287 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b37eb66307f7fc08239ff7c8c23436f501e9816ad6da25231341aa785dfe40f1" Nov 23 07:07:08 crc kubenswrapper[4681]: I1123 07:07:08.689536 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gn8p8" Nov 23 07:07:08 crc kubenswrapper[4681]: I1123 07:07:08.778271 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7krxn"] Nov 23 07:07:08 crc kubenswrapper[4681]: E1123 07:07:08.778927 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="875ab6f6-da41-48a2-abc0-f3c890efc616" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 23 07:07:08 crc kubenswrapper[4681]: I1123 07:07:08.778950 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="875ab6f6-da41-48a2-abc0-f3c890efc616" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 23 07:07:08 crc kubenswrapper[4681]: I1123 07:07:08.779232 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="875ab6f6-da41-48a2-abc0-f3c890efc616" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 23 07:07:08 crc kubenswrapper[4681]: I1123 07:07:08.780049 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7krxn" Nov 23 07:07:08 crc kubenswrapper[4681]: I1123 07:07:08.782316 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 07:07:08 crc kubenswrapper[4681]: I1123 07:07:08.782837 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 07:07:08 crc kubenswrapper[4681]: I1123 07:07:08.783191 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rchgk" Nov 23 07:07:08 crc kubenswrapper[4681]: I1123 07:07:08.783447 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 07:07:08 crc kubenswrapper[4681]: I1123 07:07:08.788034 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7krxn"] Nov 23 07:07:08 crc kubenswrapper[4681]: I1123 07:07:08.824718 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/477f2017-bbac-4d93-8be6-703fc200c9ed-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7krxn\" (UID: \"477f2017-bbac-4d93-8be6-703fc200c9ed\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7krxn" Nov 23 07:07:08 crc kubenswrapper[4681]: I1123 07:07:08.824789 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/477f2017-bbac-4d93-8be6-703fc200c9ed-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7krxn\" (UID: \"477f2017-bbac-4d93-8be6-703fc200c9ed\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7krxn" Nov 23 07:07:08 crc kubenswrapper[4681]: I1123 07:07:08.824990 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md626\" (UniqueName: \"kubernetes.io/projected/477f2017-bbac-4d93-8be6-703fc200c9ed-kube-api-access-md626\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7krxn\" (UID: \"477f2017-bbac-4d93-8be6-703fc200c9ed\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7krxn" Nov 23 07:07:08 crc kubenswrapper[4681]: I1123 07:07:08.926522 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/477f2017-bbac-4d93-8be6-703fc200c9ed-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7krxn\" (UID: \"477f2017-bbac-4d93-8be6-703fc200c9ed\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7krxn" Nov 23 07:07:08 crc kubenswrapper[4681]: I1123 07:07:08.926599 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/477f2017-bbac-4d93-8be6-703fc200c9ed-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7krxn\" (UID: \"477f2017-bbac-4d93-8be6-703fc200c9ed\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7krxn" Nov 23 07:07:08 crc kubenswrapper[4681]: I1123 07:07:08.926770 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md626\" (UniqueName: \"kubernetes.io/projected/477f2017-bbac-4d93-8be6-703fc200c9ed-kube-api-access-md626\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7krxn\" (UID: \"477f2017-bbac-4d93-8be6-703fc200c9ed\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7krxn" Nov 23 07:07:08 crc kubenswrapper[4681]: I1123 07:07:08.932185 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/477f2017-bbac-4d93-8be6-703fc200c9ed-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7krxn\" (UID: \"477f2017-bbac-4d93-8be6-703fc200c9ed\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7krxn" Nov 23 07:07:08 crc kubenswrapper[4681]: I1123 07:07:08.932249 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/477f2017-bbac-4d93-8be6-703fc200c9ed-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7krxn\" (UID: \"477f2017-bbac-4d93-8be6-703fc200c9ed\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7krxn" Nov 23 07:07:08 crc kubenswrapper[4681]: I1123 07:07:08.943949 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md626\" (UniqueName: \"kubernetes.io/projected/477f2017-bbac-4d93-8be6-703fc200c9ed-kube-api-access-md626\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7krxn\" (UID: \"477f2017-bbac-4d93-8be6-703fc200c9ed\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7krxn" Nov 23 07:07:09 crc kubenswrapper[4681]: I1123 07:07:09.105700 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7krxn" Nov 23 07:07:09 crc kubenswrapper[4681]: I1123 07:07:09.568056 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7krxn"] Nov 23 07:07:09 crc kubenswrapper[4681]: I1123 07:07:09.698113 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7krxn" event={"ID":"477f2017-bbac-4d93-8be6-703fc200c9ed","Type":"ContainerStarted","Data":"c64fb9478eeed108078b2c47d36d452b9ca5ea4b71e619665fe4018b732938ab"} Nov 23 07:07:10 crc kubenswrapper[4681]: I1123 07:07:10.707242 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7krxn" event={"ID":"477f2017-bbac-4d93-8be6-703fc200c9ed","Type":"ContainerStarted","Data":"2845743b21f0c5a97c21c4030b31b71ec2eebbf5de2964d194ce3c50ff1c912a"} Nov 23 07:08:00 crc kubenswrapper[4681]: I1123 07:08:00.041911 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7krxn" podStartSLOduration=51.567364516 podStartE2EDuration="52.041886525s" podCreationTimestamp="2025-11-23 07:07:08 +0000 UTC" firstStartedPulling="2025-11-23 07:07:09.571786718 +0000 UTC m=+1366.641295955" lastFinishedPulling="2025-11-23 07:07:10.046308726 +0000 UTC m=+1367.115817964" observedRunningTime="2025-11-23 07:07:10.72583373 +0000 UTC m=+1367.795342967" watchObservedRunningTime="2025-11-23 07:08:00.041886525 +0000 UTC m=+1417.111395762" Nov 23 07:08:00 crc kubenswrapper[4681]: I1123 07:08:00.045376 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-c109-account-create-wnfks"] Nov 23 07:08:00 crc kubenswrapper[4681]: I1123 07:08:00.055062 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-gbcw6"] Nov 23 07:08:00 crc kubenswrapper[4681]: I1123 07:08:00.060887 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-c109-account-create-wnfks"] Nov 23 07:08:00 crc kubenswrapper[4681]: I1123 07:08:00.067279 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-gbcw6"] Nov 23 07:08:01 crc kubenswrapper[4681]: I1123 07:08:01.027629 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-5c8nq"] Nov 23 07:08:01 crc kubenswrapper[4681]: I1123 07:08:01.034851 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-d5d6-account-create-sb6mm"] Nov 23 07:08:01 crc kubenswrapper[4681]: I1123 07:08:01.039962 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-5c8nq"] Nov 23 07:08:01 crc kubenswrapper[4681]: I1123 07:08:01.044971 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-d5d6-account-create-sb6mm"] Nov 23 07:08:01 crc kubenswrapper[4681]: I1123 07:08:01.262510 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0da07e75-f9da-4b3e-8941-3aac63809525" path="/var/lib/kubelet/pods/0da07e75-f9da-4b3e-8941-3aac63809525/volumes" Nov 23 07:08:01 crc kubenswrapper[4681]: I1123 07:08:01.265233 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14c5884e-2bbd-45f5-9363-6f504638a689" path="/var/lib/kubelet/pods/14c5884e-2bbd-45f5-9363-6f504638a689/volumes" Nov 23 07:08:01 crc kubenswrapper[4681]: I1123 07:08:01.267298 4681 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="34af6366-6fcc-451b-b6fc-72eb0af39eb1" path="/var/lib/kubelet/pods/34af6366-6fcc-451b-b6fc-72eb0af39eb1/volumes" Nov 23 07:08:01 crc kubenswrapper[4681]: I1123 07:08:01.268752 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="763112b5-b200-4987-8b3a-e9b9fa181621" path="/var/lib/kubelet/pods/763112b5-b200-4987-8b3a-e9b9fa181621/volumes" Nov 23 07:08:06 crc kubenswrapper[4681]: I1123 07:08:06.039332 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-6jtms"] Nov 23 07:08:06 crc kubenswrapper[4681]: I1123 07:08:06.066347 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-ed2a-account-create-pnwh7"] Nov 23 07:08:06 crc kubenswrapper[4681]: I1123 07:08:06.072682 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-ed2a-account-create-pnwh7"] Nov 23 07:08:06 crc kubenswrapper[4681]: I1123 07:08:06.078130 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-6jtms"] Nov 23 07:08:07 crc kubenswrapper[4681]: I1123 07:08:07.261086 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0759e7b7-255b-4d6a-9822-52d3967752a4" path="/var/lib/kubelet/pods/0759e7b7-255b-4d6a-9822-52d3967752a4/volumes" Nov 23 07:08:07 crc kubenswrapper[4681]: I1123 07:08:07.263313 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fedf9828-c3be-4342-ab30-d743456c894c" path="/var/lib/kubelet/pods/fedf9828-c3be-4342-ab30-d743456c894c/volumes" Nov 23 07:08:11 crc kubenswrapper[4681]: I1123 07:08:11.521430 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q84qr"] Nov 23 07:08:11 crc kubenswrapper[4681]: I1123 07:08:11.524763 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q84qr" Nov 23 07:08:11 crc kubenswrapper[4681]: I1123 07:08:11.534833 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q84qr"] Nov 23 07:08:11 crc kubenswrapper[4681]: I1123 07:08:11.656686 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5734f7c7-ea92-4f75-83f2-1f90d10539c0-catalog-content\") pod \"certified-operators-q84qr\" (UID: \"5734f7c7-ea92-4f75-83f2-1f90d10539c0\") " pod="openshift-marketplace/certified-operators-q84qr" Nov 23 07:08:11 crc kubenswrapper[4681]: I1123 07:08:11.656752 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxtsb\" (UniqueName: \"kubernetes.io/projected/5734f7c7-ea92-4f75-83f2-1f90d10539c0-kube-api-access-qxtsb\") pod \"certified-operators-q84qr\" (UID: \"5734f7c7-ea92-4f75-83f2-1f90d10539c0\") " pod="openshift-marketplace/certified-operators-q84qr" Nov 23 07:08:11 crc kubenswrapper[4681]: I1123 07:08:11.656860 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5734f7c7-ea92-4f75-83f2-1f90d10539c0-utilities\") pod \"certified-operators-q84qr\" (UID: \"5734f7c7-ea92-4f75-83f2-1f90d10539c0\") " pod="openshift-marketplace/certified-operators-q84qr" Nov 23 07:08:11 crc kubenswrapper[4681]: I1123 07:08:11.758997 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5734f7c7-ea92-4f75-83f2-1f90d10539c0-catalog-content\") pod \"certified-operators-q84qr\" (UID: \"5734f7c7-ea92-4f75-83f2-1f90d10539c0\") " pod="openshift-marketplace/certified-operators-q84qr" Nov 23 07:08:11 crc kubenswrapper[4681]: I1123 07:08:11.759557 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxtsb\" (UniqueName: \"kubernetes.io/projected/5734f7c7-ea92-4f75-83f2-1f90d10539c0-kube-api-access-qxtsb\") pod \"certified-operators-q84qr\" (UID: \"5734f7c7-ea92-4f75-83f2-1f90d10539c0\") " pod="openshift-marketplace/certified-operators-q84qr" Nov 23 07:08:11 crc kubenswrapper[4681]: I1123 07:08:11.759476 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5734f7c7-ea92-4f75-83f2-1f90d10539c0-catalog-content\") pod \"certified-operators-q84qr\" (UID: \"5734f7c7-ea92-4f75-83f2-1f90d10539c0\") " pod="openshift-marketplace/certified-operators-q84qr" Nov 23 07:08:11 crc kubenswrapper[4681]: I1123 07:08:11.759812 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5734f7c7-ea92-4f75-83f2-1f90d10539c0-utilities\") pod \"certified-operators-q84qr\" (UID: \"5734f7c7-ea92-4f75-83f2-1f90d10539c0\") " pod="openshift-marketplace/certified-operators-q84qr" Nov 23 07:08:11 crc kubenswrapper[4681]: I1123 07:08:11.760169 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5734f7c7-ea92-4f75-83f2-1f90d10539c0-utilities\") pod \"certified-operators-q84qr\" (UID: \"5734f7c7-ea92-4f75-83f2-1f90d10539c0\") " pod="openshift-marketplace/certified-operators-q84qr" Nov 23 07:08:11 crc kubenswrapper[4681]: I1123 07:08:11.790618 4681 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qxtsb\" (UniqueName: \"kubernetes.io/projected/5734f7c7-ea92-4f75-83f2-1f90d10539c0-kube-api-access-qxtsb\") pod \"certified-operators-q84qr\" (UID: \"5734f7c7-ea92-4f75-83f2-1f90d10539c0\") " pod="openshift-marketplace/certified-operators-q84qr" Nov 23 07:08:11 crc kubenswrapper[4681]: I1123 07:08:11.849129 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q84qr" Nov 23 07:08:12 crc kubenswrapper[4681]: I1123 07:08:12.263593 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q84qr"] Nov 23 07:08:13 crc kubenswrapper[4681]: I1123 07:08:13.224611 4681 generic.go:334] "Generic (PLEG): container finished" podID="5734f7c7-ea92-4f75-83f2-1f90d10539c0" containerID="007df05c091f3f808c05bc9d7bfdafd2e20ca954e24ab6185499e97a0c3061a1" exitCode=0 Nov 23 07:08:13 crc kubenswrapper[4681]: I1123 07:08:13.224668 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q84qr" event={"ID":"5734f7c7-ea92-4f75-83f2-1f90d10539c0","Type":"ContainerDied","Data":"007df05c091f3f808c05bc9d7bfdafd2e20ca954e24ab6185499e97a0c3061a1"} Nov 23 07:08:13 crc kubenswrapper[4681]: I1123 07:08:13.225890 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q84qr" event={"ID":"5734f7c7-ea92-4f75-83f2-1f90d10539c0","Type":"ContainerStarted","Data":"af0d2122835168e07162d444baff56d565a25474edada9dd44fdd34704a4a657"} Nov 23 07:08:14 crc kubenswrapper[4681]: I1123 07:08:14.240091 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q84qr" event={"ID":"5734f7c7-ea92-4f75-83f2-1f90d10539c0","Type":"ContainerStarted","Data":"fc5882213b9e563660b647c00f5d17af6fa188674b21e5f546df7f6678f37b44"} Nov 23 07:08:15 crc kubenswrapper[4681]: I1123 07:08:15.254278 4681 generic.go:334] "Generic (PLEG): container finished" podID="5734f7c7-ea92-4f75-83f2-1f90d10539c0" containerID="fc5882213b9e563660b647c00f5d17af6fa188674b21e5f546df7f6678f37b44" exitCode=0 Nov 23 07:08:15 crc kubenswrapper[4681]: I1123 07:08:15.262408 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q84qr" event={"ID":"5734f7c7-ea92-4f75-83f2-1f90d10539c0","Type":"ContainerDied","Data":"fc5882213b9e563660b647c00f5d17af6fa188674b21e5f546df7f6678f37b44"} Nov 23 07:08:16 crc kubenswrapper[4681]: I1123 07:08:16.265351 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q84qr" event={"ID":"5734f7c7-ea92-4f75-83f2-1f90d10539c0","Type":"ContainerStarted","Data":"674220f6b90fa7938aaf846b69b1d86294c2874052cdc12f0e358702db4dbb48"} Nov 23 07:08:16 crc kubenswrapper[4681]: I1123 07:08:16.291979 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q84qr" podStartSLOduration=2.702773588 podStartE2EDuration="5.291956787s" podCreationTimestamp="2025-11-23 07:08:11 +0000 UTC" firstStartedPulling="2025-11-23 07:08:13.227114381 +0000 UTC m=+1430.296623619" lastFinishedPulling="2025-11-23 07:08:15.816297581 +0000 UTC m=+1432.885806818" observedRunningTime="2025-11-23 07:08:16.281931981 +0000 UTC m=+1433.351441218" watchObservedRunningTime="2025-11-23 07:08:16.291956787 +0000 UTC m=+1433.361466024" Nov 23 07:08:18 crc kubenswrapper[4681]: I1123 07:08:18.908968 4681 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-gkrb6"] Nov 23 07:08:18 crc kubenswrapper[4681]: I1123 07:08:18.912075 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gkrb6" Nov 23 07:08:18 crc kubenswrapper[4681]: I1123 07:08:18.927634 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gxqs\" (UniqueName: \"kubernetes.io/projected/578cfdca-6882-450a-9db4-2b2a31b13614-kube-api-access-7gxqs\") pod \"redhat-marketplace-gkrb6\" (UID: \"578cfdca-6882-450a-9db4-2b2a31b13614\") " pod="openshift-marketplace/redhat-marketplace-gkrb6" Nov 23 07:08:18 crc kubenswrapper[4681]: I1123 07:08:18.927890 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/578cfdca-6882-450a-9db4-2b2a31b13614-catalog-content\") pod \"redhat-marketplace-gkrb6\" (UID: \"578cfdca-6882-450a-9db4-2b2a31b13614\") " pod="openshift-marketplace/redhat-marketplace-gkrb6" Nov 23 07:08:18 crc kubenswrapper[4681]: I1123 07:08:18.928005 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/578cfdca-6882-450a-9db4-2b2a31b13614-utilities\") pod \"redhat-marketplace-gkrb6\" (UID: \"578cfdca-6882-450a-9db4-2b2a31b13614\") " pod="openshift-marketplace/redhat-marketplace-gkrb6" Nov 23 07:08:18 crc kubenswrapper[4681]: I1123 07:08:18.982861 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gkrb6"] Nov 23 07:08:19 crc kubenswrapper[4681]: I1123 07:08:19.029317 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/578cfdca-6882-450a-9db4-2b2a31b13614-utilities\") pod \"redhat-marketplace-gkrb6\" (UID: \"578cfdca-6882-450a-9db4-2b2a31b13614\") " pod="openshift-marketplace/redhat-marketplace-gkrb6" Nov 23 07:08:19 crc kubenswrapper[4681]: I1123 07:08:19.029370 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gxqs\" (UniqueName: \"kubernetes.io/projected/578cfdca-6882-450a-9db4-2b2a31b13614-kube-api-access-7gxqs\") pod \"redhat-marketplace-gkrb6\" (UID: \"578cfdca-6882-450a-9db4-2b2a31b13614\") " pod="openshift-marketplace/redhat-marketplace-gkrb6" Nov 23 07:08:19 crc kubenswrapper[4681]: I1123 07:08:19.029625 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/578cfdca-6882-450a-9db4-2b2a31b13614-catalog-content\") pod \"redhat-marketplace-gkrb6\" (UID: \"578cfdca-6882-450a-9db4-2b2a31b13614\") " pod="openshift-marketplace/redhat-marketplace-gkrb6" Nov 23 07:08:19 crc kubenswrapper[4681]: I1123 07:08:19.029863 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/578cfdca-6882-450a-9db4-2b2a31b13614-utilities\") pod \"redhat-marketplace-gkrb6\" (UID: \"578cfdca-6882-450a-9db4-2b2a31b13614\") " pod="openshift-marketplace/redhat-marketplace-gkrb6" Nov 23 07:08:19 crc kubenswrapper[4681]: I1123 07:08:19.030072 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/578cfdca-6882-450a-9db4-2b2a31b13614-catalog-content\") pod \"redhat-marketplace-gkrb6\" (UID: \"578cfdca-6882-450a-9db4-2b2a31b13614\") 
" pod="openshift-marketplace/redhat-marketplace-gkrb6" Nov 23 07:08:19 crc kubenswrapper[4681]: I1123 07:08:19.048901 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gxqs\" (UniqueName: \"kubernetes.io/projected/578cfdca-6882-450a-9db4-2b2a31b13614-kube-api-access-7gxqs\") pod \"redhat-marketplace-gkrb6\" (UID: \"578cfdca-6882-450a-9db4-2b2a31b13614\") " pod="openshift-marketplace/redhat-marketplace-gkrb6" Nov 23 07:08:19 crc kubenswrapper[4681]: I1123 07:08:19.238128 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gkrb6" Nov 23 07:08:19 crc kubenswrapper[4681]: I1123 07:08:19.705390 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gkrb6"] Nov 23 07:08:19 crc kubenswrapper[4681]: W1123 07:08:19.716109 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod578cfdca_6882_450a_9db4_2b2a31b13614.slice/crio-569c8a9ad80a7f0b4002a9330b4e9874fcf6a840d21c2719d05f5099b8d83e33 WatchSource:0}: Error finding container 569c8a9ad80a7f0b4002a9330b4e9874fcf6a840d21c2719d05f5099b8d83e33: Status 404 returned error can't find the container with id 569c8a9ad80a7f0b4002a9330b4e9874fcf6a840d21c2719d05f5099b8d83e33 Nov 23 07:08:20 crc kubenswrapper[4681]: I1123 07:08:20.308811 4681 generic.go:334] "Generic (PLEG): container finished" podID="578cfdca-6882-450a-9db4-2b2a31b13614" containerID="2254919e8e92a64510baebe53f4d5210bbeb1ac5f37067490556467d44579d1f" exitCode=0 Nov 23 07:08:20 crc kubenswrapper[4681]: I1123 07:08:20.308905 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gkrb6" event={"ID":"578cfdca-6882-450a-9db4-2b2a31b13614","Type":"ContainerDied","Data":"2254919e8e92a64510baebe53f4d5210bbeb1ac5f37067490556467d44579d1f"} Nov 23 07:08:20 crc kubenswrapper[4681]: I1123 07:08:20.308963 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gkrb6" event={"ID":"578cfdca-6882-450a-9db4-2b2a31b13614","Type":"ContainerStarted","Data":"569c8a9ad80a7f0b4002a9330b4e9874fcf6a840d21c2719d05f5099b8d83e33"} Nov 23 07:08:21 crc kubenswrapper[4681]: I1123 07:08:21.319210 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gkrb6" event={"ID":"578cfdca-6882-450a-9db4-2b2a31b13614","Type":"ContainerStarted","Data":"7038b0e93ced472ad42eb7d2747988d22ddefb3f10d9241f8cb3755fb3482670"} Nov 23 07:08:21 crc kubenswrapper[4681]: I1123 07:08:21.850229 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q84qr" Nov 23 07:08:21 crc kubenswrapper[4681]: I1123 07:08:21.850296 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q84qr" Nov 23 07:08:21 crc kubenswrapper[4681]: I1123 07:08:21.889336 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q84qr" Nov 23 07:08:22 crc kubenswrapper[4681]: I1123 07:08:22.329380 4681 generic.go:334] "Generic (PLEG): container finished" podID="578cfdca-6882-450a-9db4-2b2a31b13614" containerID="7038b0e93ced472ad42eb7d2747988d22ddefb3f10d9241f8cb3755fb3482670" exitCode=0 Nov 23 07:08:22 crc kubenswrapper[4681]: I1123 07:08:22.329447 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-gkrb6" event={"ID":"578cfdca-6882-450a-9db4-2b2a31b13614","Type":"ContainerDied","Data":"7038b0e93ced472ad42eb7d2747988d22ddefb3f10d9241f8cb3755fb3482670"} Nov 23 07:08:22 crc kubenswrapper[4681]: I1123 07:08:22.367824 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q84qr" Nov 23 07:08:23 crc kubenswrapper[4681]: I1123 07:08:23.341486 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gkrb6" event={"ID":"578cfdca-6882-450a-9db4-2b2a31b13614","Type":"ContainerStarted","Data":"09e3c641a3154a68f4eaa1e86b50e86e7bff4401693c912534ab72c8700dbb63"} Nov 23 07:08:23 crc kubenswrapper[4681]: I1123 07:08:23.360102 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gkrb6" podStartSLOduration=2.874135518 podStartE2EDuration="5.360086495s" podCreationTimestamp="2025-11-23 07:08:18 +0000 UTC" firstStartedPulling="2025-11-23 07:08:20.312085102 +0000 UTC m=+1437.381594339" lastFinishedPulling="2025-11-23 07:08:22.798036079 +0000 UTC m=+1439.867545316" observedRunningTime="2025-11-23 07:08:23.354235136 +0000 UTC m=+1440.423744373" watchObservedRunningTime="2025-11-23 07:08:23.360086495 +0000 UTC m=+1440.429595732" Nov 23 07:08:24 crc kubenswrapper[4681]: I1123 07:08:24.296878 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q84qr"] Nov 23 07:08:24 crc kubenswrapper[4681]: I1123 07:08:24.350213 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-q84qr" podUID="5734f7c7-ea92-4f75-83f2-1f90d10539c0" containerName="registry-server" containerID="cri-o://674220f6b90fa7938aaf846b69b1d86294c2874052cdc12f0e358702db4dbb48" gracePeriod=2 Nov 23 07:08:24 crc kubenswrapper[4681]: I1123 07:08:24.766851 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q84qr" Nov 23 07:08:24 crc kubenswrapper[4681]: I1123 07:08:24.854428 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5734f7c7-ea92-4f75-83f2-1f90d10539c0-utilities\") pod \"5734f7c7-ea92-4f75-83f2-1f90d10539c0\" (UID: \"5734f7c7-ea92-4f75-83f2-1f90d10539c0\") " Nov 23 07:08:24 crc kubenswrapper[4681]: I1123 07:08:24.855096 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5734f7c7-ea92-4f75-83f2-1f90d10539c0-catalog-content\") pod \"5734f7c7-ea92-4f75-83f2-1f90d10539c0\" (UID: \"5734f7c7-ea92-4f75-83f2-1f90d10539c0\") " Nov 23 07:08:24 crc kubenswrapper[4681]: I1123 07:08:24.856314 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5734f7c7-ea92-4f75-83f2-1f90d10539c0-utilities" (OuterVolumeSpecName: "utilities") pod "5734f7c7-ea92-4f75-83f2-1f90d10539c0" (UID: "5734f7c7-ea92-4f75-83f2-1f90d10539c0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:08:24 crc kubenswrapper[4681]: I1123 07:08:24.900239 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5734f7c7-ea92-4f75-83f2-1f90d10539c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5734f7c7-ea92-4f75-83f2-1f90d10539c0" (UID: "5734f7c7-ea92-4f75-83f2-1f90d10539c0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:08:24 crc kubenswrapper[4681]: I1123 07:08:24.956323 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxtsb\" (UniqueName: \"kubernetes.io/projected/5734f7c7-ea92-4f75-83f2-1f90d10539c0-kube-api-access-qxtsb\") pod \"5734f7c7-ea92-4f75-83f2-1f90d10539c0\" (UID: \"5734f7c7-ea92-4f75-83f2-1f90d10539c0\") " Nov 23 07:08:24 crc kubenswrapper[4681]: I1123 07:08:24.956759 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5734f7c7-ea92-4f75-83f2-1f90d10539c0-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:24 crc kubenswrapper[4681]: I1123 07:08:24.956782 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5734f7c7-ea92-4f75-83f2-1f90d10539c0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:24 crc kubenswrapper[4681]: I1123 07:08:24.976724 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5734f7c7-ea92-4f75-83f2-1f90d10539c0-kube-api-access-qxtsb" (OuterVolumeSpecName: "kube-api-access-qxtsb") pod "5734f7c7-ea92-4f75-83f2-1f90d10539c0" (UID: "5734f7c7-ea92-4f75-83f2-1f90d10539c0"). InnerVolumeSpecName "kube-api-access-qxtsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:08:25 crc kubenswrapper[4681]: I1123 07:08:25.058800 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxtsb\" (UniqueName: \"kubernetes.io/projected/5734f7c7-ea92-4f75-83f2-1f90d10539c0-kube-api-access-qxtsb\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:25 crc kubenswrapper[4681]: I1123 07:08:25.359522 4681 generic.go:334] "Generic (PLEG): container finished" podID="5734f7c7-ea92-4f75-83f2-1f90d10539c0" containerID="674220f6b90fa7938aaf846b69b1d86294c2874052cdc12f0e358702db4dbb48" exitCode=0 Nov 23 07:08:25 crc kubenswrapper[4681]: I1123 07:08:25.359566 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q84qr" event={"ID":"5734f7c7-ea92-4f75-83f2-1f90d10539c0","Type":"ContainerDied","Data":"674220f6b90fa7938aaf846b69b1d86294c2874052cdc12f0e358702db4dbb48"} Nov 23 07:08:25 crc kubenswrapper[4681]: I1123 07:08:25.359594 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q84qr" event={"ID":"5734f7c7-ea92-4f75-83f2-1f90d10539c0","Type":"ContainerDied","Data":"af0d2122835168e07162d444baff56d565a25474edada9dd44fdd34704a4a657"} Nov 23 07:08:25 crc kubenswrapper[4681]: I1123 07:08:25.359611 4681 scope.go:117] "RemoveContainer" containerID="674220f6b90fa7938aaf846b69b1d86294c2874052cdc12f0e358702db4dbb48" Nov 23 07:08:25 crc kubenswrapper[4681]: I1123 07:08:25.359726 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q84qr" Nov 23 07:08:25 crc kubenswrapper[4681]: I1123 07:08:25.380281 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q84qr"] Nov 23 07:08:25 crc kubenswrapper[4681]: I1123 07:08:25.382972 4681 scope.go:117] "RemoveContainer" containerID="fc5882213b9e563660b647c00f5d17af6fa188674b21e5f546df7f6678f37b44" Nov 23 07:08:25 crc kubenswrapper[4681]: I1123 07:08:25.388834 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-q84qr"] Nov 23 07:08:25 crc kubenswrapper[4681]: I1123 07:08:25.403191 4681 scope.go:117] "RemoveContainer" containerID="007df05c091f3f808c05bc9d7bfdafd2e20ca954e24ab6185499e97a0c3061a1" Nov 23 07:08:25 crc kubenswrapper[4681]: I1123 07:08:25.434382 4681 scope.go:117] "RemoveContainer" containerID="674220f6b90fa7938aaf846b69b1d86294c2874052cdc12f0e358702db4dbb48" Nov 23 07:08:25 crc kubenswrapper[4681]: E1123 07:08:25.434739 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"674220f6b90fa7938aaf846b69b1d86294c2874052cdc12f0e358702db4dbb48\": container with ID starting with 674220f6b90fa7938aaf846b69b1d86294c2874052cdc12f0e358702db4dbb48 not found: ID does not exist" containerID="674220f6b90fa7938aaf846b69b1d86294c2874052cdc12f0e358702db4dbb48" Nov 23 07:08:25 crc kubenswrapper[4681]: I1123 07:08:25.434773 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"674220f6b90fa7938aaf846b69b1d86294c2874052cdc12f0e358702db4dbb48"} err="failed to get container status \"674220f6b90fa7938aaf846b69b1d86294c2874052cdc12f0e358702db4dbb48\": rpc error: code = NotFound desc = could not find container \"674220f6b90fa7938aaf846b69b1d86294c2874052cdc12f0e358702db4dbb48\": container with ID starting with 674220f6b90fa7938aaf846b69b1d86294c2874052cdc12f0e358702db4dbb48 not found: ID does not exist" Nov 23 07:08:25 crc kubenswrapper[4681]: I1123 07:08:25.434797 4681 scope.go:117] "RemoveContainer" containerID="fc5882213b9e563660b647c00f5d17af6fa188674b21e5f546df7f6678f37b44" Nov 23 07:08:25 crc kubenswrapper[4681]: E1123 07:08:25.435093 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc5882213b9e563660b647c00f5d17af6fa188674b21e5f546df7f6678f37b44\": container with ID starting with fc5882213b9e563660b647c00f5d17af6fa188674b21e5f546df7f6678f37b44 not found: ID does not exist" containerID="fc5882213b9e563660b647c00f5d17af6fa188674b21e5f546df7f6678f37b44" Nov 23 07:08:25 crc kubenswrapper[4681]: I1123 07:08:25.435115 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc5882213b9e563660b647c00f5d17af6fa188674b21e5f546df7f6678f37b44"} err="failed to get container status \"fc5882213b9e563660b647c00f5d17af6fa188674b21e5f546df7f6678f37b44\": rpc error: code = NotFound desc = could not find container \"fc5882213b9e563660b647c00f5d17af6fa188674b21e5f546df7f6678f37b44\": container with ID starting with fc5882213b9e563660b647c00f5d17af6fa188674b21e5f546df7f6678f37b44 not found: ID does not exist" Nov 23 07:08:25 crc kubenswrapper[4681]: I1123 07:08:25.435129 4681 scope.go:117] "RemoveContainer" containerID="007df05c091f3f808c05bc9d7bfdafd2e20ca954e24ab6185499e97a0c3061a1" Nov 23 07:08:25 crc kubenswrapper[4681]: E1123 07:08:25.435433 4681 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"007df05c091f3f808c05bc9d7bfdafd2e20ca954e24ab6185499e97a0c3061a1\": container with ID starting with 007df05c091f3f808c05bc9d7bfdafd2e20ca954e24ab6185499e97a0c3061a1 not found: ID does not exist" containerID="007df05c091f3f808c05bc9d7bfdafd2e20ca954e24ab6185499e97a0c3061a1" Nov 23 07:08:25 crc kubenswrapper[4681]: I1123 07:08:25.435486 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"007df05c091f3f808c05bc9d7bfdafd2e20ca954e24ab6185499e97a0c3061a1"} err="failed to get container status \"007df05c091f3f808c05bc9d7bfdafd2e20ca954e24ab6185499e97a0c3061a1\": rpc error: code = NotFound desc = could not find container \"007df05c091f3f808c05bc9d7bfdafd2e20ca954e24ab6185499e97a0c3061a1\": container with ID starting with 007df05c091f3f808c05bc9d7bfdafd2e20ca954e24ab6185499e97a0c3061a1 not found: ID does not exist" Nov 23 07:08:27 crc kubenswrapper[4681]: I1123 07:08:27.261099 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5734f7c7-ea92-4f75-83f2-1f90d10539c0" path="/var/lib/kubelet/pods/5734f7c7-ea92-4f75-83f2-1f90d10539c0/volumes" Nov 23 07:08:29 crc kubenswrapper[4681]: I1123 07:08:29.238753 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gkrb6" Nov 23 07:08:29 crc kubenswrapper[4681]: I1123 07:08:29.238800 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gkrb6" Nov 23 07:08:29 crc kubenswrapper[4681]: I1123 07:08:29.281350 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gkrb6" Nov 23 07:08:29 crc kubenswrapper[4681]: I1123 07:08:29.423558 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gkrb6" Nov 23 07:08:29 crc kubenswrapper[4681]: I1123 07:08:29.519365 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gkrb6"] Nov 23 07:08:31 crc kubenswrapper[4681]: I1123 07:08:31.403546 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gkrb6" podUID="578cfdca-6882-450a-9db4-2b2a31b13614" containerName="registry-server" containerID="cri-o://09e3c641a3154a68f4eaa1e86b50e86e7bff4401693c912534ab72c8700dbb63" gracePeriod=2 Nov 23 07:08:31 crc kubenswrapper[4681]: I1123 07:08:31.780831 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gkrb6" Nov 23 07:08:31 crc kubenswrapper[4681]: I1123 07:08:31.798422 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/578cfdca-6882-450a-9db4-2b2a31b13614-catalog-content\") pod \"578cfdca-6882-450a-9db4-2b2a31b13614\" (UID: \"578cfdca-6882-450a-9db4-2b2a31b13614\") " Nov 23 07:08:31 crc kubenswrapper[4681]: I1123 07:08:31.798493 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/578cfdca-6882-450a-9db4-2b2a31b13614-utilities\") pod \"578cfdca-6882-450a-9db4-2b2a31b13614\" (UID: \"578cfdca-6882-450a-9db4-2b2a31b13614\") " Nov 23 07:08:31 crc kubenswrapper[4681]: I1123 07:08:31.798650 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gxqs\" (UniqueName: \"kubernetes.io/projected/578cfdca-6882-450a-9db4-2b2a31b13614-kube-api-access-7gxqs\") pod \"578cfdca-6882-450a-9db4-2b2a31b13614\" (UID: \"578cfdca-6882-450a-9db4-2b2a31b13614\") " Nov 23 07:08:31 crc kubenswrapper[4681]: I1123 07:08:31.799143 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/578cfdca-6882-450a-9db4-2b2a31b13614-utilities" (OuterVolumeSpecName: "utilities") pod "578cfdca-6882-450a-9db4-2b2a31b13614" (UID: "578cfdca-6882-450a-9db4-2b2a31b13614"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:08:31 crc kubenswrapper[4681]: I1123 07:08:31.804503 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/578cfdca-6882-450a-9db4-2b2a31b13614-kube-api-access-7gxqs" (OuterVolumeSpecName: "kube-api-access-7gxqs") pod "578cfdca-6882-450a-9db4-2b2a31b13614" (UID: "578cfdca-6882-450a-9db4-2b2a31b13614"). InnerVolumeSpecName "kube-api-access-7gxqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:08:31 crc kubenswrapper[4681]: I1123 07:08:31.814551 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/578cfdca-6882-450a-9db4-2b2a31b13614-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "578cfdca-6882-450a-9db4-2b2a31b13614" (UID: "578cfdca-6882-450a-9db4-2b2a31b13614"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:08:31 crc kubenswrapper[4681]: I1123 07:08:31.901591 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/578cfdca-6882-450a-9db4-2b2a31b13614-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:31 crc kubenswrapper[4681]: I1123 07:08:31.901623 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/578cfdca-6882-450a-9db4-2b2a31b13614-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:31 crc kubenswrapper[4681]: I1123 07:08:31.901633 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gxqs\" (UniqueName: \"kubernetes.io/projected/578cfdca-6882-450a-9db4-2b2a31b13614-kube-api-access-7gxqs\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:32 crc kubenswrapper[4681]: I1123 07:08:32.417731 4681 generic.go:334] "Generic (PLEG): container finished" podID="578cfdca-6882-450a-9db4-2b2a31b13614" containerID="09e3c641a3154a68f4eaa1e86b50e86e7bff4401693c912534ab72c8700dbb63" exitCode=0 Nov 23 07:08:32 crc kubenswrapper[4681]: I1123 07:08:32.417772 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gkrb6" event={"ID":"578cfdca-6882-450a-9db4-2b2a31b13614","Type":"ContainerDied","Data":"09e3c641a3154a68f4eaa1e86b50e86e7bff4401693c912534ab72c8700dbb63"} Nov 23 07:08:32 crc kubenswrapper[4681]: I1123 07:08:32.417801 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gkrb6" event={"ID":"578cfdca-6882-450a-9db4-2b2a31b13614","Type":"ContainerDied","Data":"569c8a9ad80a7f0b4002a9330b4e9874fcf6a840d21c2719d05f5099b8d83e33"} Nov 23 07:08:32 crc kubenswrapper[4681]: I1123 07:08:32.417817 4681 scope.go:117] "RemoveContainer" containerID="09e3c641a3154a68f4eaa1e86b50e86e7bff4401693c912534ab72c8700dbb63" Nov 23 07:08:32 crc kubenswrapper[4681]: I1123 07:08:32.417945 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gkrb6" Nov 23 07:08:32 crc kubenswrapper[4681]: I1123 07:08:32.458515 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gkrb6"] Nov 23 07:08:32 crc kubenswrapper[4681]: I1123 07:08:32.463597 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gkrb6"] Nov 23 07:08:32 crc kubenswrapper[4681]: I1123 07:08:32.495138 4681 scope.go:117] "RemoveContainer" containerID="7038b0e93ced472ad42eb7d2747988d22ddefb3f10d9241f8cb3755fb3482670" Nov 23 07:08:32 crc kubenswrapper[4681]: I1123 07:08:32.524766 4681 scope.go:117] "RemoveContainer" containerID="2254919e8e92a64510baebe53f4d5210bbeb1ac5f37067490556467d44579d1f" Nov 23 07:08:32 crc kubenswrapper[4681]: I1123 07:08:32.557655 4681 scope.go:117] "RemoveContainer" containerID="09e3c641a3154a68f4eaa1e86b50e86e7bff4401693c912534ab72c8700dbb63" Nov 23 07:08:32 crc kubenswrapper[4681]: E1123 07:08:32.557975 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09e3c641a3154a68f4eaa1e86b50e86e7bff4401693c912534ab72c8700dbb63\": container with ID starting with 09e3c641a3154a68f4eaa1e86b50e86e7bff4401693c912534ab72c8700dbb63 not found: ID does not exist" containerID="09e3c641a3154a68f4eaa1e86b50e86e7bff4401693c912534ab72c8700dbb63" Nov 23 07:08:32 crc kubenswrapper[4681]: I1123 07:08:32.558013 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09e3c641a3154a68f4eaa1e86b50e86e7bff4401693c912534ab72c8700dbb63"} err="failed to get container status \"09e3c641a3154a68f4eaa1e86b50e86e7bff4401693c912534ab72c8700dbb63\": rpc error: code = NotFound desc = could not find container \"09e3c641a3154a68f4eaa1e86b50e86e7bff4401693c912534ab72c8700dbb63\": container with ID starting with 09e3c641a3154a68f4eaa1e86b50e86e7bff4401693c912534ab72c8700dbb63 not found: ID does not exist" Nov 23 07:08:32 crc kubenswrapper[4681]: I1123 07:08:32.558039 4681 scope.go:117] "RemoveContainer" containerID="7038b0e93ced472ad42eb7d2747988d22ddefb3f10d9241f8cb3755fb3482670" Nov 23 07:08:32 crc kubenswrapper[4681]: E1123 07:08:32.558364 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7038b0e93ced472ad42eb7d2747988d22ddefb3f10d9241f8cb3755fb3482670\": container with ID starting with 7038b0e93ced472ad42eb7d2747988d22ddefb3f10d9241f8cb3755fb3482670 not found: ID does not exist" containerID="7038b0e93ced472ad42eb7d2747988d22ddefb3f10d9241f8cb3755fb3482670" Nov 23 07:08:32 crc kubenswrapper[4681]: I1123 07:08:32.558399 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7038b0e93ced472ad42eb7d2747988d22ddefb3f10d9241f8cb3755fb3482670"} err="failed to get container status \"7038b0e93ced472ad42eb7d2747988d22ddefb3f10d9241f8cb3755fb3482670\": rpc error: code = NotFound desc = could not find container \"7038b0e93ced472ad42eb7d2747988d22ddefb3f10d9241f8cb3755fb3482670\": container with ID starting with 7038b0e93ced472ad42eb7d2747988d22ddefb3f10d9241f8cb3755fb3482670 not found: ID does not exist" Nov 23 07:08:32 crc kubenswrapper[4681]: I1123 07:08:32.558420 4681 scope.go:117] "RemoveContainer" containerID="2254919e8e92a64510baebe53f4d5210bbeb1ac5f37067490556467d44579d1f" Nov 23 07:08:32 crc kubenswrapper[4681]: E1123 07:08:32.558770 4681 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2254919e8e92a64510baebe53f4d5210bbeb1ac5f37067490556467d44579d1f\": container with ID starting with 2254919e8e92a64510baebe53f4d5210bbeb1ac5f37067490556467d44579d1f not found: ID does not exist" containerID="2254919e8e92a64510baebe53f4d5210bbeb1ac5f37067490556467d44579d1f" Nov 23 07:08:32 crc kubenswrapper[4681]: I1123 07:08:32.558825 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2254919e8e92a64510baebe53f4d5210bbeb1ac5f37067490556467d44579d1f"} err="failed to get container status \"2254919e8e92a64510baebe53f4d5210bbeb1ac5f37067490556467d44579d1f\": rpc error: code = NotFound desc = could not find container \"2254919e8e92a64510baebe53f4d5210bbeb1ac5f37067490556467d44579d1f\": container with ID starting with 2254919e8e92a64510baebe53f4d5210bbeb1ac5f37067490556467d44579d1f not found: ID does not exist" Nov 23 07:08:33 crc kubenswrapper[4681]: I1123 07:08:33.260945 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="578cfdca-6882-450a-9db4-2b2a31b13614" path="/var/lib/kubelet/pods/578cfdca-6882-450a-9db4-2b2a31b13614/volumes" Nov 23 07:08:37 crc kubenswrapper[4681]: I1123 07:08:37.037352 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-glbxl"] Nov 23 07:08:37 crc kubenswrapper[4681]: I1123 07:08:37.044601 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-glbxl"] Nov 23 07:08:37 crc kubenswrapper[4681]: I1123 07:08:37.259903 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60d0f758-c36c-459d-90ac-326fbf9faa1c" path="/var/lib/kubelet/pods/60d0f758-c36c-459d-90ac-326fbf9faa1c/volumes" Nov 23 07:08:39 crc kubenswrapper[4681]: I1123 07:08:39.021885 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-4wvpc"] Nov 23 07:08:39 crc kubenswrapper[4681]: I1123 07:08:39.029110 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-q2zbz"] Nov 23 07:08:39 crc kubenswrapper[4681]: I1123 07:08:39.036610 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-4wvpc"] Nov 23 07:08:39 crc kubenswrapper[4681]: I1123 07:08:39.041841 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-q2zbz"] Nov 23 07:08:39 crc kubenswrapper[4681]: I1123 07:08:39.261752 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6c8c95d-15d6-4b0c-bed1-b49e147f5af9" path="/var/lib/kubelet/pods/b6c8c95d-15d6-4b0c-bed1-b49e147f5af9/volumes" Nov 23 07:08:39 crc kubenswrapper[4681]: I1123 07:08:39.263711 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d039d81e-cd53-46e4-af64-12e2662c78ba" path="/var/lib/kubelet/pods/d039d81e-cd53-46e4-af64-12e2662c78ba/volumes" Nov 23 07:08:40 crc kubenswrapper[4681]: I1123 07:08:40.052494 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-4xttv"] Nov 23 07:08:40 crc kubenswrapper[4681]: I1123 07:08:40.091177 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-773a-account-create-4dks9"] Nov 23 07:08:40 crc kubenswrapper[4681]: I1123 07:08:40.137762 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-4xttv"] Nov 23 07:08:40 crc kubenswrapper[4681]: I1123 07:08:40.163635 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-a3aa-account-create-bwsrj"] Nov 23 07:08:40 
crc kubenswrapper[4681]: I1123 07:08:40.181504 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-773a-account-create-4dks9"] Nov 23 07:08:40 crc kubenswrapper[4681]: I1123 07:08:40.181557 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-a3aa-account-create-bwsrj"] Nov 23 07:08:40 crc kubenswrapper[4681]: I1123 07:08:40.200496 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-afdb-account-create-p4xln"] Nov 23 07:08:40 crc kubenswrapper[4681]: I1123 07:08:40.218492 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-340e-account-create-5vbwn"] Nov 23 07:08:40 crc kubenswrapper[4681]: I1123 07:08:40.218531 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-afdb-account-create-p4xln"] Nov 23 07:08:40 crc kubenswrapper[4681]: I1123 07:08:40.233493 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-340e-account-create-5vbwn"] Nov 23 07:08:40 crc kubenswrapper[4681]: I1123 07:08:40.234839 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-j55vj"] Nov 23 07:08:40 crc kubenswrapper[4681]: I1123 07:08:40.239578 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-j55vj"] Nov 23 07:08:41 crc kubenswrapper[4681]: I1123 07:08:41.260782 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6234ce9d-9669-4dbf-957d-7bfd7158639b" path="/var/lib/kubelet/pods/6234ce9d-9669-4dbf-957d-7bfd7158639b/volumes" Nov 23 07:08:41 crc kubenswrapper[4681]: I1123 07:08:41.262806 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6301e8d9-766f-447e-a721-6fd63dabc5e2" path="/var/lib/kubelet/pods/6301e8d9-766f-447e-a721-6fd63dabc5e2/volumes" Nov 23 07:08:41 crc kubenswrapper[4681]: I1123 07:08:41.264271 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ffee548-f423-43f1-955e-4017e65eb1b4" path="/var/lib/kubelet/pods/8ffee548-f423-43f1-955e-4017e65eb1b4/volumes" Nov 23 07:08:41 crc kubenswrapper[4681]: I1123 07:08:41.265478 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2cbf1de-321e-4495-b9ab-b2e4c9758321" path="/var/lib/kubelet/pods/a2cbf1de-321e-4495-b9ab-b2e4c9758321/volumes" Nov 23 07:08:41 crc kubenswrapper[4681]: I1123 07:08:41.267412 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1d9ea06-43fe-41a8-b588-178c01182a70" path="/var/lib/kubelet/pods/d1d9ea06-43fe-41a8-b588-178c01182a70/volumes" Nov 23 07:08:41 crc kubenswrapper[4681]: I1123 07:08:41.268401 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da18abb7-6233-4690-acad-41137f3ba686" path="/var/lib/kubelet/pods/da18abb7-6233-4690-acad-41137f3ba686/volumes" Nov 23 07:08:42 crc kubenswrapper[4681]: I1123 07:08:42.296127 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:08:42 crc kubenswrapper[4681]: I1123 07:08:42.296180 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Nov 23 07:08:44 crc kubenswrapper[4681]: I1123 07:08:44.574566 4681 scope.go:117] "RemoveContainer" containerID="4c03e16e1b1a18b641ddf0a6727c0267f55102912d0b40ae79845c6ae410eda0" Nov 23 07:08:44 crc kubenswrapper[4681]: I1123 07:08:44.618833 4681 scope.go:117] "RemoveContainer" containerID="0a7c66b2e81c222f4494a82122fc66d89d15d0355e29d4a7200380d7fb3dee10" Nov 23 07:08:44 crc kubenswrapper[4681]: I1123 07:08:44.643234 4681 scope.go:117] "RemoveContainer" containerID="659514459c9af17082ca87133002fc6715c96fea9e7bcb8777dc16582edc712c" Nov 23 07:08:44 crc kubenswrapper[4681]: I1123 07:08:44.681666 4681 scope.go:117] "RemoveContainer" containerID="c62a5c5225b917d346e522ff70487041f2519ef01081b144ee1c48c7162a32e2" Nov 23 07:08:44 crc kubenswrapper[4681]: I1123 07:08:44.724435 4681 scope.go:117] "RemoveContainer" containerID="08b4e9d8a59d86e7879502f0579478e31e10fd63b9d8c7f0526c5c3feeeb58fc" Nov 23 07:08:44 crc kubenswrapper[4681]: I1123 07:08:44.763329 4681 scope.go:117] "RemoveContainer" containerID="5d11b103752e85d88efb4caba8ad8e32a71b412b2aff6ce6ea8e9d9bab58a550" Nov 23 07:08:44 crc kubenswrapper[4681]: I1123 07:08:44.803149 4681 scope.go:117] "RemoveContainer" containerID="3533c253c9fb422c916f155873a2cb78d220a6f9dc2713c45c6c67d32bde19f0" Nov 23 07:08:44 crc kubenswrapper[4681]: I1123 07:08:44.823301 4681 scope.go:117] "RemoveContainer" containerID="c3ed2852f4355225316daa379e72c1fff794894985c89ce543fd5d11f7fec8a4" Nov 23 07:08:44 crc kubenswrapper[4681]: I1123 07:08:44.854666 4681 scope.go:117] "RemoveContainer" containerID="07735bb8a267dd15321e42a2d39df29fb6e4ff1e884ea5f990453ba418c95609" Nov 23 07:08:44 crc kubenswrapper[4681]: I1123 07:08:44.878979 4681 scope.go:117] "RemoveContainer" containerID="5e6cc19701d2deefc40d5f97142a965eb33c5fbedd505213e104c8964b30492a" Nov 23 07:08:44 crc kubenswrapper[4681]: I1123 07:08:44.905620 4681 scope.go:117] "RemoveContainer" containerID="b3f3890034db190f4eaf8b2995c00966dc2ee9134c97a8a679fd748324fa7b28" Nov 23 07:08:44 crc kubenswrapper[4681]: I1123 07:08:44.921971 4681 scope.go:117] "RemoveContainer" containerID="72bb3843124270b5be3e8addc8c2749529cdc6f137a1a2c0fc6a40c23e8688e9" Nov 23 07:08:44 crc kubenswrapper[4681]: I1123 07:08:44.940733 4681 scope.go:117] "RemoveContainer" containerID="87639a698335f091f4d7951e41ac561500d62b8e18ce8bea7045caefbbbfa662" Nov 23 07:08:44 crc kubenswrapper[4681]: I1123 07:08:44.956080 4681 scope.go:117] "RemoveContainer" containerID="c61d31dd6eb69493d91d41d6f74ae19b9dbeed22f7778fba0bfaa161e7de26a9" Nov 23 07:08:44 crc kubenswrapper[4681]: I1123 07:08:44.976149 4681 scope.go:117] "RemoveContainer" containerID="81ae5e7ab8b2ac99496733254b981c9cddfe97b64c568072e24b2715fe5f4753" Nov 23 07:08:48 crc kubenswrapper[4681]: I1123 07:08:48.035082 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-4hbr9"] Nov 23 07:08:48 crc kubenswrapper[4681]: I1123 07:08:48.043621 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-4hbr9"] Nov 23 07:08:49 crc kubenswrapper[4681]: I1123 07:08:49.259855 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cdb5691-8434-40bd-9103-1ebee6a25d76" path="/var/lib/kubelet/pods/7cdb5691-8434-40bd-9103-1ebee6a25d76/volumes" Nov 23 07:09:01 crc kubenswrapper[4681]: I1123 07:09:01.205162 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7lxlj"] Nov 23 07:09:01 crc kubenswrapper[4681]: E1123 07:09:01.206661 4681 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="5734f7c7-ea92-4f75-83f2-1f90d10539c0" containerName="registry-server" Nov 23 07:09:01 crc kubenswrapper[4681]: I1123 07:09:01.206679 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="5734f7c7-ea92-4f75-83f2-1f90d10539c0" containerName="registry-server" Nov 23 07:09:01 crc kubenswrapper[4681]: E1123 07:09:01.206705 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5734f7c7-ea92-4f75-83f2-1f90d10539c0" containerName="extract-utilities" Nov 23 07:09:01 crc kubenswrapper[4681]: I1123 07:09:01.206712 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="5734f7c7-ea92-4f75-83f2-1f90d10539c0" containerName="extract-utilities" Nov 23 07:09:01 crc kubenswrapper[4681]: E1123 07:09:01.206724 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="578cfdca-6882-450a-9db4-2b2a31b13614" containerName="extract-utilities" Nov 23 07:09:01 crc kubenswrapper[4681]: I1123 07:09:01.206730 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="578cfdca-6882-450a-9db4-2b2a31b13614" containerName="extract-utilities" Nov 23 07:09:01 crc kubenswrapper[4681]: E1123 07:09:01.206756 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5734f7c7-ea92-4f75-83f2-1f90d10539c0" containerName="extract-content" Nov 23 07:09:01 crc kubenswrapper[4681]: I1123 07:09:01.206762 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="5734f7c7-ea92-4f75-83f2-1f90d10539c0" containerName="extract-content" Nov 23 07:09:01 crc kubenswrapper[4681]: E1123 07:09:01.206773 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="578cfdca-6882-450a-9db4-2b2a31b13614" containerName="registry-server" Nov 23 07:09:01 crc kubenswrapper[4681]: I1123 07:09:01.206779 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="578cfdca-6882-450a-9db4-2b2a31b13614" containerName="registry-server" Nov 23 07:09:01 crc kubenswrapper[4681]: E1123 07:09:01.206802 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="578cfdca-6882-450a-9db4-2b2a31b13614" containerName="extract-content" Nov 23 07:09:01 crc kubenswrapper[4681]: I1123 07:09:01.206807 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="578cfdca-6882-450a-9db4-2b2a31b13614" containerName="extract-content" Nov 23 07:09:01 crc kubenswrapper[4681]: I1123 07:09:01.207044 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="5734f7c7-ea92-4f75-83f2-1f90d10539c0" containerName="registry-server" Nov 23 07:09:01 crc kubenswrapper[4681]: I1123 07:09:01.207068 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="578cfdca-6882-450a-9db4-2b2a31b13614" containerName="registry-server" Nov 23 07:09:01 crc kubenswrapper[4681]: I1123 07:09:01.208635 4681 util.go:30] "No sandbox for pod can be found. 
Nov 23 07:09:01 crc kubenswrapper[4681]: I1123 07:09:01.222119 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7lxlj"]
Nov 23 07:09:01 crc kubenswrapper[4681]: I1123 07:09:01.354327 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n24lq\" (UniqueName: \"kubernetes.io/projected/d1952f1b-33fa-4236-84be-4f840692c716-kube-api-access-n24lq\") pod \"community-operators-7lxlj\" (UID: \"d1952f1b-33fa-4236-84be-4f840692c716\") " pod="openshift-marketplace/community-operators-7lxlj"
Nov 23 07:09:01 crc kubenswrapper[4681]: I1123 07:09:01.354420 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1952f1b-33fa-4236-84be-4f840692c716-catalog-content\") pod \"community-operators-7lxlj\" (UID: \"d1952f1b-33fa-4236-84be-4f840692c716\") " pod="openshift-marketplace/community-operators-7lxlj"
Nov 23 07:09:01 crc kubenswrapper[4681]: I1123 07:09:01.355148 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1952f1b-33fa-4236-84be-4f840692c716-utilities\") pod \"community-operators-7lxlj\" (UID: \"d1952f1b-33fa-4236-84be-4f840692c716\") " pod="openshift-marketplace/community-operators-7lxlj"
Nov 23 07:09:01 crc kubenswrapper[4681]: I1123 07:09:01.456522 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1952f1b-33fa-4236-84be-4f840692c716-utilities\") pod \"community-operators-7lxlj\" (UID: \"d1952f1b-33fa-4236-84be-4f840692c716\") " pod="openshift-marketplace/community-operators-7lxlj"
Nov 23 07:09:01 crc kubenswrapper[4681]: I1123 07:09:01.456629 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n24lq\" (UniqueName: \"kubernetes.io/projected/d1952f1b-33fa-4236-84be-4f840692c716-kube-api-access-n24lq\") pod \"community-operators-7lxlj\" (UID: \"d1952f1b-33fa-4236-84be-4f840692c716\") " pod="openshift-marketplace/community-operators-7lxlj"
Nov 23 07:09:01 crc kubenswrapper[4681]: I1123 07:09:01.456705 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1952f1b-33fa-4236-84be-4f840692c716-catalog-content\") pod \"community-operators-7lxlj\" (UID: \"d1952f1b-33fa-4236-84be-4f840692c716\") " pod="openshift-marketplace/community-operators-7lxlj"
Nov 23 07:09:01 crc kubenswrapper[4681]: I1123 07:09:01.456945 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1952f1b-33fa-4236-84be-4f840692c716-utilities\") pod \"community-operators-7lxlj\" (UID: \"d1952f1b-33fa-4236-84be-4f840692c716\") " pod="openshift-marketplace/community-operators-7lxlj"
Nov 23 07:09:01 crc kubenswrapper[4681]: I1123 07:09:01.457005 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1952f1b-33fa-4236-84be-4f840692c716-catalog-content\") pod \"community-operators-7lxlj\" (UID: \"d1952f1b-33fa-4236-84be-4f840692c716\") " pod="openshift-marketplace/community-operators-7lxlj"
Nov 23 07:09:01 crc kubenswrapper[4681]: I1123 07:09:01.474094 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n24lq\" (UniqueName: \"kubernetes.io/projected/d1952f1b-33fa-4236-84be-4f840692c716-kube-api-access-n24lq\") pod \"community-operators-7lxlj\" (UID: \"d1952f1b-33fa-4236-84be-4f840692c716\") " pod="openshift-marketplace/community-operators-7lxlj"
Nov 23 07:09:01 crc kubenswrapper[4681]: I1123 07:09:01.529498 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7lxlj"
Nov 23 07:09:01 crc kubenswrapper[4681]: I1123 07:09:01.950441 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7lxlj"]
Nov 23 07:09:02 crc kubenswrapper[4681]: I1123 07:09:02.658323 4681 generic.go:334] "Generic (PLEG): container finished" podID="d1952f1b-33fa-4236-84be-4f840692c716" containerID="480e5c34a35a704f7e6c6fbc1446cbc45a6d03851991508f343ea047451411c6" exitCode=0
Nov 23 07:09:02 crc kubenswrapper[4681]: I1123 07:09:02.658659 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7lxlj" event={"ID":"d1952f1b-33fa-4236-84be-4f840692c716","Type":"ContainerDied","Data":"480e5c34a35a704f7e6c6fbc1446cbc45a6d03851991508f343ea047451411c6"}
Nov 23 07:09:02 crc kubenswrapper[4681]: I1123 07:09:02.658719 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7lxlj" event={"ID":"d1952f1b-33fa-4236-84be-4f840692c716","Type":"ContainerStarted","Data":"d946ffdd3ba11e8ad67a37824c0a31eb5d54a45e4884a4ef0a931a795f00cd58"}
Nov 23 07:09:02 crc kubenswrapper[4681]: I1123 07:09:02.660219 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 23 07:09:03 crc kubenswrapper[4681]: I1123 07:09:03.668570 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7lxlj" event={"ID":"d1952f1b-33fa-4236-84be-4f840692c716","Type":"ContainerStarted","Data":"ddbde3560a04196cbb155f8230f28837bb2f3ed2d8c0d57a958de24af68483fe"}
Nov 23 07:09:04 crc kubenswrapper[4681]: I1123 07:09:04.679241 4681 generic.go:334] "Generic (PLEG): container finished" podID="d1952f1b-33fa-4236-84be-4f840692c716" containerID="ddbde3560a04196cbb155f8230f28837bb2f3ed2d8c0d57a958de24af68483fe" exitCode=0
Nov 23 07:09:04 crc kubenswrapper[4681]: I1123 07:09:04.679290 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7lxlj" event={"ID":"d1952f1b-33fa-4236-84be-4f840692c716","Type":"ContainerDied","Data":"ddbde3560a04196cbb155f8230f28837bb2f3ed2d8c0d57a958de24af68483fe"}
Nov 23 07:09:05 crc kubenswrapper[4681]: I1123 07:09:05.687095 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7lxlj" event={"ID":"d1952f1b-33fa-4236-84be-4f840692c716","Type":"ContainerStarted","Data":"a606bac84ac2ae8e024ed6208fc82d9a5495e94ebe09d13c185947a4110691b9"}
Nov 23 07:09:05 crc kubenswrapper[4681]: I1123 07:09:05.702834 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7lxlj" podStartSLOduration=2.211775461 podStartE2EDuration="4.702819828s" podCreationTimestamp="2025-11-23 07:09:01 +0000 UTC" firstStartedPulling="2025-11-23 07:09:02.659944437 +0000 UTC m=+1479.729453674" lastFinishedPulling="2025-11-23 07:09:05.150988803 +0000 UTC m=+1482.220498041" observedRunningTime="2025-11-23 07:09:05.699176671 +0000 UTC m=+1482.768685909" watchObservedRunningTime="2025-11-23 07:09:05.702819828 +0000 UTC m=+1482.772329066"
Nov 23 07:09:11 crc kubenswrapper[4681]: I1123 07:09:11.530030 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7lxlj"
Nov 23 07:09:11 crc kubenswrapper[4681]: I1123 07:09:11.530640 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7lxlj"
Nov 23 07:09:11 crc kubenswrapper[4681]: I1123 07:09:11.564755 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7lxlj"
Nov 23 07:09:11 crc kubenswrapper[4681]: I1123 07:09:11.769412 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7lxlj"
Nov 23 07:09:11 crc kubenswrapper[4681]: I1123 07:09:11.810396 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7lxlj"]
Nov 23 07:09:12 crc kubenswrapper[4681]: I1123 07:09:12.296071 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 07:09:12 crc kubenswrapper[4681]: I1123 07:09:12.296116 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 07:09:13 crc kubenswrapper[4681]: I1123 07:09:13.749256 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7lxlj" podUID="d1952f1b-33fa-4236-84be-4f840692c716" containerName="registry-server" containerID="cri-o://a606bac84ac2ae8e024ed6208fc82d9a5495e94ebe09d13c185947a4110691b9" gracePeriod=2
Nov 23 07:09:14 crc kubenswrapper[4681]: I1123 07:09:14.123178 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7lxlj"
Nov 23 07:09:14 crc kubenswrapper[4681]: I1123 07:09:14.206247 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1952f1b-33fa-4236-84be-4f840692c716-catalog-content\") pod \"d1952f1b-33fa-4236-84be-4f840692c716\" (UID: \"d1952f1b-33fa-4236-84be-4f840692c716\") "
Nov 23 07:09:14 crc kubenswrapper[4681]: I1123 07:09:14.206495 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n24lq\" (UniqueName: \"kubernetes.io/projected/d1952f1b-33fa-4236-84be-4f840692c716-kube-api-access-n24lq\") pod \"d1952f1b-33fa-4236-84be-4f840692c716\" (UID: \"d1952f1b-33fa-4236-84be-4f840692c716\") "
Nov 23 07:09:14 crc kubenswrapper[4681]: I1123 07:09:14.206542 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1952f1b-33fa-4236-84be-4f840692c716-utilities\") pod \"d1952f1b-33fa-4236-84be-4f840692c716\" (UID: \"d1952f1b-33fa-4236-84be-4f840692c716\") "
Nov 23 07:09:14 crc kubenswrapper[4681]: I1123 07:09:14.210732 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1952f1b-33fa-4236-84be-4f840692c716-utilities" (OuterVolumeSpecName: "utilities") pod "d1952f1b-33fa-4236-84be-4f840692c716" (UID: "d1952f1b-33fa-4236-84be-4f840692c716"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:09:14 crc kubenswrapper[4681]: I1123 07:09:14.222881 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1952f1b-33fa-4236-84be-4f840692c716-kube-api-access-n24lq" (OuterVolumeSpecName: "kube-api-access-n24lq") pod "d1952f1b-33fa-4236-84be-4f840692c716" (UID: "d1952f1b-33fa-4236-84be-4f840692c716"). InnerVolumeSpecName "kube-api-access-n24lq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:09:14 crc kubenswrapper[4681]: I1123 07:09:14.258709 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1952f1b-33fa-4236-84be-4f840692c716-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1952f1b-33fa-4236-84be-4f840692c716" (UID: "d1952f1b-33fa-4236-84be-4f840692c716"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:09:14 crc kubenswrapper[4681]: I1123 07:09:14.308499 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1952f1b-33fa-4236-84be-4f840692c716-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:14 crc kubenswrapper[4681]: I1123 07:09:14.308522 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n24lq\" (UniqueName: \"kubernetes.io/projected/d1952f1b-33fa-4236-84be-4f840692c716-kube-api-access-n24lq\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:14 crc kubenswrapper[4681]: I1123 07:09:14.308533 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1952f1b-33fa-4236-84be-4f840692c716-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:14 crc kubenswrapper[4681]: I1123 07:09:14.758835 4681 generic.go:334] "Generic (PLEG): container finished" podID="d1952f1b-33fa-4236-84be-4f840692c716" containerID="a606bac84ac2ae8e024ed6208fc82d9a5495e94ebe09d13c185947a4110691b9" exitCode=0 Nov 23 07:09:14 crc kubenswrapper[4681]: I1123 07:09:14.758894 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7lxlj" event={"ID":"d1952f1b-33fa-4236-84be-4f840692c716","Type":"ContainerDied","Data":"a606bac84ac2ae8e024ed6208fc82d9a5495e94ebe09d13c185947a4110691b9"} Nov 23 07:09:14 crc kubenswrapper[4681]: I1123 07:09:14.758925 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7lxlj" event={"ID":"d1952f1b-33fa-4236-84be-4f840692c716","Type":"ContainerDied","Data":"d946ffdd3ba11e8ad67a37824c0a31eb5d54a45e4884a4ef0a931a795f00cd58"} Nov 23 07:09:14 crc kubenswrapper[4681]: I1123 07:09:14.758960 4681 scope.go:117] "RemoveContainer" containerID="a606bac84ac2ae8e024ed6208fc82d9a5495e94ebe09d13c185947a4110691b9" Nov 23 07:09:14 crc kubenswrapper[4681]: I1123 07:09:14.759667 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7lxlj" Nov 23 07:09:14 crc kubenswrapper[4681]: I1123 07:09:14.778365 4681 scope.go:117] "RemoveContainer" containerID="ddbde3560a04196cbb155f8230f28837bb2f3ed2d8c0d57a958de24af68483fe" Nov 23 07:09:14 crc kubenswrapper[4681]: I1123 07:09:14.787760 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7lxlj"] Nov 23 07:09:14 crc kubenswrapper[4681]: I1123 07:09:14.793706 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7lxlj"] Nov 23 07:09:14 crc kubenswrapper[4681]: I1123 07:09:14.807319 4681 scope.go:117] "RemoveContainer" containerID="480e5c34a35a704f7e6c6fbc1446cbc45a6d03851991508f343ea047451411c6" Nov 23 07:09:14 crc kubenswrapper[4681]: I1123 07:09:14.836138 4681 scope.go:117] "RemoveContainer" containerID="a606bac84ac2ae8e024ed6208fc82d9a5495e94ebe09d13c185947a4110691b9" Nov 23 07:09:14 crc kubenswrapper[4681]: E1123 07:09:14.837853 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a606bac84ac2ae8e024ed6208fc82d9a5495e94ebe09d13c185947a4110691b9\": container with ID starting with a606bac84ac2ae8e024ed6208fc82d9a5495e94ebe09d13c185947a4110691b9 not found: ID does not exist" containerID="a606bac84ac2ae8e024ed6208fc82d9a5495e94ebe09d13c185947a4110691b9" Nov 23 07:09:14 crc kubenswrapper[4681]: I1123 07:09:14.837919 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a606bac84ac2ae8e024ed6208fc82d9a5495e94ebe09d13c185947a4110691b9"} err="failed to get container status \"a606bac84ac2ae8e024ed6208fc82d9a5495e94ebe09d13c185947a4110691b9\": rpc error: code = NotFound desc = could not find container \"a606bac84ac2ae8e024ed6208fc82d9a5495e94ebe09d13c185947a4110691b9\": container with ID starting with a606bac84ac2ae8e024ed6208fc82d9a5495e94ebe09d13c185947a4110691b9 not found: ID does not exist" Nov 23 07:09:14 crc kubenswrapper[4681]: I1123 07:09:14.837951 4681 scope.go:117] "RemoveContainer" containerID="ddbde3560a04196cbb155f8230f28837bb2f3ed2d8c0d57a958de24af68483fe" Nov 23 07:09:14 crc kubenswrapper[4681]: E1123 07:09:14.838386 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddbde3560a04196cbb155f8230f28837bb2f3ed2d8c0d57a958de24af68483fe\": container with ID starting with ddbde3560a04196cbb155f8230f28837bb2f3ed2d8c0d57a958de24af68483fe not found: ID does not exist" containerID="ddbde3560a04196cbb155f8230f28837bb2f3ed2d8c0d57a958de24af68483fe" Nov 23 07:09:14 crc kubenswrapper[4681]: I1123 07:09:14.838419 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddbde3560a04196cbb155f8230f28837bb2f3ed2d8c0d57a958de24af68483fe"} err="failed to get container status \"ddbde3560a04196cbb155f8230f28837bb2f3ed2d8c0d57a958de24af68483fe\": rpc error: code = NotFound desc = could not find container \"ddbde3560a04196cbb155f8230f28837bb2f3ed2d8c0d57a958de24af68483fe\": container with ID starting with ddbde3560a04196cbb155f8230f28837bb2f3ed2d8c0d57a958de24af68483fe not found: ID does not exist" Nov 23 07:09:14 crc kubenswrapper[4681]: I1123 07:09:14.838436 4681 scope.go:117] "RemoveContainer" containerID="480e5c34a35a704f7e6c6fbc1446cbc45a6d03851991508f343ea047451411c6" Nov 23 07:09:14 crc kubenswrapper[4681]: E1123 07:09:14.838697 4681 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"480e5c34a35a704f7e6c6fbc1446cbc45a6d03851991508f343ea047451411c6\": container with ID starting with 480e5c34a35a704f7e6c6fbc1446cbc45a6d03851991508f343ea047451411c6 not found: ID does not exist" containerID="480e5c34a35a704f7e6c6fbc1446cbc45a6d03851991508f343ea047451411c6" Nov 23 07:09:14 crc kubenswrapper[4681]: I1123 07:09:14.838740 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"480e5c34a35a704f7e6c6fbc1446cbc45a6d03851991508f343ea047451411c6"} err="failed to get container status \"480e5c34a35a704f7e6c6fbc1446cbc45a6d03851991508f343ea047451411c6\": rpc error: code = NotFound desc = could not find container \"480e5c34a35a704f7e6c6fbc1446cbc45a6d03851991508f343ea047451411c6\": container with ID starting with 480e5c34a35a704f7e6c6fbc1446cbc45a6d03851991508f343ea047451411c6 not found: ID does not exist" Nov 23 07:09:15 crc kubenswrapper[4681]: I1123 07:09:15.260653 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1952f1b-33fa-4236-84be-4f840692c716" path="/var/lib/kubelet/pods/d1952f1b-33fa-4236-84be-4f840692c716/volumes" Nov 23 07:09:15 crc kubenswrapper[4681]: I1123 07:09:15.399765 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kcg5r"] Nov 23 07:09:15 crc kubenswrapper[4681]: E1123 07:09:15.400160 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1952f1b-33fa-4236-84be-4f840692c716" containerName="extract-utilities" Nov 23 07:09:15 crc kubenswrapper[4681]: I1123 07:09:15.400179 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1952f1b-33fa-4236-84be-4f840692c716" containerName="extract-utilities" Nov 23 07:09:15 crc kubenswrapper[4681]: E1123 07:09:15.400200 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1952f1b-33fa-4236-84be-4f840692c716" containerName="registry-server" Nov 23 07:09:15 crc kubenswrapper[4681]: I1123 07:09:15.400207 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1952f1b-33fa-4236-84be-4f840692c716" containerName="registry-server" Nov 23 07:09:15 crc kubenswrapper[4681]: E1123 07:09:15.400232 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1952f1b-33fa-4236-84be-4f840692c716" containerName="extract-content" Nov 23 07:09:15 crc kubenswrapper[4681]: I1123 07:09:15.400237 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1952f1b-33fa-4236-84be-4f840692c716" containerName="extract-content" Nov 23 07:09:15 crc kubenswrapper[4681]: I1123 07:09:15.400438 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1952f1b-33fa-4236-84be-4f840692c716" containerName="registry-server" Nov 23 07:09:15 crc kubenswrapper[4681]: I1123 07:09:15.401711 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kcg5r" Nov 23 07:09:15 crc kubenswrapper[4681]: I1123 07:09:15.411921 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kcg5r"] Nov 23 07:09:15 crc kubenswrapper[4681]: I1123 07:09:15.524345 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65c98798-b5fb-4cdf-866d-a3d1ba150f22-utilities\") pod \"redhat-operators-kcg5r\" (UID: \"65c98798-b5fb-4cdf-866d-a3d1ba150f22\") " pod="openshift-marketplace/redhat-operators-kcg5r" Nov 23 07:09:15 crc kubenswrapper[4681]: I1123 07:09:15.524431 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfxlz\" (UniqueName: \"kubernetes.io/projected/65c98798-b5fb-4cdf-866d-a3d1ba150f22-kube-api-access-wfxlz\") pod \"redhat-operators-kcg5r\" (UID: \"65c98798-b5fb-4cdf-866d-a3d1ba150f22\") " pod="openshift-marketplace/redhat-operators-kcg5r" Nov 23 07:09:15 crc kubenswrapper[4681]: I1123 07:09:15.524800 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65c98798-b5fb-4cdf-866d-a3d1ba150f22-catalog-content\") pod \"redhat-operators-kcg5r\" (UID: \"65c98798-b5fb-4cdf-866d-a3d1ba150f22\") " pod="openshift-marketplace/redhat-operators-kcg5r" Nov 23 07:09:15 crc kubenswrapper[4681]: I1123 07:09:15.626166 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65c98798-b5fb-4cdf-866d-a3d1ba150f22-utilities\") pod \"redhat-operators-kcg5r\" (UID: \"65c98798-b5fb-4cdf-866d-a3d1ba150f22\") " pod="openshift-marketplace/redhat-operators-kcg5r" Nov 23 07:09:15 crc kubenswrapper[4681]: I1123 07:09:15.626251 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfxlz\" (UniqueName: \"kubernetes.io/projected/65c98798-b5fb-4cdf-866d-a3d1ba150f22-kube-api-access-wfxlz\") pod \"redhat-operators-kcg5r\" (UID: \"65c98798-b5fb-4cdf-866d-a3d1ba150f22\") " pod="openshift-marketplace/redhat-operators-kcg5r" Nov 23 07:09:15 crc kubenswrapper[4681]: I1123 07:09:15.626273 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65c98798-b5fb-4cdf-866d-a3d1ba150f22-catalog-content\") pod \"redhat-operators-kcg5r\" (UID: \"65c98798-b5fb-4cdf-866d-a3d1ba150f22\") " pod="openshift-marketplace/redhat-operators-kcg5r" Nov 23 07:09:15 crc kubenswrapper[4681]: I1123 07:09:15.626737 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65c98798-b5fb-4cdf-866d-a3d1ba150f22-utilities\") pod \"redhat-operators-kcg5r\" (UID: \"65c98798-b5fb-4cdf-866d-a3d1ba150f22\") " pod="openshift-marketplace/redhat-operators-kcg5r" Nov 23 07:09:15 crc kubenswrapper[4681]: I1123 07:09:15.626764 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65c98798-b5fb-4cdf-866d-a3d1ba150f22-catalog-content\") pod \"redhat-operators-kcg5r\" (UID: \"65c98798-b5fb-4cdf-866d-a3d1ba150f22\") " pod="openshift-marketplace/redhat-operators-kcg5r" Nov 23 07:09:15 crc kubenswrapper[4681]: I1123 07:09:15.642909 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wfxlz\" (UniqueName: \"kubernetes.io/projected/65c98798-b5fb-4cdf-866d-a3d1ba150f22-kube-api-access-wfxlz\") pod \"redhat-operators-kcg5r\" (UID: \"65c98798-b5fb-4cdf-866d-a3d1ba150f22\") " pod="openshift-marketplace/redhat-operators-kcg5r" Nov 23 07:09:15 crc kubenswrapper[4681]: I1123 07:09:15.717233 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kcg5r" Nov 23 07:09:16 crc kubenswrapper[4681]: I1123 07:09:16.132903 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kcg5r"] Nov 23 07:09:16 crc kubenswrapper[4681]: I1123 07:09:16.777418 4681 generic.go:334] "Generic (PLEG): container finished" podID="65c98798-b5fb-4cdf-866d-a3d1ba150f22" containerID="292e4a0d6d4b4f3b8d1e56a741a66e954bd65f9fa8057eea493d3bff7ab4add6" exitCode=0 Nov 23 07:09:16 crc kubenswrapper[4681]: I1123 07:09:16.777503 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kcg5r" event={"ID":"65c98798-b5fb-4cdf-866d-a3d1ba150f22","Type":"ContainerDied","Data":"292e4a0d6d4b4f3b8d1e56a741a66e954bd65f9fa8057eea493d3bff7ab4add6"} Nov 23 07:09:16 crc kubenswrapper[4681]: I1123 07:09:16.777534 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kcg5r" event={"ID":"65c98798-b5fb-4cdf-866d-a3d1ba150f22","Type":"ContainerStarted","Data":"74c280a5456dfac45fa151ae3d33b19be7bc2023cbd764150ee6ef2b6c050669"} Nov 23 07:09:17 crc kubenswrapper[4681]: I1123 07:09:17.788180 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kcg5r" event={"ID":"65c98798-b5fb-4cdf-866d-a3d1ba150f22","Type":"ContainerStarted","Data":"50d55aa44100a2bde3e30e1c64c99fadf900c531963f572c974f3ae5ccc53a51"} Nov 23 07:09:19 crc kubenswrapper[4681]: I1123 07:09:19.804453 4681 generic.go:334] "Generic (PLEG): container finished" podID="65c98798-b5fb-4cdf-866d-a3d1ba150f22" containerID="50d55aa44100a2bde3e30e1c64c99fadf900c531963f572c974f3ae5ccc53a51" exitCode=0 Nov 23 07:09:19 crc kubenswrapper[4681]: I1123 07:09:19.804491 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kcg5r" event={"ID":"65c98798-b5fb-4cdf-866d-a3d1ba150f22","Type":"ContainerDied","Data":"50d55aa44100a2bde3e30e1c64c99fadf900c531963f572c974f3ae5ccc53a51"} Nov 23 07:09:20 crc kubenswrapper[4681]: I1123 07:09:20.814589 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kcg5r" event={"ID":"65c98798-b5fb-4cdf-866d-a3d1ba150f22","Type":"ContainerStarted","Data":"735000af8ea8803844eeeb9d0bcb6289c6f265c40fceb6ca98cac137e86f706b"} Nov 23 07:09:22 crc kubenswrapper[4681]: I1123 07:09:22.031523 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kcg5r" podStartSLOduration=3.568339475 podStartE2EDuration="7.031504571s" podCreationTimestamp="2025-11-23 07:09:15 +0000 UTC" firstStartedPulling="2025-11-23 07:09:16.78158585 +0000 UTC m=+1493.851095087" lastFinishedPulling="2025-11-23 07:09:20.244750946 +0000 UTC m=+1497.314260183" observedRunningTime="2025-11-23 07:09:20.838076901 +0000 UTC m=+1497.907586138" watchObservedRunningTime="2025-11-23 07:09:22.031504571 +0000 UTC m=+1499.101013808" Nov 23 07:09:22 crc kubenswrapper[4681]: I1123 07:09:22.035861 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-xbhpv"] Nov 23 07:09:22 crc 
kubenswrapper[4681]: I1123 07:09:22.039618 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-xbhpv"] Nov 23 07:09:23 crc kubenswrapper[4681]: I1123 07:09:23.261415 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cc57e44-7957-4d3a-b9c9-2da622ea38a0" path="/var/lib/kubelet/pods/4cc57e44-7957-4d3a-b9c9-2da622ea38a0/volumes" Nov 23 07:09:24 crc kubenswrapper[4681]: I1123 07:09:24.841906 4681 generic.go:334] "Generic (PLEG): container finished" podID="477f2017-bbac-4d93-8be6-703fc200c9ed" containerID="2845743b21f0c5a97c21c4030b31b71ec2eebbf5de2964d194ce3c50ff1c912a" exitCode=0 Nov 23 07:09:24 crc kubenswrapper[4681]: I1123 07:09:24.842081 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7krxn" event={"ID":"477f2017-bbac-4d93-8be6-703fc200c9ed","Type":"ContainerDied","Data":"2845743b21f0c5a97c21c4030b31b71ec2eebbf5de2964d194ce3c50ff1c912a"} Nov 23 07:09:25 crc kubenswrapper[4681]: I1123 07:09:25.718194 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kcg5r" Nov 23 07:09:25 crc kubenswrapper[4681]: I1123 07:09:25.718243 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kcg5r" Nov 23 07:09:26 crc kubenswrapper[4681]: I1123 07:09:26.213281 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7krxn" Nov 23 07:09:26 crc kubenswrapper[4681]: I1123 07:09:26.409422 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/477f2017-bbac-4d93-8be6-703fc200c9ed-ssh-key\") pod \"477f2017-bbac-4d93-8be6-703fc200c9ed\" (UID: \"477f2017-bbac-4d93-8be6-703fc200c9ed\") " Nov 23 07:09:26 crc kubenswrapper[4681]: I1123 07:09:26.409778 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/477f2017-bbac-4d93-8be6-703fc200c9ed-inventory\") pod \"477f2017-bbac-4d93-8be6-703fc200c9ed\" (UID: \"477f2017-bbac-4d93-8be6-703fc200c9ed\") " Nov 23 07:09:26 crc kubenswrapper[4681]: I1123 07:09:26.409976 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-md626\" (UniqueName: \"kubernetes.io/projected/477f2017-bbac-4d93-8be6-703fc200c9ed-kube-api-access-md626\") pod \"477f2017-bbac-4d93-8be6-703fc200c9ed\" (UID: \"477f2017-bbac-4d93-8be6-703fc200c9ed\") " Nov 23 07:09:26 crc kubenswrapper[4681]: I1123 07:09:26.415401 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/477f2017-bbac-4d93-8be6-703fc200c9ed-kube-api-access-md626" (OuterVolumeSpecName: "kube-api-access-md626") pod "477f2017-bbac-4d93-8be6-703fc200c9ed" (UID: "477f2017-bbac-4d93-8be6-703fc200c9ed"). InnerVolumeSpecName "kube-api-access-md626". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:09:26 crc kubenswrapper[4681]: I1123 07:09:26.432640 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/477f2017-bbac-4d93-8be6-703fc200c9ed-inventory" (OuterVolumeSpecName: "inventory") pod "477f2017-bbac-4d93-8be6-703fc200c9ed" (UID: "477f2017-bbac-4d93-8be6-703fc200c9ed"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:09:26 crc kubenswrapper[4681]: I1123 07:09:26.434270 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/477f2017-bbac-4d93-8be6-703fc200c9ed-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "477f2017-bbac-4d93-8be6-703fc200c9ed" (UID: "477f2017-bbac-4d93-8be6-703fc200c9ed"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:09:26 crc kubenswrapper[4681]: I1123 07:09:26.512348 4681 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/477f2017-bbac-4d93-8be6-703fc200c9ed-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:26 crc kubenswrapper[4681]: I1123 07:09:26.512385 4681 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/477f2017-bbac-4d93-8be6-703fc200c9ed-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:26 crc kubenswrapper[4681]: I1123 07:09:26.512396 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-md626\" (UniqueName: \"kubernetes.io/projected/477f2017-bbac-4d93-8be6-703fc200c9ed-kube-api-access-md626\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:26 crc kubenswrapper[4681]: I1123 07:09:26.755218 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kcg5r" podUID="65c98798-b5fb-4cdf-866d-a3d1ba150f22" containerName="registry-server" probeResult="failure" output=< Nov 23 07:09:26 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Nov 23 07:09:26 crc kubenswrapper[4681]: > Nov 23 07:09:26 crc kubenswrapper[4681]: I1123 07:09:26.859907 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7krxn" event={"ID":"477f2017-bbac-4d93-8be6-703fc200c9ed","Type":"ContainerDied","Data":"c64fb9478eeed108078b2c47d36d452b9ca5ea4b71e619665fe4018b732938ab"} Nov 23 07:09:26 crc kubenswrapper[4681]: I1123 07:09:26.860144 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c64fb9478eeed108078b2c47d36d452b9ca5ea4b71e619665fe4018b732938ab" Nov 23 07:09:26 crc kubenswrapper[4681]: I1123 07:09:26.860164 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7krxn" Nov 23 07:09:26 crc kubenswrapper[4681]: I1123 07:09:26.925541 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qg4c"] Nov 23 07:09:26 crc kubenswrapper[4681]: E1123 07:09:26.925919 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="477f2017-bbac-4d93-8be6-703fc200c9ed" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 23 07:09:26 crc kubenswrapper[4681]: I1123 07:09:26.925939 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="477f2017-bbac-4d93-8be6-703fc200c9ed" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 23 07:09:26 crc kubenswrapper[4681]: I1123 07:09:26.926128 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="477f2017-bbac-4d93-8be6-703fc200c9ed" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 23 07:09:26 crc kubenswrapper[4681]: I1123 07:09:26.927608 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qg4c" Nov 23 07:09:26 crc kubenswrapper[4681]: I1123 07:09:26.931508 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rchgk" Nov 23 07:09:26 crc kubenswrapper[4681]: I1123 07:09:26.931665 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 07:09:26 crc kubenswrapper[4681]: I1123 07:09:26.931773 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 07:09:26 crc kubenswrapper[4681]: I1123 07:09:26.931911 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 07:09:26 crc kubenswrapper[4681]: I1123 07:09:26.935802 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qg4c"] Nov 23 07:09:27 crc kubenswrapper[4681]: I1123 07:09:27.123676 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m6z4\" (UniqueName: \"kubernetes.io/projected/72bc477d-1846-4f12-94e3-3aea316bbf98-kube-api-access-6m6z4\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7qg4c\" (UID: \"72bc477d-1846-4f12-94e3-3aea316bbf98\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qg4c" Nov 23 07:09:27 crc kubenswrapper[4681]: I1123 07:09:27.124061 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72bc477d-1846-4f12-94e3-3aea316bbf98-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7qg4c\" (UID: \"72bc477d-1846-4f12-94e3-3aea316bbf98\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qg4c" Nov 23 07:09:27 crc kubenswrapper[4681]: I1123 07:09:27.124248 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/72bc477d-1846-4f12-94e3-3aea316bbf98-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7qg4c\" (UID: \"72bc477d-1846-4f12-94e3-3aea316bbf98\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qg4c" Nov 23 07:09:27 crc kubenswrapper[4681]: I1123 07:09:27.227371 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72bc477d-1846-4f12-94e3-3aea316bbf98-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7qg4c\" (UID: \"72bc477d-1846-4f12-94e3-3aea316bbf98\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qg4c" Nov 23 07:09:27 crc kubenswrapper[4681]: I1123 07:09:27.227522 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/72bc477d-1846-4f12-94e3-3aea316bbf98-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7qg4c\" (UID: \"72bc477d-1846-4f12-94e3-3aea316bbf98\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qg4c" Nov 23 07:09:27 crc kubenswrapper[4681]: I1123 07:09:27.227652 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m6z4\" (UniqueName: \"kubernetes.io/projected/72bc477d-1846-4f12-94e3-3aea316bbf98-kube-api-access-6m6z4\") 
pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7qg4c\" (UID: \"72bc477d-1846-4f12-94e3-3aea316bbf98\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qg4c" Nov 23 07:09:27 crc kubenswrapper[4681]: I1123 07:09:27.232963 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/72bc477d-1846-4f12-94e3-3aea316bbf98-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7qg4c\" (UID: \"72bc477d-1846-4f12-94e3-3aea316bbf98\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qg4c" Nov 23 07:09:27 crc kubenswrapper[4681]: I1123 07:09:27.234289 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72bc477d-1846-4f12-94e3-3aea316bbf98-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7qg4c\" (UID: \"72bc477d-1846-4f12-94e3-3aea316bbf98\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qg4c" Nov 23 07:09:27 crc kubenswrapper[4681]: I1123 07:09:27.241767 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m6z4\" (UniqueName: \"kubernetes.io/projected/72bc477d-1846-4f12-94e3-3aea316bbf98-kube-api-access-6m6z4\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7qg4c\" (UID: \"72bc477d-1846-4f12-94e3-3aea316bbf98\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qg4c" Nov 23 07:09:27 crc kubenswrapper[4681]: I1123 07:09:27.243687 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qg4c" Nov 23 07:09:27 crc kubenswrapper[4681]: I1123 07:09:27.719664 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qg4c"] Nov 23 07:09:27 crc kubenswrapper[4681]: I1123 07:09:27.869786 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qg4c" event={"ID":"72bc477d-1846-4f12-94e3-3aea316bbf98","Type":"ContainerStarted","Data":"97baa85a356a437b926aaed37f9fd8eb191f7d96443478e0e5458d900aad4518"} Nov 23 07:09:28 crc kubenswrapper[4681]: I1123 07:09:28.879099 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qg4c" event={"ID":"72bc477d-1846-4f12-94e3-3aea316bbf98","Type":"ContainerStarted","Data":"3bf55c3c044f7526462eec920daaa5d12577cf4df1503c789f903292098d1cf0"} Nov 23 07:09:28 crc kubenswrapper[4681]: I1123 07:09:28.893070 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qg4c" podStartSLOduration=2.433783348 podStartE2EDuration="2.89304612s" podCreationTimestamp="2025-11-23 07:09:26 +0000 UTC" firstStartedPulling="2025-11-23 07:09:27.727079248 +0000 UTC m=+1504.796588485" lastFinishedPulling="2025-11-23 07:09:28.18634202 +0000 UTC m=+1505.255851257" observedRunningTime="2025-11-23 07:09:28.892484853 +0000 UTC m=+1505.961994090" watchObservedRunningTime="2025-11-23 07:09:28.89304612 +0000 UTC m=+1505.962555358" Nov 23 07:09:31 crc kubenswrapper[4681]: I1123 07:09:31.020338 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-qn8qf"] Nov 23 07:09:31 crc kubenswrapper[4681]: I1123 07:09:31.029271 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/placement-db-sync-qn8qf"] Nov 23 07:09:31 crc kubenswrapper[4681]: I1123 07:09:31.263909 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fd09f2-734b-4427-8b5b-65711b24bbb5" path="/var/lib/kubelet/pods/31fd09f2-734b-4427-8b5b-65711b24bbb5/volumes" Nov 23 07:09:33 crc kubenswrapper[4681]: I1123 07:09:33.025678 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-b8mwm"] Nov 23 07:09:33 crc kubenswrapper[4681]: I1123 07:09:33.035579 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-b8mwm"] Nov 23 07:09:33 crc kubenswrapper[4681]: I1123 07:09:33.260583 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dd5ce32-831b-448a-943f-7e3250ca172b" path="/var/lib/kubelet/pods/5dd5ce32-831b-448a-943f-7e3250ca172b/volumes" Nov 23 07:09:35 crc kubenswrapper[4681]: I1123 07:09:35.759141 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kcg5r" Nov 23 07:09:35 crc kubenswrapper[4681]: I1123 07:09:35.801118 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kcg5r" Nov 23 07:09:35 crc kubenswrapper[4681]: I1123 07:09:35.991046 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kcg5r"] Nov 23 07:09:36 crc kubenswrapper[4681]: I1123 07:09:36.940561 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kcg5r" podUID="65c98798-b5fb-4cdf-866d-a3d1ba150f22" containerName="registry-server" containerID="cri-o://735000af8ea8803844eeeb9d0bcb6289c6f265c40fceb6ca98cac137e86f706b" gracePeriod=2 Nov 23 07:09:37 crc kubenswrapper[4681]: I1123 07:09:37.360701 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kcg5r" Nov 23 07:09:37 crc kubenswrapper[4681]: I1123 07:09:37.530646 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfxlz\" (UniqueName: \"kubernetes.io/projected/65c98798-b5fb-4cdf-866d-a3d1ba150f22-kube-api-access-wfxlz\") pod \"65c98798-b5fb-4cdf-866d-a3d1ba150f22\" (UID: \"65c98798-b5fb-4cdf-866d-a3d1ba150f22\") " Nov 23 07:09:37 crc kubenswrapper[4681]: I1123 07:09:37.530706 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65c98798-b5fb-4cdf-866d-a3d1ba150f22-catalog-content\") pod \"65c98798-b5fb-4cdf-866d-a3d1ba150f22\" (UID: \"65c98798-b5fb-4cdf-866d-a3d1ba150f22\") " Nov 23 07:09:37 crc kubenswrapper[4681]: I1123 07:09:37.531813 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65c98798-b5fb-4cdf-866d-a3d1ba150f22-utilities\") pod \"65c98798-b5fb-4cdf-866d-a3d1ba150f22\" (UID: \"65c98798-b5fb-4cdf-866d-a3d1ba150f22\") " Nov 23 07:09:37 crc kubenswrapper[4681]: I1123 07:09:37.532321 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65c98798-b5fb-4cdf-866d-a3d1ba150f22-utilities" (OuterVolumeSpecName: "utilities") pod "65c98798-b5fb-4cdf-866d-a3d1ba150f22" (UID: "65c98798-b5fb-4cdf-866d-a3d1ba150f22"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:09:37 crc kubenswrapper[4681]: I1123 07:09:37.533616 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65c98798-b5fb-4cdf-866d-a3d1ba150f22-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:37 crc kubenswrapper[4681]: I1123 07:09:37.536892 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65c98798-b5fb-4cdf-866d-a3d1ba150f22-kube-api-access-wfxlz" (OuterVolumeSpecName: "kube-api-access-wfxlz") pod "65c98798-b5fb-4cdf-866d-a3d1ba150f22" (UID: "65c98798-b5fb-4cdf-866d-a3d1ba150f22"). InnerVolumeSpecName "kube-api-access-wfxlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:09:37 crc kubenswrapper[4681]: I1123 07:09:37.593081 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65c98798-b5fb-4cdf-866d-a3d1ba150f22-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65c98798-b5fb-4cdf-866d-a3d1ba150f22" (UID: "65c98798-b5fb-4cdf-866d-a3d1ba150f22"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:09:37 crc kubenswrapper[4681]: I1123 07:09:37.635876 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfxlz\" (UniqueName: \"kubernetes.io/projected/65c98798-b5fb-4cdf-866d-a3d1ba150f22-kube-api-access-wfxlz\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:37 crc kubenswrapper[4681]: I1123 07:09:37.635913 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65c98798-b5fb-4cdf-866d-a3d1ba150f22-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:37 crc kubenswrapper[4681]: I1123 07:09:37.955433 4681 generic.go:334] "Generic (PLEG): container finished" podID="65c98798-b5fb-4cdf-866d-a3d1ba150f22" containerID="735000af8ea8803844eeeb9d0bcb6289c6f265c40fceb6ca98cac137e86f706b" exitCode=0 Nov 23 07:09:37 crc kubenswrapper[4681]: I1123 07:09:37.955519 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kcg5r" Nov 23 07:09:37 crc kubenswrapper[4681]: I1123 07:09:37.955510 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kcg5r" event={"ID":"65c98798-b5fb-4cdf-866d-a3d1ba150f22","Type":"ContainerDied","Data":"735000af8ea8803844eeeb9d0bcb6289c6f265c40fceb6ca98cac137e86f706b"} Nov 23 07:09:37 crc kubenswrapper[4681]: I1123 07:09:37.956535 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kcg5r" event={"ID":"65c98798-b5fb-4cdf-866d-a3d1ba150f22","Type":"ContainerDied","Data":"74c280a5456dfac45fa151ae3d33b19be7bc2023cbd764150ee6ef2b6c050669"} Nov 23 07:09:37 crc kubenswrapper[4681]: I1123 07:09:37.956566 4681 scope.go:117] "RemoveContainer" containerID="735000af8ea8803844eeeb9d0bcb6289c6f265c40fceb6ca98cac137e86f706b" Nov 23 07:09:37 crc kubenswrapper[4681]: I1123 07:09:37.976878 4681 scope.go:117] "RemoveContainer" containerID="50d55aa44100a2bde3e30e1c64c99fadf900c531963f572c974f3ae5ccc53a51" Nov 23 07:09:37 crc kubenswrapper[4681]: I1123 07:09:37.986878 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kcg5r"] Nov 23 07:09:37 crc kubenswrapper[4681]: I1123 07:09:37.993998 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kcg5r"] Nov 23 07:09:38 crc kubenswrapper[4681]: I1123 07:09:38.013269 4681 scope.go:117] "RemoveContainer" containerID="292e4a0d6d4b4f3b8d1e56a741a66e954bd65f9fa8057eea493d3bff7ab4add6" Nov 23 07:09:38 crc kubenswrapper[4681]: I1123 07:09:38.044532 4681 scope.go:117] "RemoveContainer" containerID="735000af8ea8803844eeeb9d0bcb6289c6f265c40fceb6ca98cac137e86f706b" Nov 23 07:09:38 crc kubenswrapper[4681]: E1123 07:09:38.044868 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"735000af8ea8803844eeeb9d0bcb6289c6f265c40fceb6ca98cac137e86f706b\": container with ID starting with 735000af8ea8803844eeeb9d0bcb6289c6f265c40fceb6ca98cac137e86f706b not found: ID does not exist" containerID="735000af8ea8803844eeeb9d0bcb6289c6f265c40fceb6ca98cac137e86f706b" Nov 23 07:09:38 crc kubenswrapper[4681]: I1123 07:09:38.044970 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"735000af8ea8803844eeeb9d0bcb6289c6f265c40fceb6ca98cac137e86f706b"} err="failed to get container status \"735000af8ea8803844eeeb9d0bcb6289c6f265c40fceb6ca98cac137e86f706b\": rpc error: code = NotFound desc = could not find container \"735000af8ea8803844eeeb9d0bcb6289c6f265c40fceb6ca98cac137e86f706b\": container with ID starting with 735000af8ea8803844eeeb9d0bcb6289c6f265c40fceb6ca98cac137e86f706b not found: ID does not exist" Nov 23 07:09:38 crc kubenswrapper[4681]: I1123 07:09:38.045062 4681 scope.go:117] "RemoveContainer" containerID="50d55aa44100a2bde3e30e1c64c99fadf900c531963f572c974f3ae5ccc53a51" Nov 23 07:09:38 crc kubenswrapper[4681]: E1123 07:09:38.045598 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50d55aa44100a2bde3e30e1c64c99fadf900c531963f572c974f3ae5ccc53a51\": container with ID starting with 50d55aa44100a2bde3e30e1c64c99fadf900c531963f572c974f3ae5ccc53a51 not found: ID does not exist" containerID="50d55aa44100a2bde3e30e1c64c99fadf900c531963f572c974f3ae5ccc53a51" Nov 23 07:09:38 crc kubenswrapper[4681]: I1123 07:09:38.045633 4681 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50d55aa44100a2bde3e30e1c64c99fadf900c531963f572c974f3ae5ccc53a51"} err="failed to get container status \"50d55aa44100a2bde3e30e1c64c99fadf900c531963f572c974f3ae5ccc53a51\": rpc error: code = NotFound desc = could not find container \"50d55aa44100a2bde3e30e1c64c99fadf900c531963f572c974f3ae5ccc53a51\": container with ID starting with 50d55aa44100a2bde3e30e1c64c99fadf900c531963f572c974f3ae5ccc53a51 not found: ID does not exist" Nov 23 07:09:38 crc kubenswrapper[4681]: I1123 07:09:38.045655 4681 scope.go:117] "RemoveContainer" containerID="292e4a0d6d4b4f3b8d1e56a741a66e954bd65f9fa8057eea493d3bff7ab4add6" Nov 23 07:09:38 crc kubenswrapper[4681]: E1123 07:09:38.045989 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"292e4a0d6d4b4f3b8d1e56a741a66e954bd65f9fa8057eea493d3bff7ab4add6\": container with ID starting with 292e4a0d6d4b4f3b8d1e56a741a66e954bd65f9fa8057eea493d3bff7ab4add6 not found: ID does not exist" containerID="292e4a0d6d4b4f3b8d1e56a741a66e954bd65f9fa8057eea493d3bff7ab4add6" Nov 23 07:09:38 crc kubenswrapper[4681]: I1123 07:09:38.046119 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"292e4a0d6d4b4f3b8d1e56a741a66e954bd65f9fa8057eea493d3bff7ab4add6"} err="failed to get container status \"292e4a0d6d4b4f3b8d1e56a741a66e954bd65f9fa8057eea493d3bff7ab4add6\": rpc error: code = NotFound desc = could not find container \"292e4a0d6d4b4f3b8d1e56a741a66e954bd65f9fa8057eea493d3bff7ab4add6\": container with ID starting with 292e4a0d6d4b4f3b8d1e56a741a66e954bd65f9fa8057eea493d3bff7ab4add6 not found: ID does not exist" Nov 23 07:09:39 crc kubenswrapper[4681]: I1123 07:09:39.263390 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65c98798-b5fb-4cdf-866d-a3d1ba150f22" path="/var/lib/kubelet/pods/65c98798-b5fb-4cdf-866d-a3d1ba150f22/volumes" Nov 23 07:09:42 crc kubenswrapper[4681]: I1123 07:09:42.295569 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:09:42 crc kubenswrapper[4681]: I1123 07:09:42.295957 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:09:42 crc kubenswrapper[4681]: I1123 07:09:42.296025 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" Nov 23 07:09:42 crc kubenswrapper[4681]: I1123 07:09:42.296731 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a5380963080b6fe6bf2216624264d97b2ea5554bfe17e9b170d2c2b9f9ced66c"} pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 07:09:42 crc kubenswrapper[4681]: I1123 07:09:42.296796 4681 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" containerID="cri-o://a5380963080b6fe6bf2216624264d97b2ea5554bfe17e9b170d2c2b9f9ced66c" gracePeriod=600 Nov 23 07:09:42 crc kubenswrapper[4681]: E1123 07:09:42.414804 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:09:43 crc kubenswrapper[4681]: I1123 07:09:43.009298 4681 generic.go:334] "Generic (PLEG): container finished" podID="539dc58c-e752-43c8-bdef-af87528b76f3" containerID="a5380963080b6fe6bf2216624264d97b2ea5554bfe17e9b170d2c2b9f9ced66c" exitCode=0 Nov 23 07:09:43 crc kubenswrapper[4681]: I1123 07:09:43.009739 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerDied","Data":"a5380963080b6fe6bf2216624264d97b2ea5554bfe17e9b170d2c2b9f9ced66c"} Nov 23 07:09:43 crc kubenswrapper[4681]: I1123 07:09:43.009790 4681 scope.go:117] "RemoveContainer" containerID="411d710baa479cd25651d571408d129f643d8f5da14108264248611d2aa6b0dc" Nov 23 07:09:43 crc kubenswrapper[4681]: I1123 07:09:43.010878 4681 scope.go:117] "RemoveContainer" containerID="a5380963080b6fe6bf2216624264d97b2ea5554bfe17e9b170d2c2b9f9ced66c" Nov 23 07:09:43 crc kubenswrapper[4681]: E1123 07:09:43.011324 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:09:43 crc kubenswrapper[4681]: I1123 07:09:43.036358 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-fbbdq"] Nov 23 07:09:43 crc kubenswrapper[4681]: I1123 07:09:43.043757 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-fbbdq"] Nov 23 07:09:43 crc kubenswrapper[4681]: I1123 07:09:43.262368 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00916d9f-8ce3-47d9-a32f-e2deb3514ede" path="/var/lib/kubelet/pods/00916d9f-8ce3-47d9-a32f-e2deb3514ede/volumes" Nov 23 07:09:45 crc kubenswrapper[4681]: I1123 07:09:45.225408 4681 scope.go:117] "RemoveContainer" containerID="fbfbecec9249e290de376cecaf8ce397d63bedb12a63815fac8bc51df3bfbd1f" Nov 23 07:09:45 crc kubenswrapper[4681]: I1123 07:09:45.258483 4681 scope.go:117] "RemoveContainer" containerID="14bbe87d6009d7ad711ca056bfca4be6c099751bca74e516b1ae4373f0158ce1" Nov 23 07:09:45 crc kubenswrapper[4681]: I1123 07:09:45.282022 4681 scope.go:117] "RemoveContainer" containerID="700143e51614cc36002f36973047a5a76880f5c51daf61a8817aff73f0aaa8b6" Nov 23 07:09:45 crc kubenswrapper[4681]: I1123 07:09:45.310990 4681 scope.go:117] "RemoveContainer" containerID="8ce7b2fe24a9d0caf784ebd0d3fb31784d0f1791fd7508361e3ae865f66ef071" Nov 23 07:09:45 crc kubenswrapper[4681]: I1123 07:09:45.355555 4681 scope.go:117] 
"RemoveContainer" containerID="3295b4bd261ee97327198f543bfa8e15d7d22cf8363d391dcfb4f63e8553275a" Nov 23 07:09:48 crc kubenswrapper[4681]: I1123 07:09:48.029285 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-frn6w"] Nov 23 07:09:48 crc kubenswrapper[4681]: I1123 07:09:48.037034 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-frn6w"] Nov 23 07:09:49 crc kubenswrapper[4681]: I1123 07:09:49.262121 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95e9b025-0fa7-4a41-a18c-e4f078b82c43" path="/var/lib/kubelet/pods/95e9b025-0fa7-4a41-a18c-e4f078b82c43/volumes" Nov 23 07:09:56 crc kubenswrapper[4681]: I1123 07:09:56.024336 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-4gs5w"] Nov 23 07:09:56 crc kubenswrapper[4681]: I1123 07:09:56.030891 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-4gs5w"] Nov 23 07:09:56 crc kubenswrapper[4681]: I1123 07:09:56.252078 4681 scope.go:117] "RemoveContainer" containerID="a5380963080b6fe6bf2216624264d97b2ea5554bfe17e9b170d2c2b9f9ced66c" Nov 23 07:09:56 crc kubenswrapper[4681]: E1123 07:09:56.252339 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:09:57 crc kubenswrapper[4681]: I1123 07:09:57.263904 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d426ed81-18f9-441e-9865-b9a6d683931f" path="/var/lib/kubelet/pods/d426ed81-18f9-441e-9865-b9a6d683931f/volumes" Nov 23 07:10:09 crc kubenswrapper[4681]: I1123 07:10:09.252385 4681 scope.go:117] "RemoveContainer" containerID="a5380963080b6fe6bf2216624264d97b2ea5554bfe17e9b170d2c2b9f9ced66c" Nov 23 07:10:09 crc kubenswrapper[4681]: E1123 07:10:09.253448 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:10:24 crc kubenswrapper[4681]: I1123 07:10:24.251768 4681 scope.go:117] "RemoveContainer" containerID="a5380963080b6fe6bf2216624264d97b2ea5554bfe17e9b170d2c2b9f9ced66c" Nov 23 07:10:24 crc kubenswrapper[4681]: E1123 07:10:24.252389 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:10:27 crc kubenswrapper[4681]: I1123 07:10:27.326282 4681 generic.go:334] "Generic (PLEG): container finished" podID="72bc477d-1846-4f12-94e3-3aea316bbf98" containerID="3bf55c3c044f7526462eec920daaa5d12577cf4df1503c789f903292098d1cf0" exitCode=0 Nov 23 07:10:27 crc kubenswrapper[4681]: I1123 
07:10:27.326298 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qg4c" event={"ID":"72bc477d-1846-4f12-94e3-3aea316bbf98","Type":"ContainerDied","Data":"3bf55c3c044f7526462eec920daaa5d12577cf4df1503c789f903292098d1cf0"} Nov 23 07:10:28 crc kubenswrapper[4681]: I1123 07:10:28.658042 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qg4c" Nov 23 07:10:28 crc kubenswrapper[4681]: I1123 07:10:28.819484 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72bc477d-1846-4f12-94e3-3aea316bbf98-inventory\") pod \"72bc477d-1846-4f12-94e3-3aea316bbf98\" (UID: \"72bc477d-1846-4f12-94e3-3aea316bbf98\") " Nov 23 07:10:28 crc kubenswrapper[4681]: I1123 07:10:28.819549 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m6z4\" (UniqueName: \"kubernetes.io/projected/72bc477d-1846-4f12-94e3-3aea316bbf98-kube-api-access-6m6z4\") pod \"72bc477d-1846-4f12-94e3-3aea316bbf98\" (UID: \"72bc477d-1846-4f12-94e3-3aea316bbf98\") " Nov 23 07:10:28 crc kubenswrapper[4681]: I1123 07:10:28.819570 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/72bc477d-1846-4f12-94e3-3aea316bbf98-ssh-key\") pod \"72bc477d-1846-4f12-94e3-3aea316bbf98\" (UID: \"72bc477d-1846-4f12-94e3-3aea316bbf98\") " Nov 23 07:10:28 crc kubenswrapper[4681]: I1123 07:10:28.826022 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72bc477d-1846-4f12-94e3-3aea316bbf98-kube-api-access-6m6z4" (OuterVolumeSpecName: "kube-api-access-6m6z4") pod "72bc477d-1846-4f12-94e3-3aea316bbf98" (UID: "72bc477d-1846-4f12-94e3-3aea316bbf98"). InnerVolumeSpecName "kube-api-access-6m6z4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:28 crc kubenswrapper[4681]: I1123 07:10:28.844654 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72bc477d-1846-4f12-94e3-3aea316bbf98-inventory" (OuterVolumeSpecName: "inventory") pod "72bc477d-1846-4f12-94e3-3aea316bbf98" (UID: "72bc477d-1846-4f12-94e3-3aea316bbf98"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:28 crc kubenswrapper[4681]: I1123 07:10:28.845237 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72bc477d-1846-4f12-94e3-3aea316bbf98-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "72bc477d-1846-4f12-94e3-3aea316bbf98" (UID: "72bc477d-1846-4f12-94e3-3aea316bbf98"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:28 crc kubenswrapper[4681]: I1123 07:10:28.923452 4681 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72bc477d-1846-4f12-94e3-3aea316bbf98-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:28 crc kubenswrapper[4681]: I1123 07:10:28.923503 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6m6z4\" (UniqueName: \"kubernetes.io/projected/72bc477d-1846-4f12-94e3-3aea316bbf98-kube-api-access-6m6z4\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:28 crc kubenswrapper[4681]: I1123 07:10:28.923516 4681 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/72bc477d-1846-4f12-94e3-3aea316bbf98-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:29 crc kubenswrapper[4681]: I1123 07:10:29.344912 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qg4c" event={"ID":"72bc477d-1846-4f12-94e3-3aea316bbf98","Type":"ContainerDied","Data":"97baa85a356a437b926aaed37f9fd8eb191f7d96443478e0e5458d900aad4518"} Nov 23 07:10:29 crc kubenswrapper[4681]: I1123 07:10:29.344949 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qg4c" Nov 23 07:10:29 crc kubenswrapper[4681]: I1123 07:10:29.344952 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97baa85a356a437b926aaed37f9fd8eb191f7d96443478e0e5458d900aad4518" Nov 23 07:10:29 crc kubenswrapper[4681]: I1123 07:10:29.420093 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfhrh"] Nov 23 07:10:29 crc kubenswrapper[4681]: E1123 07:10:29.420514 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65c98798-b5fb-4cdf-866d-a3d1ba150f22" containerName="extract-content" Nov 23 07:10:29 crc kubenswrapper[4681]: I1123 07:10:29.420534 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="65c98798-b5fb-4cdf-866d-a3d1ba150f22" containerName="extract-content" Nov 23 07:10:29 crc kubenswrapper[4681]: E1123 07:10:29.420555 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65c98798-b5fb-4cdf-866d-a3d1ba150f22" containerName="extract-utilities" Nov 23 07:10:29 crc kubenswrapper[4681]: I1123 07:10:29.420561 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="65c98798-b5fb-4cdf-866d-a3d1ba150f22" containerName="extract-utilities" Nov 23 07:10:29 crc kubenswrapper[4681]: E1123 07:10:29.420570 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65c98798-b5fb-4cdf-866d-a3d1ba150f22" containerName="registry-server" Nov 23 07:10:29 crc kubenswrapper[4681]: I1123 07:10:29.420575 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="65c98798-b5fb-4cdf-866d-a3d1ba150f22" containerName="registry-server" Nov 23 07:10:29 crc kubenswrapper[4681]: E1123 07:10:29.420600 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72bc477d-1846-4f12-94e3-3aea316bbf98" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 23 07:10:29 crc kubenswrapper[4681]: I1123 07:10:29.420607 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="72bc477d-1846-4f12-94e3-3aea316bbf98" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 23 07:10:29 crc kubenswrapper[4681]: I1123 07:10:29.420806 4681 
memory_manager.go:354] "RemoveStaleState removing state" podUID="72bc477d-1846-4f12-94e3-3aea316bbf98" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 23 07:10:29 crc kubenswrapper[4681]: I1123 07:10:29.420826 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="65c98798-b5fb-4cdf-866d-a3d1ba150f22" containerName="registry-server" Nov 23 07:10:29 crc kubenswrapper[4681]: I1123 07:10:29.421483 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfhrh" Nov 23 07:10:29 crc kubenswrapper[4681]: I1123 07:10:29.428799 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 07:10:29 crc kubenswrapper[4681]: I1123 07:10:29.428805 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rchgk" Nov 23 07:10:29 crc kubenswrapper[4681]: I1123 07:10:29.431022 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 07:10:29 crc kubenswrapper[4681]: I1123 07:10:29.439838 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 07:10:29 crc kubenswrapper[4681]: I1123 07:10:29.448169 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfhrh"] Nov 23 07:10:29 crc kubenswrapper[4681]: I1123 07:10:29.535849 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6b100186-38e1-43eb-98f1-960b92f8a564-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cfhrh\" (UID: \"6b100186-38e1-43eb-98f1-960b92f8a564\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfhrh" Nov 23 07:10:29 crc kubenswrapper[4681]: I1123 07:10:29.535960 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj2gr\" (UniqueName: \"kubernetes.io/projected/6b100186-38e1-43eb-98f1-960b92f8a564-kube-api-access-jj2gr\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cfhrh\" (UID: \"6b100186-38e1-43eb-98f1-960b92f8a564\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfhrh" Nov 23 07:10:29 crc kubenswrapper[4681]: I1123 07:10:29.536075 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b100186-38e1-43eb-98f1-960b92f8a564-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cfhrh\" (UID: \"6b100186-38e1-43eb-98f1-960b92f8a564\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfhrh" Nov 23 07:10:29 crc kubenswrapper[4681]: I1123 07:10:29.637266 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj2gr\" (UniqueName: \"kubernetes.io/projected/6b100186-38e1-43eb-98f1-960b92f8a564-kube-api-access-jj2gr\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cfhrh\" (UID: \"6b100186-38e1-43eb-98f1-960b92f8a564\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfhrh" Nov 23 07:10:29 crc kubenswrapper[4681]: I1123 07:10:29.637336 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/6b100186-38e1-43eb-98f1-960b92f8a564-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cfhrh\" (UID: \"6b100186-38e1-43eb-98f1-960b92f8a564\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfhrh" Nov 23 07:10:29 crc kubenswrapper[4681]: I1123 07:10:29.637501 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6b100186-38e1-43eb-98f1-960b92f8a564-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cfhrh\" (UID: \"6b100186-38e1-43eb-98f1-960b92f8a564\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfhrh" Nov 23 07:10:29 crc kubenswrapper[4681]: I1123 07:10:29.644429 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b100186-38e1-43eb-98f1-960b92f8a564-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cfhrh\" (UID: \"6b100186-38e1-43eb-98f1-960b92f8a564\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfhrh" Nov 23 07:10:29 crc kubenswrapper[4681]: I1123 07:10:29.644739 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6b100186-38e1-43eb-98f1-960b92f8a564-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cfhrh\" (UID: \"6b100186-38e1-43eb-98f1-960b92f8a564\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfhrh" Nov 23 07:10:29 crc kubenswrapper[4681]: I1123 07:10:29.652204 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj2gr\" (UniqueName: \"kubernetes.io/projected/6b100186-38e1-43eb-98f1-960b92f8a564-kube-api-access-jj2gr\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cfhrh\" (UID: \"6b100186-38e1-43eb-98f1-960b92f8a564\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfhrh" Nov 23 07:10:29 crc kubenswrapper[4681]: I1123 07:10:29.741349 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfhrh" Nov 23 07:10:30 crc kubenswrapper[4681]: I1123 07:10:30.209967 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfhrh"] Nov 23 07:10:30 crc kubenswrapper[4681]: I1123 07:10:30.354575 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfhrh" event={"ID":"6b100186-38e1-43eb-98f1-960b92f8a564","Type":"ContainerStarted","Data":"d820204c91ab12a610226ada15ee4d3b44ccdbacb78a34d0aec8c69234710502"} Nov 23 07:10:31 crc kubenswrapper[4681]: I1123 07:10:31.362579 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfhrh" event={"ID":"6b100186-38e1-43eb-98f1-960b92f8a564","Type":"ContainerStarted","Data":"4d4388fd7645226a410b1b7b8cd9d56f413c017b2ad126eaf374a777c5fa8d0b"} Nov 23 07:10:31 crc kubenswrapper[4681]: I1123 07:10:31.378904 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfhrh" podStartSLOduration=1.86906178 podStartE2EDuration="2.378888033s" podCreationTimestamp="2025-11-23 07:10:29 +0000 UTC" firstStartedPulling="2025-11-23 07:10:30.222762133 +0000 UTC m=+1567.292271371" lastFinishedPulling="2025-11-23 07:10:30.732588387 +0000 UTC m=+1567.802097624" observedRunningTime="2025-11-23 07:10:31.373322813 +0000 UTC m=+1568.442832050" watchObservedRunningTime="2025-11-23 07:10:31.378888033 +0000 UTC m=+1568.448397270" Nov 23 07:10:35 crc kubenswrapper[4681]: I1123 07:10:35.391867 4681 generic.go:334] "Generic (PLEG): container finished" podID="6b100186-38e1-43eb-98f1-960b92f8a564" containerID="4d4388fd7645226a410b1b7b8cd9d56f413c017b2ad126eaf374a777c5fa8d0b" exitCode=0 Nov 23 07:10:35 crc kubenswrapper[4681]: I1123 07:10:35.391918 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfhrh" event={"ID":"6b100186-38e1-43eb-98f1-960b92f8a564","Type":"ContainerDied","Data":"4d4388fd7645226a410b1b7b8cd9d56f413c017b2ad126eaf374a777c5fa8d0b"} Nov 23 07:10:36 crc kubenswrapper[4681]: I1123 07:10:36.252650 4681 scope.go:117] "RemoveContainer" containerID="a5380963080b6fe6bf2216624264d97b2ea5554bfe17e9b170d2c2b9f9ced66c" Nov 23 07:10:36 crc kubenswrapper[4681]: E1123 07:10:36.252911 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:10:36 crc kubenswrapper[4681]: I1123 07:10:36.687823 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfhrh" Nov 23 07:10:36 crc kubenswrapper[4681]: I1123 07:10:36.875894 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6b100186-38e1-43eb-98f1-960b92f8a564-ssh-key\") pod \"6b100186-38e1-43eb-98f1-960b92f8a564\" (UID: \"6b100186-38e1-43eb-98f1-960b92f8a564\") " Nov 23 07:10:36 crc kubenswrapper[4681]: I1123 07:10:36.876048 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj2gr\" (UniqueName: \"kubernetes.io/projected/6b100186-38e1-43eb-98f1-960b92f8a564-kube-api-access-jj2gr\") pod \"6b100186-38e1-43eb-98f1-960b92f8a564\" (UID: \"6b100186-38e1-43eb-98f1-960b92f8a564\") " Nov 23 07:10:36 crc kubenswrapper[4681]: I1123 07:10:36.876195 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b100186-38e1-43eb-98f1-960b92f8a564-inventory\") pod \"6b100186-38e1-43eb-98f1-960b92f8a564\" (UID: \"6b100186-38e1-43eb-98f1-960b92f8a564\") " Nov 23 07:10:36 crc kubenswrapper[4681]: I1123 07:10:36.881572 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b100186-38e1-43eb-98f1-960b92f8a564-kube-api-access-jj2gr" (OuterVolumeSpecName: "kube-api-access-jj2gr") pod "6b100186-38e1-43eb-98f1-960b92f8a564" (UID: "6b100186-38e1-43eb-98f1-960b92f8a564"). InnerVolumeSpecName "kube-api-access-jj2gr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:36 crc kubenswrapper[4681]: I1123 07:10:36.899686 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b100186-38e1-43eb-98f1-960b92f8a564-inventory" (OuterVolumeSpecName: "inventory") pod "6b100186-38e1-43eb-98f1-960b92f8a564" (UID: "6b100186-38e1-43eb-98f1-960b92f8a564"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:36 crc kubenswrapper[4681]: I1123 07:10:36.900981 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b100186-38e1-43eb-98f1-960b92f8a564-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6b100186-38e1-43eb-98f1-960b92f8a564" (UID: "6b100186-38e1-43eb-98f1-960b92f8a564"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:36 crc kubenswrapper[4681]: I1123 07:10:36.978574 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj2gr\" (UniqueName: \"kubernetes.io/projected/6b100186-38e1-43eb-98f1-960b92f8a564-kube-api-access-jj2gr\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:36 crc kubenswrapper[4681]: I1123 07:10:36.978603 4681 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b100186-38e1-43eb-98f1-960b92f8a564-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:36 crc kubenswrapper[4681]: I1123 07:10:36.978612 4681 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6b100186-38e1-43eb-98f1-960b92f8a564-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:37 crc kubenswrapper[4681]: I1123 07:10:37.406212 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfhrh" event={"ID":"6b100186-38e1-43eb-98f1-960b92f8a564","Type":"ContainerDied","Data":"d820204c91ab12a610226ada15ee4d3b44ccdbacb78a34d0aec8c69234710502"} Nov 23 07:10:37 crc kubenswrapper[4681]: I1123 07:10:37.406250 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfhrh" Nov 23 07:10:37 crc kubenswrapper[4681]: I1123 07:10:37.406251 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d820204c91ab12a610226ada15ee4d3b44ccdbacb78a34d0aec8c69234710502" Nov 23 07:10:37 crc kubenswrapper[4681]: I1123 07:10:37.473341 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-2p7t8"] Nov 23 07:10:37 crc kubenswrapper[4681]: E1123 07:10:37.473811 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b100186-38e1-43eb-98f1-960b92f8a564" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 23 07:10:37 crc kubenswrapper[4681]: I1123 07:10:37.473834 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b100186-38e1-43eb-98f1-960b92f8a564" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 23 07:10:37 crc kubenswrapper[4681]: I1123 07:10:37.474031 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b100186-38e1-43eb-98f1-960b92f8a564" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 23 07:10:37 crc kubenswrapper[4681]: I1123 07:10:37.475524 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2p7t8" Nov 23 07:10:37 crc kubenswrapper[4681]: I1123 07:10:37.478390 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 07:10:37 crc kubenswrapper[4681]: I1123 07:10:37.478568 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rchgk" Nov 23 07:10:37 crc kubenswrapper[4681]: I1123 07:10:37.478629 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 07:10:37 crc kubenswrapper[4681]: I1123 07:10:37.480284 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-2p7t8"] Nov 23 07:10:37 crc kubenswrapper[4681]: I1123 07:10:37.483154 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 07:10:37 crc kubenswrapper[4681]: I1123 07:10:37.588431 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2vbb\" (UniqueName: \"kubernetes.io/projected/00739cfe-415b-4bf0-8648-c93631b42f67-kube-api-access-k2vbb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2p7t8\" (UID: \"00739cfe-415b-4bf0-8648-c93631b42f67\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2p7t8" Nov 23 07:10:37 crc kubenswrapper[4681]: I1123 07:10:37.588625 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00739cfe-415b-4bf0-8648-c93631b42f67-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2p7t8\" (UID: \"00739cfe-415b-4bf0-8648-c93631b42f67\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2p7t8" Nov 23 07:10:37 crc kubenswrapper[4681]: I1123 07:10:37.588834 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/00739cfe-415b-4bf0-8648-c93631b42f67-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2p7t8\" (UID: \"00739cfe-415b-4bf0-8648-c93631b42f67\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2p7t8" Nov 23 07:10:37 crc kubenswrapper[4681]: I1123 07:10:37.690954 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2vbb\" (UniqueName: \"kubernetes.io/projected/00739cfe-415b-4bf0-8648-c93631b42f67-kube-api-access-k2vbb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2p7t8\" (UID: \"00739cfe-415b-4bf0-8648-c93631b42f67\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2p7t8" Nov 23 07:10:37 crc kubenswrapper[4681]: I1123 07:10:37.691017 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00739cfe-415b-4bf0-8648-c93631b42f67-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2p7t8\" (UID: \"00739cfe-415b-4bf0-8648-c93631b42f67\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2p7t8" Nov 23 07:10:37 crc kubenswrapper[4681]: I1123 07:10:37.691087 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/00739cfe-415b-4bf0-8648-c93631b42f67-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2p7t8\" (UID: 
\"00739cfe-415b-4bf0-8648-c93631b42f67\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2p7t8" Nov 23 07:10:37 crc kubenswrapper[4681]: I1123 07:10:37.695275 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00739cfe-415b-4bf0-8648-c93631b42f67-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2p7t8\" (UID: \"00739cfe-415b-4bf0-8648-c93631b42f67\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2p7t8" Nov 23 07:10:37 crc kubenswrapper[4681]: I1123 07:10:37.695789 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/00739cfe-415b-4bf0-8648-c93631b42f67-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2p7t8\" (UID: \"00739cfe-415b-4bf0-8648-c93631b42f67\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2p7t8" Nov 23 07:10:37 crc kubenswrapper[4681]: I1123 07:10:37.705122 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2vbb\" (UniqueName: \"kubernetes.io/projected/00739cfe-415b-4bf0-8648-c93631b42f67-kube-api-access-k2vbb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2p7t8\" (UID: \"00739cfe-415b-4bf0-8648-c93631b42f67\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2p7t8" Nov 23 07:10:37 crc kubenswrapper[4681]: I1123 07:10:37.790359 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2p7t8" Nov 23 07:10:38 crc kubenswrapper[4681]: I1123 07:10:38.251725 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-2p7t8"] Nov 23 07:10:38 crc kubenswrapper[4681]: I1123 07:10:38.412551 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2p7t8" event={"ID":"00739cfe-415b-4bf0-8648-c93631b42f67","Type":"ContainerStarted","Data":"ea698ae95fe4eb29a0b156fec34f362bf937559aa0ef4089379a0640843dbffe"} Nov 23 07:10:39 crc kubenswrapper[4681]: I1123 07:10:39.420373 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2p7t8" event={"ID":"00739cfe-415b-4bf0-8648-c93631b42f67","Type":"ContainerStarted","Data":"8690d5dfb87b412fd30f60530926d25a9c1c4044666281a9a412b9b9711a3c52"} Nov 23 07:10:45 crc kubenswrapper[4681]: I1123 07:10:45.486112 4681 scope.go:117] "RemoveContainer" containerID="69c8d2488fbc645452db6c62e7d3c880fa1d9652016d1c23f5b202c3444a34bd" Nov 23 07:10:45 crc kubenswrapper[4681]: I1123 07:10:45.514675 4681 scope.go:117] "RemoveContainer" containerID="d9b70e48c34aa62c0c87f47450bc2d43c1752010c23fc9f615afbd1eaf7f6873" Nov 23 07:10:51 crc kubenswrapper[4681]: I1123 07:10:51.251549 4681 scope.go:117] "RemoveContainer" containerID="a5380963080b6fe6bf2216624264d97b2ea5554bfe17e9b170d2c2b9f9ced66c" Nov 23 07:10:51 crc kubenswrapper[4681]: E1123 07:10:51.252376 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:10:57 crc kubenswrapper[4681]: I1123 07:10:57.041850 
4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2p7t8" podStartSLOduration=19.461038562 podStartE2EDuration="20.041828427s" podCreationTimestamp="2025-11-23 07:10:37 +0000 UTC" firstStartedPulling="2025-11-23 07:10:38.254412438 +0000 UTC m=+1575.323921675" lastFinishedPulling="2025-11-23 07:10:38.835202302 +0000 UTC m=+1575.904711540" observedRunningTime="2025-11-23 07:10:39.438033313 +0000 UTC m=+1576.507542550" watchObservedRunningTime="2025-11-23 07:10:57.041828427 +0000 UTC m=+1594.111337664" Nov 23 07:10:57 crc kubenswrapper[4681]: I1123 07:10:57.043339 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-679e-account-create-dr6pd"] Nov 23 07:10:57 crc kubenswrapper[4681]: I1123 07:10:57.050389 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-7hksl"] Nov 23 07:10:57 crc kubenswrapper[4681]: I1123 07:10:57.056025 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-rnsgf"] Nov 23 07:10:57 crc kubenswrapper[4681]: I1123 07:10:57.060907 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-7hksl"] Nov 23 07:10:57 crc kubenswrapper[4681]: I1123 07:10:57.066013 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-a6d6-account-create-x9xqf"] Nov 23 07:10:57 crc kubenswrapper[4681]: I1123 07:10:57.070743 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-679e-account-create-dr6pd"] Nov 23 07:10:57 crc kubenswrapper[4681]: I1123 07:10:57.076396 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-rnsgf"] Nov 23 07:10:57 crc kubenswrapper[4681]: I1123 07:10:57.081046 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-2lvqj"] Nov 23 07:10:57 crc kubenswrapper[4681]: I1123 07:10:57.085598 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-11a1-account-create-9jrm5"] Nov 23 07:10:57 crc kubenswrapper[4681]: I1123 07:10:57.114821 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-11a1-account-create-9jrm5"] Nov 23 07:10:57 crc kubenswrapper[4681]: I1123 07:10:57.127223 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-a6d6-account-create-x9xqf"] Nov 23 07:10:57 crc kubenswrapper[4681]: I1123 07:10:57.134785 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-2lvqj"] Nov 23 07:10:57 crc kubenswrapper[4681]: I1123 07:10:57.264342 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3811c7f9-b5f1-4f7c-a839-4c01f37baaf2" path="/var/lib/kubelet/pods/3811c7f9-b5f1-4f7c-a839-4c01f37baaf2/volumes" Nov 23 07:10:57 crc kubenswrapper[4681]: I1123 07:10:57.265677 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="585ba06a-f87a-4133-a144-72545525b9a7" path="/var/lib/kubelet/pods/585ba06a-f87a-4133-a144-72545525b9a7/volumes" Nov 23 07:10:57 crc kubenswrapper[4681]: I1123 07:10:57.267010 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a1408c9-082d-4560-b82d-4d6b1124d6a5" path="/var/lib/kubelet/pods/7a1408c9-082d-4560-b82d-4d6b1124d6a5/volumes" Nov 23 07:10:57 crc kubenswrapper[4681]: I1123 07:10:57.267686 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af391774-4ff4-48c7-a0ec-e11a85d772d5" 
path="/var/lib/kubelet/pods/af391774-4ff4-48c7-a0ec-e11a85d772d5/volumes" Nov 23 07:10:57 crc kubenswrapper[4681]: I1123 07:10:57.269214 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c71fa3e0-58d0-4f10-8bd9-53048c7dbe4a" path="/var/lib/kubelet/pods/c71fa3e0-58d0-4f10-8bd9-53048c7dbe4a/volumes" Nov 23 07:10:57 crc kubenswrapper[4681]: I1123 07:10:57.270205 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3318c1e-062e-4748-b6e3-8db9ef610c97" path="/var/lib/kubelet/pods/e3318c1e-062e-4748-b6e3-8db9ef610c97/volumes" Nov 23 07:11:04 crc kubenswrapper[4681]: I1123 07:11:04.252398 4681 scope.go:117] "RemoveContainer" containerID="a5380963080b6fe6bf2216624264d97b2ea5554bfe17e9b170d2c2b9f9ced66c" Nov 23 07:11:04 crc kubenswrapper[4681]: E1123 07:11:04.253295 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:11:08 crc kubenswrapper[4681]: I1123 07:11:08.647868 4681 generic.go:334] "Generic (PLEG): container finished" podID="00739cfe-415b-4bf0-8648-c93631b42f67" containerID="8690d5dfb87b412fd30f60530926d25a9c1c4044666281a9a412b9b9711a3c52" exitCode=0 Nov 23 07:11:08 crc kubenswrapper[4681]: I1123 07:11:08.647980 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2p7t8" event={"ID":"00739cfe-415b-4bf0-8648-c93631b42f67","Type":"ContainerDied","Data":"8690d5dfb87b412fd30f60530926d25a9c1c4044666281a9a412b9b9711a3c52"} Nov 23 07:11:09 crc kubenswrapper[4681]: I1123 07:11:09.992967 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2p7t8" Nov 23 07:11:10 crc kubenswrapper[4681]: I1123 07:11:10.117703 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/00739cfe-415b-4bf0-8648-c93631b42f67-ssh-key\") pod \"00739cfe-415b-4bf0-8648-c93631b42f67\" (UID: \"00739cfe-415b-4bf0-8648-c93631b42f67\") " Nov 23 07:11:10 crc kubenswrapper[4681]: I1123 07:11:10.117776 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00739cfe-415b-4bf0-8648-c93631b42f67-inventory\") pod \"00739cfe-415b-4bf0-8648-c93631b42f67\" (UID: \"00739cfe-415b-4bf0-8648-c93631b42f67\") " Nov 23 07:11:10 crc kubenswrapper[4681]: I1123 07:11:10.118215 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2vbb\" (UniqueName: \"kubernetes.io/projected/00739cfe-415b-4bf0-8648-c93631b42f67-kube-api-access-k2vbb\") pod \"00739cfe-415b-4bf0-8648-c93631b42f67\" (UID: \"00739cfe-415b-4bf0-8648-c93631b42f67\") " Nov 23 07:11:10 crc kubenswrapper[4681]: I1123 07:11:10.132819 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00739cfe-415b-4bf0-8648-c93631b42f67-kube-api-access-k2vbb" (OuterVolumeSpecName: "kube-api-access-k2vbb") pod "00739cfe-415b-4bf0-8648-c93631b42f67" (UID: "00739cfe-415b-4bf0-8648-c93631b42f67"). InnerVolumeSpecName "kube-api-access-k2vbb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:11:10 crc kubenswrapper[4681]: I1123 07:11:10.141696 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00739cfe-415b-4bf0-8648-c93631b42f67-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "00739cfe-415b-4bf0-8648-c93631b42f67" (UID: "00739cfe-415b-4bf0-8648-c93631b42f67"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:11:10 crc kubenswrapper[4681]: I1123 07:11:10.150725 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00739cfe-415b-4bf0-8648-c93631b42f67-inventory" (OuterVolumeSpecName: "inventory") pod "00739cfe-415b-4bf0-8648-c93631b42f67" (UID: "00739cfe-415b-4bf0-8648-c93631b42f67"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:11:10 crc kubenswrapper[4681]: I1123 07:11:10.221767 4681 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/00739cfe-415b-4bf0-8648-c93631b42f67-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:10 crc kubenswrapper[4681]: I1123 07:11:10.221803 4681 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00739cfe-415b-4bf0-8648-c93631b42f67-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:10 crc kubenswrapper[4681]: I1123 07:11:10.221815 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2vbb\" (UniqueName: \"kubernetes.io/projected/00739cfe-415b-4bf0-8648-c93631b42f67-kube-api-access-k2vbb\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:10 crc kubenswrapper[4681]: I1123 07:11:10.670192 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2p7t8" event={"ID":"00739cfe-415b-4bf0-8648-c93631b42f67","Type":"ContainerDied","Data":"ea698ae95fe4eb29a0b156fec34f362bf937559aa0ef4089379a0640843dbffe"} Nov 23 07:11:10 crc kubenswrapper[4681]: I1123 07:11:10.670229 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2p7t8" Nov 23 07:11:10 crc kubenswrapper[4681]: I1123 07:11:10.670238 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea698ae95fe4eb29a0b156fec34f362bf937559aa0ef4089379a0640843dbffe" Nov 23 07:11:10 crc kubenswrapper[4681]: I1123 07:11:10.761619 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrvj8"] Nov 23 07:11:10 crc kubenswrapper[4681]: E1123 07:11:10.762130 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00739cfe-415b-4bf0-8648-c93631b42f67" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 23 07:11:10 crc kubenswrapper[4681]: I1123 07:11:10.762151 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="00739cfe-415b-4bf0-8648-c93631b42f67" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 23 07:11:10 crc kubenswrapper[4681]: I1123 07:11:10.762434 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="00739cfe-415b-4bf0-8648-c93631b42f67" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 23 07:11:10 crc kubenswrapper[4681]: I1123 07:11:10.763298 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrvj8" Nov 23 07:11:10 crc kubenswrapper[4681]: I1123 07:11:10.767349 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 07:11:10 crc kubenswrapper[4681]: I1123 07:11:10.767778 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rchgk" Nov 23 07:11:10 crc kubenswrapper[4681]: I1123 07:11:10.768112 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 07:11:10 crc kubenswrapper[4681]: I1123 07:11:10.768306 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 07:11:10 crc kubenswrapper[4681]: I1123 07:11:10.775302 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrvj8"] Nov 23 07:11:10 crc kubenswrapper[4681]: I1123 07:11:10.937575 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vrvj8\" (UID: \"ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrvj8" Nov 23 07:11:10 crc kubenswrapper[4681]: I1123 07:11:10.937902 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2xjj\" (UniqueName: \"kubernetes.io/projected/ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03-kube-api-access-c2xjj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vrvj8\" (UID: \"ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrvj8" Nov 23 07:11:10 crc kubenswrapper[4681]: I1123 07:11:10.938352 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vrvj8\" (UID: \"ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrvj8" Nov 23 07:11:11 crc kubenswrapper[4681]: I1123 07:11:11.040779 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2xjj\" (UniqueName: \"kubernetes.io/projected/ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03-kube-api-access-c2xjj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vrvj8\" (UID: \"ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrvj8" Nov 23 07:11:11 crc kubenswrapper[4681]: I1123 07:11:11.040901 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vrvj8\" (UID: \"ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrvj8" Nov 23 07:11:11 crc kubenswrapper[4681]: I1123 07:11:11.041108 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vrvj8\" 
(UID: \"ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrvj8" Nov 23 07:11:11 crc kubenswrapper[4681]: I1123 07:11:11.046547 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vrvj8\" (UID: \"ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrvj8" Nov 23 07:11:11 crc kubenswrapper[4681]: I1123 07:11:11.052517 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vrvj8\" (UID: \"ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrvj8" Nov 23 07:11:11 crc kubenswrapper[4681]: I1123 07:11:11.057923 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2xjj\" (UniqueName: \"kubernetes.io/projected/ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03-kube-api-access-c2xjj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vrvj8\" (UID: \"ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrvj8" Nov 23 07:11:11 crc kubenswrapper[4681]: I1123 07:11:11.082399 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrvj8" Nov 23 07:11:11 crc kubenswrapper[4681]: I1123 07:11:11.562833 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrvj8"] Nov 23 07:11:11 crc kubenswrapper[4681]: I1123 07:11:11.678594 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrvj8" event={"ID":"ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03","Type":"ContainerStarted","Data":"ca744ff99eec2c569080430ee6b70e573ad7ec9a9afab87eafc7c739fee2b22b"} Nov 23 07:11:12 crc kubenswrapper[4681]: I1123 07:11:12.691165 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrvj8" event={"ID":"ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03","Type":"ContainerStarted","Data":"9505344e17390d02764273eae22254f857c54b15ef9fa62ea471a2cbb14c6bce"} Nov 23 07:11:12 crc kubenswrapper[4681]: I1123 07:11:12.711832 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrvj8" podStartSLOduration=2.23703202 podStartE2EDuration="2.711812905s" podCreationTimestamp="2025-11-23 07:11:10 +0000 UTC" firstStartedPulling="2025-11-23 07:11:11.56863372 +0000 UTC m=+1608.638142957" lastFinishedPulling="2025-11-23 07:11:12.043414606 +0000 UTC m=+1609.112923842" observedRunningTime="2025-11-23 07:11:12.708324721 +0000 UTC m=+1609.777833958" watchObservedRunningTime="2025-11-23 07:11:12.711812905 +0000 UTC m=+1609.781322142" Nov 23 07:11:18 crc kubenswrapper[4681]: I1123 07:11:18.252168 4681 scope.go:117] "RemoveContainer" containerID="a5380963080b6fe6bf2216624264d97b2ea5554bfe17e9b170d2c2b9f9ced66c" Nov 23 07:11:18 crc kubenswrapper[4681]: E1123 07:11:18.253671 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:11:26 crc kubenswrapper[4681]: I1123 07:11:26.035995 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qbt7w"] Nov 23 07:11:26 crc kubenswrapper[4681]: I1123 07:11:26.040551 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qbt7w"] Nov 23 07:11:27 crc kubenswrapper[4681]: I1123 07:11:27.260978 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8667103d-4a0c-4396-a403-d4be07f276cf" path="/var/lib/kubelet/pods/8667103d-4a0c-4396-a403-d4be07f276cf/volumes" Nov 23 07:11:33 crc kubenswrapper[4681]: I1123 07:11:33.256268 4681 scope.go:117] "RemoveContainer" containerID="a5380963080b6fe6bf2216624264d97b2ea5554bfe17e9b170d2c2b9f9ced66c" Nov 23 07:11:33 crc kubenswrapper[4681]: E1123 07:11:33.257243 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:11:45 crc kubenswrapper[4681]: I1123 07:11:45.601804 4681 scope.go:117] "RemoveContainer" containerID="4aa5d9f187537748e09310e6b8a577328c9bbcd9a5d12cebaca849b4df68534a" Nov 23 07:11:45 crc kubenswrapper[4681]: I1123 07:11:45.624202 4681 scope.go:117] "RemoveContainer" containerID="79101d9508bf5f5c66972e162c27631ae5850a756be29d2b85a8a3ce7cdf3679" Nov 23 07:11:45 crc kubenswrapper[4681]: I1123 07:11:45.665296 4681 scope.go:117] "RemoveContainer" containerID="5aae89a237e02aadb5f3ea4821ce5addfda55baa3f3081b1c4594f9048b8cc51" Nov 23 07:11:45 crc kubenswrapper[4681]: I1123 07:11:45.691561 4681 scope.go:117] "RemoveContainer" containerID="cbd1f08432329486fd56f087fa8782900772a396a9361c013495b0b9048ec87d" Nov 23 07:11:45 crc kubenswrapper[4681]: I1123 07:11:45.726590 4681 scope.go:117] "RemoveContainer" containerID="3a04dbefcd352a6b87bc8695b79e3b41abbf89680e4f1896c073b52fbb80c9e2" Nov 23 07:11:45 crc kubenswrapper[4681]: I1123 07:11:45.762807 4681 scope.go:117] "RemoveContainer" containerID="5dc819c39d1aa87cd07aba4c5b2322043ce97f4b208163f9a901b77977345c5f" Nov 23 07:11:45 crc kubenswrapper[4681]: I1123 07:11:45.790210 4681 scope.go:117] "RemoveContainer" containerID="991f745a0105a1be16fe82e34bbf424b0402e409ba06479813cce39477a68c43" Nov 23 07:11:46 crc kubenswrapper[4681]: I1123 07:11:46.025480 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-sl56s"] Nov 23 07:11:46 crc kubenswrapper[4681]: I1123 07:11:46.030617 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-sl56s"] Nov 23 07:11:47 crc kubenswrapper[4681]: I1123 07:11:47.265135 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb" path="/var/lib/kubelet/pods/b82f9a8e-19a6-42eb-93a1-6fa5312fb0cb/volumes" Nov 23 07:11:48 crc kubenswrapper[4681]: I1123 07:11:48.035541 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-n5wff"] Nov 23 07:11:48 crc 
kubenswrapper[4681]: I1123 07:11:48.065055 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-n5wff"] Nov 23 07:11:48 crc kubenswrapper[4681]: I1123 07:11:48.252696 4681 scope.go:117] "RemoveContainer" containerID="a5380963080b6fe6bf2216624264d97b2ea5554bfe17e9b170d2c2b9f9ced66c" Nov 23 07:11:48 crc kubenswrapper[4681]: E1123 07:11:48.253087 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:11:49 crc kubenswrapper[4681]: I1123 07:11:49.261496 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f0f8d82-e774-42f2-b0a4-df7abb1ce348" path="/var/lib/kubelet/pods/1f0f8d82-e774-42f2-b0a4-df7abb1ce348/volumes" Nov 23 07:11:49 crc kubenswrapper[4681]: I1123 07:11:49.993868 4681 generic.go:334] "Generic (PLEG): container finished" podID="ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03" containerID="9505344e17390d02764273eae22254f857c54b15ef9fa62ea471a2cbb14c6bce" exitCode=0 Nov 23 07:11:49 crc kubenswrapper[4681]: I1123 07:11:49.993909 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrvj8" event={"ID":"ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03","Type":"ContainerDied","Data":"9505344e17390d02764273eae22254f857c54b15ef9fa62ea471a2cbb14c6bce"} Nov 23 07:11:51 crc kubenswrapper[4681]: I1123 07:11:51.328590 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrvj8" Nov 23 07:11:51 crc kubenswrapper[4681]: I1123 07:11:51.518803 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2xjj\" (UniqueName: \"kubernetes.io/projected/ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03-kube-api-access-c2xjj\") pod \"ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03\" (UID: \"ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03\") " Nov 23 07:11:51 crc kubenswrapper[4681]: I1123 07:11:51.518882 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03-ssh-key\") pod \"ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03\" (UID: \"ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03\") " Nov 23 07:11:51 crc kubenswrapper[4681]: I1123 07:11:51.518924 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03-inventory\") pod \"ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03\" (UID: \"ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03\") " Nov 23 07:11:51 crc kubenswrapper[4681]: I1123 07:11:51.525088 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03-kube-api-access-c2xjj" (OuterVolumeSpecName: "kube-api-access-c2xjj") pod "ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03" (UID: "ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03"). InnerVolumeSpecName "kube-api-access-c2xjj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:11:51 crc kubenswrapper[4681]: I1123 07:11:51.542822 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03" (UID: "ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:11:51 crc kubenswrapper[4681]: I1123 07:11:51.544238 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03-inventory" (OuterVolumeSpecName: "inventory") pod "ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03" (UID: "ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:11:51 crc kubenswrapper[4681]: I1123 07:11:51.621701 4681 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:51 crc kubenswrapper[4681]: I1123 07:11:51.621951 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2xjj\" (UniqueName: \"kubernetes.io/projected/ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03-kube-api-access-c2xjj\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:51 crc kubenswrapper[4681]: I1123 07:11:51.622497 4681 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:52 crc kubenswrapper[4681]: I1123 07:11:52.009414 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrvj8" event={"ID":"ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03","Type":"ContainerDied","Data":"ca744ff99eec2c569080430ee6b70e573ad7ec9a9afab87eafc7c739fee2b22b"} Nov 23 07:11:52 crc kubenswrapper[4681]: I1123 07:11:52.009472 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca744ff99eec2c569080430ee6b70e573ad7ec9a9afab87eafc7c739fee2b22b" Nov 23 07:11:52 crc kubenswrapper[4681]: I1123 07:11:52.009548 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrvj8" Nov 23 07:11:52 crc kubenswrapper[4681]: I1123 07:11:52.091282 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-6v8gc"] Nov 23 07:11:52 crc kubenswrapper[4681]: E1123 07:11:52.091860 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 23 07:11:52 crc kubenswrapper[4681]: I1123 07:11:52.091883 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 23 07:11:52 crc kubenswrapper[4681]: I1123 07:11:52.096941 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba83e52a-ab31-4d66-aaa3-ad7aea2e4f03" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 23 07:11:52 crc kubenswrapper[4681]: I1123 07:11:52.097731 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-6v8gc" Nov 23 07:11:52 crc kubenswrapper[4681]: I1123 07:11:52.100934 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 07:11:52 crc kubenswrapper[4681]: I1123 07:11:52.101025 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 07:11:52 crc kubenswrapper[4681]: I1123 07:11:52.101153 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 07:11:52 crc kubenswrapper[4681]: I1123 07:11:52.101272 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rchgk" Nov 23 07:11:52 crc kubenswrapper[4681]: I1123 07:11:52.110115 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-6v8gc"] Nov 23 07:11:52 crc kubenswrapper[4681]: I1123 07:11:52.240833 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/bb45e838-97f2-4461-ad42-b33f895f0160-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-6v8gc\" (UID: \"bb45e838-97f2-4461-ad42-b33f895f0160\") " pod="openstack/ssh-known-hosts-edpm-deployment-6v8gc" Nov 23 07:11:52 crc kubenswrapper[4681]: I1123 07:11:52.241139 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bb45e838-97f2-4461-ad42-b33f895f0160-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-6v8gc\" (UID: \"bb45e838-97f2-4461-ad42-b33f895f0160\") " pod="openstack/ssh-known-hosts-edpm-deployment-6v8gc" Nov 23 07:11:52 crc kubenswrapper[4681]: I1123 07:11:52.241259 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvxmb\" (UniqueName: \"kubernetes.io/projected/bb45e838-97f2-4461-ad42-b33f895f0160-kube-api-access-xvxmb\") pod \"ssh-known-hosts-edpm-deployment-6v8gc\" (UID: \"bb45e838-97f2-4461-ad42-b33f895f0160\") " pod="openstack/ssh-known-hosts-edpm-deployment-6v8gc" Nov 23 07:11:52 crc kubenswrapper[4681]: I1123 07:11:52.342774 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bb45e838-97f2-4461-ad42-b33f895f0160-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-6v8gc\" (UID: \"bb45e838-97f2-4461-ad42-b33f895f0160\") " pod="openstack/ssh-known-hosts-edpm-deployment-6v8gc" Nov 23 07:11:52 crc kubenswrapper[4681]: I1123 07:11:52.343822 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvxmb\" (UniqueName: \"kubernetes.io/projected/bb45e838-97f2-4461-ad42-b33f895f0160-kube-api-access-xvxmb\") pod \"ssh-known-hosts-edpm-deployment-6v8gc\" (UID: \"bb45e838-97f2-4461-ad42-b33f895f0160\") " pod="openstack/ssh-known-hosts-edpm-deployment-6v8gc" Nov 23 07:11:52 crc kubenswrapper[4681]: I1123 07:11:52.344071 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/bb45e838-97f2-4461-ad42-b33f895f0160-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-6v8gc\" (UID: \"bb45e838-97f2-4461-ad42-b33f895f0160\") " pod="openstack/ssh-known-hosts-edpm-deployment-6v8gc" Nov 23 07:11:52 crc 
kubenswrapper[4681]: I1123 07:11:52.346552 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bb45e838-97f2-4461-ad42-b33f895f0160-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-6v8gc\" (UID: \"bb45e838-97f2-4461-ad42-b33f895f0160\") " pod="openstack/ssh-known-hosts-edpm-deployment-6v8gc" Nov 23 07:11:52 crc kubenswrapper[4681]: I1123 07:11:52.346976 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/bb45e838-97f2-4461-ad42-b33f895f0160-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-6v8gc\" (UID: \"bb45e838-97f2-4461-ad42-b33f895f0160\") " pod="openstack/ssh-known-hosts-edpm-deployment-6v8gc" Nov 23 07:11:52 crc kubenswrapper[4681]: I1123 07:11:52.359520 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvxmb\" (UniqueName: \"kubernetes.io/projected/bb45e838-97f2-4461-ad42-b33f895f0160-kube-api-access-xvxmb\") pod \"ssh-known-hosts-edpm-deployment-6v8gc\" (UID: \"bb45e838-97f2-4461-ad42-b33f895f0160\") " pod="openstack/ssh-known-hosts-edpm-deployment-6v8gc" Nov 23 07:11:52 crc kubenswrapper[4681]: I1123 07:11:52.420004 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-6v8gc" Nov 23 07:11:52 crc kubenswrapper[4681]: I1123 07:11:52.883408 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-6v8gc"] Nov 23 07:11:53 crc kubenswrapper[4681]: I1123 07:11:53.020269 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-6v8gc" event={"ID":"bb45e838-97f2-4461-ad42-b33f895f0160","Type":"ContainerStarted","Data":"03218c58663361245b5fa14cb46351a700f0af2892e4e6cd2020d83f71574793"} Nov 23 07:11:54 crc kubenswrapper[4681]: I1123 07:11:54.028517 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-6v8gc" event={"ID":"bb45e838-97f2-4461-ad42-b33f895f0160","Type":"ContainerStarted","Data":"58d0931385012ebfbdfeb55b5bef6570b934bf2597ca8c993cdfe9cf913e880c"} Nov 23 07:11:54 crc kubenswrapper[4681]: I1123 07:11:54.044061 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-6v8gc" podStartSLOduration=1.5424937349999999 podStartE2EDuration="2.044043829s" podCreationTimestamp="2025-11-23 07:11:52 +0000 UTC" firstStartedPulling="2025-11-23 07:11:52.895213141 +0000 UTC m=+1649.964722378" lastFinishedPulling="2025-11-23 07:11:53.396763235 +0000 UTC m=+1650.466272472" observedRunningTime="2025-11-23 07:11:54.043276472 +0000 UTC m=+1651.112785709" watchObservedRunningTime="2025-11-23 07:11:54.044043829 +0000 UTC m=+1651.113553066" Nov 23 07:11:58 crc kubenswrapper[4681]: E1123 07:11:58.742590 4681 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb45e838_97f2_4461_ad42_b33f895f0160.slice/crio-58d0931385012ebfbdfeb55b5bef6570b934bf2597ca8c993cdfe9cf913e880c.scope\": RecentStats: unable to find data in memory cache]" Nov 23 07:11:59 crc kubenswrapper[4681]: I1123 07:11:59.065789 4681 generic.go:334] "Generic (PLEG): container finished" podID="bb45e838-97f2-4461-ad42-b33f895f0160" containerID="58d0931385012ebfbdfeb55b5bef6570b934bf2597ca8c993cdfe9cf913e880c" exitCode=0 Nov 23 07:11:59 crc 
kubenswrapper[4681]: I1123 07:11:59.065831 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-6v8gc" event={"ID":"bb45e838-97f2-4461-ad42-b33f895f0160","Type":"ContainerDied","Data":"58d0931385012ebfbdfeb55b5bef6570b934bf2597ca8c993cdfe9cf913e880c"} Nov 23 07:12:00 crc kubenswrapper[4681]: I1123 07:12:00.384009 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-6v8gc" Nov 23 07:12:00 crc kubenswrapper[4681]: I1123 07:12:00.510235 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvxmb\" (UniqueName: \"kubernetes.io/projected/bb45e838-97f2-4461-ad42-b33f895f0160-kube-api-access-xvxmb\") pod \"bb45e838-97f2-4461-ad42-b33f895f0160\" (UID: \"bb45e838-97f2-4461-ad42-b33f895f0160\") " Nov 23 07:12:00 crc kubenswrapper[4681]: I1123 07:12:00.510319 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/bb45e838-97f2-4461-ad42-b33f895f0160-inventory-0\") pod \"bb45e838-97f2-4461-ad42-b33f895f0160\" (UID: \"bb45e838-97f2-4461-ad42-b33f895f0160\") " Nov 23 07:12:00 crc kubenswrapper[4681]: I1123 07:12:00.510571 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bb45e838-97f2-4461-ad42-b33f895f0160-ssh-key-openstack-edpm-ipam\") pod \"bb45e838-97f2-4461-ad42-b33f895f0160\" (UID: \"bb45e838-97f2-4461-ad42-b33f895f0160\") " Nov 23 07:12:00 crc kubenswrapper[4681]: I1123 07:12:00.515801 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb45e838-97f2-4461-ad42-b33f895f0160-kube-api-access-xvxmb" (OuterVolumeSpecName: "kube-api-access-xvxmb") pod "bb45e838-97f2-4461-ad42-b33f895f0160" (UID: "bb45e838-97f2-4461-ad42-b33f895f0160"). InnerVolumeSpecName "kube-api-access-xvxmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:12:00 crc kubenswrapper[4681]: I1123 07:12:00.531821 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb45e838-97f2-4461-ad42-b33f895f0160-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bb45e838-97f2-4461-ad42-b33f895f0160" (UID: "bb45e838-97f2-4461-ad42-b33f895f0160"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:00 crc kubenswrapper[4681]: I1123 07:12:00.532417 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb45e838-97f2-4461-ad42-b33f895f0160-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "bb45e838-97f2-4461-ad42-b33f895f0160" (UID: "bb45e838-97f2-4461-ad42-b33f895f0160"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:00 crc kubenswrapper[4681]: I1123 07:12:00.613812 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvxmb\" (UniqueName: \"kubernetes.io/projected/bb45e838-97f2-4461-ad42-b33f895f0160-kube-api-access-xvxmb\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:00 crc kubenswrapper[4681]: I1123 07:12:00.613844 4681 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/bb45e838-97f2-4461-ad42-b33f895f0160-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:00 crc kubenswrapper[4681]: I1123 07:12:00.613857 4681 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bb45e838-97f2-4461-ad42-b33f895f0160-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:01 crc kubenswrapper[4681]: I1123 07:12:01.082769 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-6v8gc" event={"ID":"bb45e838-97f2-4461-ad42-b33f895f0160","Type":"ContainerDied","Data":"03218c58663361245b5fa14cb46351a700f0af2892e4e6cd2020d83f71574793"} Nov 23 07:12:01 crc kubenswrapper[4681]: I1123 07:12:01.082815 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03218c58663361245b5fa14cb46351a700f0af2892e4e6cd2020d83f71574793" Nov 23 07:12:01 crc kubenswrapper[4681]: I1123 07:12:01.082874 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-6v8gc" Nov 23 07:12:01 crc kubenswrapper[4681]: I1123 07:12:01.141106 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-ds8vq"] Nov 23 07:12:01 crc kubenswrapper[4681]: E1123 07:12:01.141558 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb45e838-97f2-4461-ad42-b33f895f0160" containerName="ssh-known-hosts-edpm-deployment" Nov 23 07:12:01 crc kubenswrapper[4681]: I1123 07:12:01.141577 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb45e838-97f2-4461-ad42-b33f895f0160" containerName="ssh-known-hosts-edpm-deployment" Nov 23 07:12:01 crc kubenswrapper[4681]: I1123 07:12:01.141835 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb45e838-97f2-4461-ad42-b33f895f0160" containerName="ssh-known-hosts-edpm-deployment" Nov 23 07:12:01 crc kubenswrapper[4681]: I1123 07:12:01.142561 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ds8vq" Nov 23 07:12:01 crc kubenswrapper[4681]: I1123 07:12:01.146104 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rchgk" Nov 23 07:12:01 crc kubenswrapper[4681]: I1123 07:12:01.146146 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 07:12:01 crc kubenswrapper[4681]: I1123 07:12:01.146441 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 07:12:01 crc kubenswrapper[4681]: I1123 07:12:01.146494 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 07:12:01 crc kubenswrapper[4681]: I1123 07:12:01.161127 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-ds8vq"] Nov 23 07:12:01 crc kubenswrapper[4681]: I1123 07:12:01.252144 4681 scope.go:117] "RemoveContainer" containerID="a5380963080b6fe6bf2216624264d97b2ea5554bfe17e9b170d2c2b9f9ced66c" Nov 23 07:12:01 crc kubenswrapper[4681]: E1123 07:12:01.252518 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:12:01 crc kubenswrapper[4681]: I1123 07:12:01.330413 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a203e7b1-b35e-4d1c-b9ca-ce06653a2723-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ds8vq\" (UID: \"a203e7b1-b35e-4d1c-b9ca-ce06653a2723\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ds8vq" Nov 23 07:12:01 crc kubenswrapper[4681]: I1123 07:12:01.330802 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd2j8\" (UniqueName: \"kubernetes.io/projected/a203e7b1-b35e-4d1c-b9ca-ce06653a2723-kube-api-access-pd2j8\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ds8vq\" (UID: \"a203e7b1-b35e-4d1c-b9ca-ce06653a2723\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ds8vq" Nov 23 07:12:01 crc kubenswrapper[4681]: I1123 07:12:01.331147 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a203e7b1-b35e-4d1c-b9ca-ce06653a2723-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ds8vq\" (UID: \"a203e7b1-b35e-4d1c-b9ca-ce06653a2723\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ds8vq" Nov 23 07:12:01 crc kubenswrapper[4681]: I1123 07:12:01.433012 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a203e7b1-b35e-4d1c-b9ca-ce06653a2723-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ds8vq\" (UID: \"a203e7b1-b35e-4d1c-b9ca-ce06653a2723\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ds8vq" Nov 23 07:12:01 crc kubenswrapper[4681]: I1123 07:12:01.433072 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key\" (UniqueName: \"kubernetes.io/secret/a203e7b1-b35e-4d1c-b9ca-ce06653a2723-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ds8vq\" (UID: \"a203e7b1-b35e-4d1c-b9ca-ce06653a2723\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ds8vq" Nov 23 07:12:01 crc kubenswrapper[4681]: I1123 07:12:01.433095 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd2j8\" (UniqueName: \"kubernetes.io/projected/a203e7b1-b35e-4d1c-b9ca-ce06653a2723-kube-api-access-pd2j8\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ds8vq\" (UID: \"a203e7b1-b35e-4d1c-b9ca-ce06653a2723\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ds8vq" Nov 23 07:12:01 crc kubenswrapper[4681]: I1123 07:12:01.436894 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a203e7b1-b35e-4d1c-b9ca-ce06653a2723-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ds8vq\" (UID: \"a203e7b1-b35e-4d1c-b9ca-ce06653a2723\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ds8vq" Nov 23 07:12:01 crc kubenswrapper[4681]: I1123 07:12:01.440899 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a203e7b1-b35e-4d1c-b9ca-ce06653a2723-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ds8vq\" (UID: \"a203e7b1-b35e-4d1c-b9ca-ce06653a2723\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ds8vq" Nov 23 07:12:01 crc kubenswrapper[4681]: I1123 07:12:01.452442 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd2j8\" (UniqueName: \"kubernetes.io/projected/a203e7b1-b35e-4d1c-b9ca-ce06653a2723-kube-api-access-pd2j8\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ds8vq\" (UID: \"a203e7b1-b35e-4d1c-b9ca-ce06653a2723\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ds8vq" Nov 23 07:12:01 crc kubenswrapper[4681]: I1123 07:12:01.458785 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ds8vq" Nov 23 07:12:01 crc kubenswrapper[4681]: I1123 07:12:01.923084 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-ds8vq"] Nov 23 07:12:02 crc kubenswrapper[4681]: I1123 07:12:02.091791 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ds8vq" event={"ID":"a203e7b1-b35e-4d1c-b9ca-ce06653a2723","Type":"ContainerStarted","Data":"91adb5ba69b84e770123d3cfe98fbb5e60111502426b1b1ea617c58c81cc1fb6"} Nov 23 07:12:03 crc kubenswrapper[4681]: I1123 07:12:03.101999 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ds8vq" event={"ID":"a203e7b1-b35e-4d1c-b9ca-ce06653a2723","Type":"ContainerStarted","Data":"23ddce268af383f68faf71f7c4d3c32c9549cb236cacb3e263fdcc75ebadb7ae"} Nov 23 07:12:03 crc kubenswrapper[4681]: I1123 07:12:03.126272 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ds8vq" podStartSLOduration=1.6143555200000002 podStartE2EDuration="2.126247196s" podCreationTimestamp="2025-11-23 07:12:01 +0000 UTC" firstStartedPulling="2025-11-23 07:12:01.918919656 +0000 UTC m=+1658.988428893" lastFinishedPulling="2025-11-23 07:12:02.430811332 +0000 UTC m=+1659.500320569" observedRunningTime="2025-11-23 07:12:03.112656869 +0000 UTC m=+1660.182166106" watchObservedRunningTime="2025-11-23 07:12:03.126247196 +0000 UTC m=+1660.195756433" Nov 23 07:12:09 crc kubenswrapper[4681]: I1123 07:12:09.151495 4681 generic.go:334] "Generic (PLEG): container finished" podID="a203e7b1-b35e-4d1c-b9ca-ce06653a2723" containerID="23ddce268af383f68faf71f7c4d3c32c9549cb236cacb3e263fdcc75ebadb7ae" exitCode=0 Nov 23 07:12:09 crc kubenswrapper[4681]: I1123 07:12:09.151573 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ds8vq" event={"ID":"a203e7b1-b35e-4d1c-b9ca-ce06653a2723","Type":"ContainerDied","Data":"23ddce268af383f68faf71f7c4d3c32c9549cb236cacb3e263fdcc75ebadb7ae"} Nov 23 07:12:10 crc kubenswrapper[4681]: I1123 07:12:10.491741 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ds8vq" Nov 23 07:12:10 crc kubenswrapper[4681]: I1123 07:12:10.519990 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a203e7b1-b35e-4d1c-b9ca-ce06653a2723-inventory\") pod \"a203e7b1-b35e-4d1c-b9ca-ce06653a2723\" (UID: \"a203e7b1-b35e-4d1c-b9ca-ce06653a2723\") " Nov 23 07:12:10 crc kubenswrapper[4681]: I1123 07:12:10.520237 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a203e7b1-b35e-4d1c-b9ca-ce06653a2723-ssh-key\") pod \"a203e7b1-b35e-4d1c-b9ca-ce06653a2723\" (UID: \"a203e7b1-b35e-4d1c-b9ca-ce06653a2723\") " Nov 23 07:12:10 crc kubenswrapper[4681]: I1123 07:12:10.520482 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pd2j8\" (UniqueName: \"kubernetes.io/projected/a203e7b1-b35e-4d1c-b9ca-ce06653a2723-kube-api-access-pd2j8\") pod \"a203e7b1-b35e-4d1c-b9ca-ce06653a2723\" (UID: \"a203e7b1-b35e-4d1c-b9ca-ce06653a2723\") " Nov 23 07:12:10 crc kubenswrapper[4681]: I1123 07:12:10.535645 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a203e7b1-b35e-4d1c-b9ca-ce06653a2723-kube-api-access-pd2j8" (OuterVolumeSpecName: "kube-api-access-pd2j8") pod "a203e7b1-b35e-4d1c-b9ca-ce06653a2723" (UID: "a203e7b1-b35e-4d1c-b9ca-ce06653a2723"). InnerVolumeSpecName "kube-api-access-pd2j8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:12:10 crc kubenswrapper[4681]: I1123 07:12:10.545552 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a203e7b1-b35e-4d1c-b9ca-ce06653a2723-inventory" (OuterVolumeSpecName: "inventory") pod "a203e7b1-b35e-4d1c-b9ca-ce06653a2723" (UID: "a203e7b1-b35e-4d1c-b9ca-ce06653a2723"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:10 crc kubenswrapper[4681]: I1123 07:12:10.549638 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a203e7b1-b35e-4d1c-b9ca-ce06653a2723-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a203e7b1-b35e-4d1c-b9ca-ce06653a2723" (UID: "a203e7b1-b35e-4d1c-b9ca-ce06653a2723"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:10 crc kubenswrapper[4681]: I1123 07:12:10.622739 4681 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a203e7b1-b35e-4d1c-b9ca-ce06653a2723-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:10 crc kubenswrapper[4681]: I1123 07:12:10.622770 4681 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a203e7b1-b35e-4d1c-b9ca-ce06653a2723-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:10 crc kubenswrapper[4681]: I1123 07:12:10.622782 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pd2j8\" (UniqueName: \"kubernetes.io/projected/a203e7b1-b35e-4d1c-b9ca-ce06653a2723-kube-api-access-pd2j8\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:11 crc kubenswrapper[4681]: I1123 07:12:11.172923 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ds8vq" event={"ID":"a203e7b1-b35e-4d1c-b9ca-ce06653a2723","Type":"ContainerDied","Data":"91adb5ba69b84e770123d3cfe98fbb5e60111502426b1b1ea617c58c81cc1fb6"} Nov 23 07:12:11 crc kubenswrapper[4681]: I1123 07:12:11.172979 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91adb5ba69b84e770123d3cfe98fbb5e60111502426b1b1ea617c58c81cc1fb6" Nov 23 07:12:11 crc kubenswrapper[4681]: I1123 07:12:11.173046 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ds8vq" Nov 23 07:12:11 crc kubenswrapper[4681]: I1123 07:12:11.263200 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgrgl"] Nov 23 07:12:11 crc kubenswrapper[4681]: E1123 07:12:11.263537 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a203e7b1-b35e-4d1c-b9ca-ce06653a2723" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 23 07:12:11 crc kubenswrapper[4681]: I1123 07:12:11.263555 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="a203e7b1-b35e-4d1c-b9ca-ce06653a2723" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 23 07:12:11 crc kubenswrapper[4681]: I1123 07:12:11.263728 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="a203e7b1-b35e-4d1c-b9ca-ce06653a2723" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 23 07:12:11 crc kubenswrapper[4681]: I1123 07:12:11.264256 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgrgl"] Nov 23 07:12:11 crc kubenswrapper[4681]: I1123 07:12:11.264331 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgrgl" Nov 23 07:12:11 crc kubenswrapper[4681]: I1123 07:12:11.266065 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rchgk" Nov 23 07:12:11 crc kubenswrapper[4681]: I1123 07:12:11.267967 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 07:12:11 crc kubenswrapper[4681]: I1123 07:12:11.268605 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 07:12:11 crc kubenswrapper[4681]: I1123 07:12:11.270947 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 07:12:11 crc kubenswrapper[4681]: I1123 07:12:11.437660 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5da210c-73e9-431c-a664-386b0b2ecfa6-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vgrgl\" (UID: \"e5da210c-73e9-431c-a664-386b0b2ecfa6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgrgl" Nov 23 07:12:11 crc kubenswrapper[4681]: I1123 07:12:11.438006 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5da210c-73e9-431c-a664-386b0b2ecfa6-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vgrgl\" (UID: \"e5da210c-73e9-431c-a664-386b0b2ecfa6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgrgl" Nov 23 07:12:11 crc kubenswrapper[4681]: I1123 07:12:11.438037 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7vs9\" (UniqueName: \"kubernetes.io/projected/e5da210c-73e9-431c-a664-386b0b2ecfa6-kube-api-access-n7vs9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vgrgl\" (UID: \"e5da210c-73e9-431c-a664-386b0b2ecfa6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgrgl" Nov 23 07:12:11 crc kubenswrapper[4681]: I1123 07:12:11.540085 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5da210c-73e9-431c-a664-386b0b2ecfa6-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vgrgl\" (UID: \"e5da210c-73e9-431c-a664-386b0b2ecfa6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgrgl" Nov 23 07:12:11 crc kubenswrapper[4681]: I1123 07:12:11.540130 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7vs9\" (UniqueName: \"kubernetes.io/projected/e5da210c-73e9-431c-a664-386b0b2ecfa6-kube-api-access-n7vs9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vgrgl\" (UID: \"e5da210c-73e9-431c-a664-386b0b2ecfa6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgrgl" Nov 23 07:12:11 crc kubenswrapper[4681]: I1123 07:12:11.540237 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5da210c-73e9-431c-a664-386b0b2ecfa6-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vgrgl\" (UID: \"e5da210c-73e9-431c-a664-386b0b2ecfa6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgrgl" Nov 23 07:12:11 crc kubenswrapper[4681]: I1123 07:12:11.543636 4681 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5da210c-73e9-431c-a664-386b0b2ecfa6-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vgrgl\" (UID: \"e5da210c-73e9-431c-a664-386b0b2ecfa6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgrgl" Nov 23 07:12:11 crc kubenswrapper[4681]: I1123 07:12:11.543675 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5da210c-73e9-431c-a664-386b0b2ecfa6-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vgrgl\" (UID: \"e5da210c-73e9-431c-a664-386b0b2ecfa6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgrgl" Nov 23 07:12:11 crc kubenswrapper[4681]: I1123 07:12:11.558480 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7vs9\" (UniqueName: \"kubernetes.io/projected/e5da210c-73e9-431c-a664-386b0b2ecfa6-kube-api-access-n7vs9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vgrgl\" (UID: \"e5da210c-73e9-431c-a664-386b0b2ecfa6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgrgl" Nov 23 07:12:11 crc kubenswrapper[4681]: I1123 07:12:11.580347 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgrgl" Nov 23 07:12:12 crc kubenswrapper[4681]: I1123 07:12:12.034269 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgrgl"] Nov 23 07:12:12 crc kubenswrapper[4681]: I1123 07:12:12.192849 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgrgl" event={"ID":"e5da210c-73e9-431c-a664-386b0b2ecfa6","Type":"ContainerStarted","Data":"24303a029a58fe95d117330033f81fc27b1687b9d33cb9bcb99d633ed06b70de"} Nov 23 07:12:13 crc kubenswrapper[4681]: I1123 07:12:13.200491 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgrgl" event={"ID":"e5da210c-73e9-431c-a664-386b0b2ecfa6","Type":"ContainerStarted","Data":"78bfd47bc8ce0031364d7368697ecc9bb891a507be11ff25d157fc37cca6378e"} Nov 23 07:12:13 crc kubenswrapper[4681]: I1123 07:12:13.215249 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgrgl" podStartSLOduration=1.7317726420000001 podStartE2EDuration="2.215231525s" podCreationTimestamp="2025-11-23 07:12:11 +0000 UTC" firstStartedPulling="2025-11-23 07:12:12.036576021 +0000 UTC m=+1669.106085258" lastFinishedPulling="2025-11-23 07:12:12.520034914 +0000 UTC m=+1669.589544141" observedRunningTime="2025-11-23 07:12:13.211357453 +0000 UTC m=+1670.280866691" watchObservedRunningTime="2025-11-23 07:12:13.215231525 +0000 UTC m=+1670.284740763" Nov 23 07:12:14 crc kubenswrapper[4681]: I1123 07:12:14.251418 4681 scope.go:117] "RemoveContainer" containerID="a5380963080b6fe6bf2216624264d97b2ea5554bfe17e9b170d2c2b9f9ced66c" Nov 23 07:12:14 crc kubenswrapper[4681]: E1123 07:12:14.252005 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" 
podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:12:20 crc kubenswrapper[4681]: I1123 07:12:20.249744 4681 generic.go:334] "Generic (PLEG): container finished" podID="e5da210c-73e9-431c-a664-386b0b2ecfa6" containerID="78bfd47bc8ce0031364d7368697ecc9bb891a507be11ff25d157fc37cca6378e" exitCode=0 Nov 23 07:12:20 crc kubenswrapper[4681]: I1123 07:12:20.249842 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgrgl" event={"ID":"e5da210c-73e9-431c-a664-386b0b2ecfa6","Type":"ContainerDied","Data":"78bfd47bc8ce0031364d7368697ecc9bb891a507be11ff25d157fc37cca6378e"} Nov 23 07:12:21 crc kubenswrapper[4681]: I1123 07:12:21.580751 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgrgl" Nov 23 07:12:21 crc kubenswrapper[4681]: I1123 07:12:21.657179 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5da210c-73e9-431c-a664-386b0b2ecfa6-inventory\") pod \"e5da210c-73e9-431c-a664-386b0b2ecfa6\" (UID: \"e5da210c-73e9-431c-a664-386b0b2ecfa6\") " Nov 23 07:12:21 crc kubenswrapper[4681]: I1123 07:12:21.657237 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7vs9\" (UniqueName: \"kubernetes.io/projected/e5da210c-73e9-431c-a664-386b0b2ecfa6-kube-api-access-n7vs9\") pod \"e5da210c-73e9-431c-a664-386b0b2ecfa6\" (UID: \"e5da210c-73e9-431c-a664-386b0b2ecfa6\") " Nov 23 07:12:21 crc kubenswrapper[4681]: I1123 07:12:21.657305 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5da210c-73e9-431c-a664-386b0b2ecfa6-ssh-key\") pod \"e5da210c-73e9-431c-a664-386b0b2ecfa6\" (UID: \"e5da210c-73e9-431c-a664-386b0b2ecfa6\") " Nov 23 07:12:21 crc kubenswrapper[4681]: I1123 07:12:21.666422 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5da210c-73e9-431c-a664-386b0b2ecfa6-kube-api-access-n7vs9" (OuterVolumeSpecName: "kube-api-access-n7vs9") pod "e5da210c-73e9-431c-a664-386b0b2ecfa6" (UID: "e5da210c-73e9-431c-a664-386b0b2ecfa6"). InnerVolumeSpecName "kube-api-access-n7vs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:12:21 crc kubenswrapper[4681]: I1123 07:12:21.682019 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5da210c-73e9-431c-a664-386b0b2ecfa6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e5da210c-73e9-431c-a664-386b0b2ecfa6" (UID: "e5da210c-73e9-431c-a664-386b0b2ecfa6"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:21 crc kubenswrapper[4681]: I1123 07:12:21.685343 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5da210c-73e9-431c-a664-386b0b2ecfa6-inventory" (OuterVolumeSpecName: "inventory") pod "e5da210c-73e9-431c-a664-386b0b2ecfa6" (UID: "e5da210c-73e9-431c-a664-386b0b2ecfa6"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:21 crc kubenswrapper[4681]: I1123 07:12:21.760130 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7vs9\" (UniqueName: \"kubernetes.io/projected/e5da210c-73e9-431c-a664-386b0b2ecfa6-kube-api-access-n7vs9\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:21 crc kubenswrapper[4681]: I1123 07:12:21.760161 4681 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5da210c-73e9-431c-a664-386b0b2ecfa6-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:21 crc kubenswrapper[4681]: I1123 07:12:21.760171 4681 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5da210c-73e9-431c-a664-386b0b2ecfa6-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.268936 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgrgl" event={"ID":"e5da210c-73e9-431c-a664-386b0b2ecfa6","Type":"ContainerDied","Data":"24303a029a58fe95d117330033f81fc27b1687b9d33cb9bcb99d633ed06b70de"} Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.268995 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24303a029a58fe95d117330033f81fc27b1687b9d33cb9bcb99d633ed06b70de" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.269011 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgrgl" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.343910 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v"] Nov 23 07:12:22 crc kubenswrapper[4681]: E1123 07:12:22.344341 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5da210c-73e9-431c-a664-386b0b2ecfa6" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.344360 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5da210c-73e9-431c-a664-386b0b2ecfa6" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.344595 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5da210c-73e9-431c-a664-386b0b2ecfa6" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.345200 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.351139 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.351139 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.351375 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.351422 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.351441 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rchgk" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.351526 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.351561 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.351632 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.369550 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v"] Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.372587 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.372634 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.372730 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.372784 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-bootstrap-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.372814 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1ca6eb9-17b6-4016-b586-58e171977e99-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.372835 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.372882 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1ca6eb9-17b6-4016-b586-58e171977e99-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.372934 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.372989 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1ca6eb9-17b6-4016-b586-58e171977e99-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.373021 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.373050 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: 
\"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.373099 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1ca6eb9-17b6-4016-b586-58e171977e99-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.373133 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grxkq\" (UniqueName: \"kubernetes.io/projected/f1ca6eb9-17b6-4016-b586-58e171977e99-kube-api-access-grxkq\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.373150 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.475497 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1ca6eb9-17b6-4016-b586-58e171977e99-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.475881 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.475969 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1ca6eb9-17b6-4016-b586-58e171977e99-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.476012 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 
07:12:22.476050 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.476120 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1ca6eb9-17b6-4016-b586-58e171977e99-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.476159 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grxkq\" (UniqueName: \"kubernetes.io/projected/f1ca6eb9-17b6-4016-b586-58e171977e99-kube-api-access-grxkq\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.476184 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.476250 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.476266 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.476351 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.476416 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" 
(UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.476454 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1ca6eb9-17b6-4016-b586-58e171977e99-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.476489 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.482306 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1ca6eb9-17b6-4016-b586-58e171977e99-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.482413 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.482631 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.484678 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1ca6eb9-17b6-4016-b586-58e171977e99-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.485196 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.485296 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.485477 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.486204 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1ca6eb9-17b6-4016-b586-58e171977e99-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.487737 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.489816 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1ca6eb9-17b6-4016-b586-58e171977e99-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.492052 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.495015 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grxkq\" (UniqueName: \"kubernetes.io/projected/f1ca6eb9-17b6-4016-b586-58e171977e99-kube-api-access-grxkq\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.495036 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.496056 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jj96v\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:22 crc kubenswrapper[4681]: I1123 07:12:22.661174 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:23 crc kubenswrapper[4681]: I1123 07:12:23.145581 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v"] Nov 23 07:12:23 crc kubenswrapper[4681]: I1123 07:12:23.307366 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" event={"ID":"f1ca6eb9-17b6-4016-b586-58e171977e99","Type":"ContainerStarted","Data":"c353a1e4266aa05ba83e3d112b6acc3bd9b756126628fafc46e0fcb0f1b0dd61"} Nov 23 07:12:23 crc kubenswrapper[4681]: I1123 07:12:23.629953 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 07:12:24 crc kubenswrapper[4681]: I1123 07:12:24.317651 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" event={"ID":"f1ca6eb9-17b6-4016-b586-58e171977e99","Type":"ContainerStarted","Data":"3c9af9691eee8fa36f1e7c8f9c575de31167a244cb920585579b42f3b8c21d33"} Nov 23 07:12:28 crc kubenswrapper[4681]: I1123 07:12:28.252096 4681 scope.go:117] "RemoveContainer" containerID="a5380963080b6fe6bf2216624264d97b2ea5554bfe17e9b170d2c2b9f9ced66c" Nov 23 07:12:28 crc kubenswrapper[4681]: E1123 07:12:28.253723 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:12:33 crc kubenswrapper[4681]: I1123 07:12:33.028055 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" podStartSLOduration=10.552770756 podStartE2EDuration="11.028036406s" podCreationTimestamp="2025-11-23 07:12:22 +0000 UTC" firstStartedPulling="2025-11-23 07:12:23.152491699 +0000 UTC m=+1680.222000936" lastFinishedPulling="2025-11-23 07:12:23.627757348 +0000 UTC m=+1680.697266586" observedRunningTime="2025-11-23 07:12:24.33925695 +0000 UTC m=+1681.408766188" watchObservedRunningTime="2025-11-23 07:12:33.028036406 +0000 UTC m=+1690.097545643" Nov 23 07:12:33 crc kubenswrapper[4681]: I1123 07:12:33.029745 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-px75c"] Nov 23 07:12:33 crc kubenswrapper[4681]: I1123 07:12:33.035808 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-px75c"] Nov 23 07:12:33 crc kubenswrapper[4681]: I1123 07:12:33.267237 4681 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="5461efc5-e9c2-4a64-a74d-8db6df47c452" path="/var/lib/kubelet/pods/5461efc5-e9c2-4a64-a74d-8db6df47c452/volumes" Nov 23 07:12:42 crc kubenswrapper[4681]: I1123 07:12:42.252320 4681 scope.go:117] "RemoveContainer" containerID="a5380963080b6fe6bf2216624264d97b2ea5554bfe17e9b170d2c2b9f9ced66c" Nov 23 07:12:42 crc kubenswrapper[4681]: E1123 07:12:42.253065 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:12:45 crc kubenswrapper[4681]: I1123 07:12:45.935233 4681 scope.go:117] "RemoveContainer" containerID="78ebc7f0561d1808407ceae2ed83d51282c96af68e61a7baf50feba9c9957090" Nov 23 07:12:45 crc kubenswrapper[4681]: I1123 07:12:45.974374 4681 scope.go:117] "RemoveContainer" containerID="183b917c45087d9603a7ee2e288f12a5785273ec6893fe5851e96e98dbbba738" Nov 23 07:12:46 crc kubenswrapper[4681]: I1123 07:12:46.011193 4681 scope.go:117] "RemoveContainer" containerID="d8444ffdf1771f0db5e1c5e8e110496dc663776278c07287adcf74f78a448a9f" Nov 23 07:12:52 crc kubenswrapper[4681]: I1123 07:12:52.522638 4681 generic.go:334] "Generic (PLEG): container finished" podID="f1ca6eb9-17b6-4016-b586-58e171977e99" containerID="3c9af9691eee8fa36f1e7c8f9c575de31167a244cb920585579b42f3b8c21d33" exitCode=0 Nov 23 07:12:52 crc kubenswrapper[4681]: I1123 07:12:52.522723 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" event={"ID":"f1ca6eb9-17b6-4016-b586-58e171977e99","Type":"ContainerDied","Data":"3c9af9691eee8fa36f1e7c8f9c575de31167a244cb920585579b42f3b8c21d33"} Nov 23 07:12:53 crc kubenswrapper[4681]: I1123 07:12:53.873452 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.029146 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-repo-setup-combined-ca-bundle\") pod \"f1ca6eb9-17b6-4016-b586-58e171977e99\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.029199 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-nova-combined-ca-bundle\") pod \"f1ca6eb9-17b6-4016-b586-58e171977e99\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.029225 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-inventory\") pod \"f1ca6eb9-17b6-4016-b586-58e171977e99\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.029256 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-bootstrap-combined-ca-bundle\") pod \"f1ca6eb9-17b6-4016-b586-58e171977e99\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.029278 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1ca6eb9-17b6-4016-b586-58e171977e99-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"f1ca6eb9-17b6-4016-b586-58e171977e99\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.029316 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1ca6eb9-17b6-4016-b586-58e171977e99-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"f1ca6eb9-17b6-4016-b586-58e171977e99\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.029340 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-libvirt-combined-ca-bundle\") pod \"f1ca6eb9-17b6-4016-b586-58e171977e99\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.029369 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-ssh-key\") pod \"f1ca6eb9-17b6-4016-b586-58e171977e99\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.029392 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-ovn-combined-ca-bundle\") pod \"f1ca6eb9-17b6-4016-b586-58e171977e99\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.029415 
4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grxkq\" (UniqueName: \"kubernetes.io/projected/f1ca6eb9-17b6-4016-b586-58e171977e99-kube-api-access-grxkq\") pod \"f1ca6eb9-17b6-4016-b586-58e171977e99\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.029447 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1ca6eb9-17b6-4016-b586-58e171977e99-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"f1ca6eb9-17b6-4016-b586-58e171977e99\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.029499 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-telemetry-combined-ca-bundle\") pod \"f1ca6eb9-17b6-4016-b586-58e171977e99\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.029526 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-neutron-metadata-combined-ca-bundle\") pod \"f1ca6eb9-17b6-4016-b586-58e171977e99\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.029556 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1ca6eb9-17b6-4016-b586-58e171977e99-openstack-edpm-ipam-ovn-default-certs-0\") pod \"f1ca6eb9-17b6-4016-b586-58e171977e99\" (UID: \"f1ca6eb9-17b6-4016-b586-58e171977e99\") " Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.035353 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "f1ca6eb9-17b6-4016-b586-58e171977e99" (UID: "f1ca6eb9-17b6-4016-b586-58e171977e99"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.035380 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1ca6eb9-17b6-4016-b586-58e171977e99-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "f1ca6eb9-17b6-4016-b586-58e171977e99" (UID: "f1ca6eb9-17b6-4016-b586-58e171977e99"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.035751 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1ca6eb9-17b6-4016-b586-58e171977e99-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "f1ca6eb9-17b6-4016-b586-58e171977e99" (UID: "f1ca6eb9-17b6-4016-b586-58e171977e99"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.037250 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "f1ca6eb9-17b6-4016-b586-58e171977e99" (UID: "f1ca6eb9-17b6-4016-b586-58e171977e99"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.038077 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "f1ca6eb9-17b6-4016-b586-58e171977e99" (UID: "f1ca6eb9-17b6-4016-b586-58e171977e99"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.038481 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "f1ca6eb9-17b6-4016-b586-58e171977e99" (UID: "f1ca6eb9-17b6-4016-b586-58e171977e99"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.038553 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1ca6eb9-17b6-4016-b586-58e171977e99-kube-api-access-grxkq" (OuterVolumeSpecName: "kube-api-access-grxkq") pod "f1ca6eb9-17b6-4016-b586-58e171977e99" (UID: "f1ca6eb9-17b6-4016-b586-58e171977e99"). InnerVolumeSpecName "kube-api-access-grxkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.039342 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "f1ca6eb9-17b6-4016-b586-58e171977e99" (UID: "f1ca6eb9-17b6-4016-b586-58e171977e99"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.039526 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "f1ca6eb9-17b6-4016-b586-58e171977e99" (UID: "f1ca6eb9-17b6-4016-b586-58e171977e99"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.041145 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1ca6eb9-17b6-4016-b586-58e171977e99-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "f1ca6eb9-17b6-4016-b586-58e171977e99" (UID: "f1ca6eb9-17b6-4016-b586-58e171977e99"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.041798 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "f1ca6eb9-17b6-4016-b586-58e171977e99" (UID: "f1ca6eb9-17b6-4016-b586-58e171977e99"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.042837 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1ca6eb9-17b6-4016-b586-58e171977e99-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "f1ca6eb9-17b6-4016-b586-58e171977e99" (UID: "f1ca6eb9-17b6-4016-b586-58e171977e99"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.058094 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f1ca6eb9-17b6-4016-b586-58e171977e99" (UID: "f1ca6eb9-17b6-4016-b586-58e171977e99"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.060642 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-inventory" (OuterVolumeSpecName: "inventory") pod "f1ca6eb9-17b6-4016-b586-58e171977e99" (UID: "f1ca6eb9-17b6-4016-b586-58e171977e99"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.131925 4681 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1ca6eb9-17b6-4016-b586-58e171977e99-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.132152 4681 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.132164 4681 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.132174 4681 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.132317 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grxkq\" (UniqueName: \"kubernetes.io/projected/f1ca6eb9-17b6-4016-b586-58e171977e99-kube-api-access-grxkq\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.132328 4681 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1ca6eb9-17b6-4016-b586-58e171977e99-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.132338 4681 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.132349 4681 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.132358 4681 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1ca6eb9-17b6-4016-b586-58e171977e99-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.132366 4681 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.132375 4681 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.132383 4681 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-inventory\") on 
node \"crc\" DevicePath \"\"" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.132391 4681 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f1ca6eb9-17b6-4016-b586-58e171977e99-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.132400 4681 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ca6eb9-17b6-4016-b586-58e171977e99-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.539610 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" event={"ID":"f1ca6eb9-17b6-4016-b586-58e171977e99","Type":"ContainerDied","Data":"c353a1e4266aa05ba83e3d112b6acc3bd9b756126628fafc46e0fcb0f1b0dd61"} Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.539654 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c353a1e4266aa05ba83e3d112b6acc3bd9b756126628fafc46e0fcb0f1b0dd61" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.539660 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jj96v" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.635624 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-5hmdg"] Nov 23 07:12:54 crc kubenswrapper[4681]: E1123 07:12:54.636286 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1ca6eb9-17b6-4016-b586-58e171977e99" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.636324 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1ca6eb9-17b6-4016-b586-58e171977e99" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.636626 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1ca6eb9-17b6-4016-b586-58e171977e99" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.637875 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5hmdg" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.642563 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-5hmdg"] Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.642691 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.642733 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rchgk" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.642688 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.642930 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.642941 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.744887 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/185232c9-eee4-48ca-92bf-5bf0d3485853-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5hmdg\" (UID: \"185232c9-eee4-48ca-92bf-5bf0d3485853\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5hmdg" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.745121 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/185232c9-eee4-48ca-92bf-5bf0d3485853-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5hmdg\" (UID: \"185232c9-eee4-48ca-92bf-5bf0d3485853\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5hmdg" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.745351 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/185232c9-eee4-48ca-92bf-5bf0d3485853-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5hmdg\" (UID: \"185232c9-eee4-48ca-92bf-5bf0d3485853\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5hmdg" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.745424 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zg95\" (UniqueName: \"kubernetes.io/projected/185232c9-eee4-48ca-92bf-5bf0d3485853-kube-api-access-9zg95\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5hmdg\" (UID: \"185232c9-eee4-48ca-92bf-5bf0d3485853\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5hmdg" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.745516 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/185232c9-eee4-48ca-92bf-5bf0d3485853-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5hmdg\" (UID: \"185232c9-eee4-48ca-92bf-5bf0d3485853\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5hmdg" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.847377 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/185232c9-eee4-48ca-92bf-5bf0d3485853-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5hmdg\" (UID: \"185232c9-eee4-48ca-92bf-5bf0d3485853\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5hmdg" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.847503 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zg95\" (UniqueName: \"kubernetes.io/projected/185232c9-eee4-48ca-92bf-5bf0d3485853-kube-api-access-9zg95\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5hmdg\" (UID: \"185232c9-eee4-48ca-92bf-5bf0d3485853\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5hmdg" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.847572 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/185232c9-eee4-48ca-92bf-5bf0d3485853-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5hmdg\" (UID: \"185232c9-eee4-48ca-92bf-5bf0d3485853\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5hmdg" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.847636 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/185232c9-eee4-48ca-92bf-5bf0d3485853-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5hmdg\" (UID: \"185232c9-eee4-48ca-92bf-5bf0d3485853\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5hmdg" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.847709 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/185232c9-eee4-48ca-92bf-5bf0d3485853-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5hmdg\" (UID: \"185232c9-eee4-48ca-92bf-5bf0d3485853\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5hmdg" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.848668 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/185232c9-eee4-48ca-92bf-5bf0d3485853-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5hmdg\" (UID: \"185232c9-eee4-48ca-92bf-5bf0d3485853\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5hmdg" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.851660 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/185232c9-eee4-48ca-92bf-5bf0d3485853-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5hmdg\" (UID: \"185232c9-eee4-48ca-92bf-5bf0d3485853\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5hmdg" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.852056 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/185232c9-eee4-48ca-92bf-5bf0d3485853-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5hmdg\" (UID: \"185232c9-eee4-48ca-92bf-5bf0d3485853\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5hmdg" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.853199 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/185232c9-eee4-48ca-92bf-5bf0d3485853-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5hmdg\" (UID: \"185232c9-eee4-48ca-92bf-5bf0d3485853\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5hmdg" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.866724 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zg95\" (UniqueName: \"kubernetes.io/projected/185232c9-eee4-48ca-92bf-5bf0d3485853-kube-api-access-9zg95\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5hmdg\" (UID: \"185232c9-eee4-48ca-92bf-5bf0d3485853\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5hmdg" Nov 23 07:12:54 crc kubenswrapper[4681]: I1123 07:12:54.956011 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5hmdg" Nov 23 07:12:55 crc kubenswrapper[4681]: I1123 07:12:55.252266 4681 scope.go:117] "RemoveContainer" containerID="a5380963080b6fe6bf2216624264d97b2ea5554bfe17e9b170d2c2b9f9ced66c" Nov 23 07:12:55 crc kubenswrapper[4681]: E1123 07:12:55.252765 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:12:55 crc kubenswrapper[4681]: I1123 07:12:55.398171 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-5hmdg"] Nov 23 07:12:55 crc kubenswrapper[4681]: I1123 07:12:55.551037 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5hmdg" event={"ID":"185232c9-eee4-48ca-92bf-5bf0d3485853","Type":"ContainerStarted","Data":"dcfa437113ac7bc10cfbfc11657b5ed39f7e51dc413d16cd4a50536ceef2fa4e"} Nov 23 07:12:56 crc kubenswrapper[4681]: I1123 07:12:56.559680 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5hmdg" event={"ID":"185232c9-eee4-48ca-92bf-5bf0d3485853","Type":"ContainerStarted","Data":"ddf17ad483329fb686cd8716493e3f44a7afccbeb49e2eef9e908f672353b13b"} Nov 23 07:12:56 crc kubenswrapper[4681]: I1123 07:12:56.573810 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5hmdg" podStartSLOduration=2.007152943 podStartE2EDuration="2.573795212s" podCreationTimestamp="2025-11-23 07:12:54 +0000 UTC" firstStartedPulling="2025-11-23 07:12:55.405172293 +0000 UTC m=+1712.474681529" lastFinishedPulling="2025-11-23 07:12:55.97181456 +0000 UTC m=+1713.041323798" observedRunningTime="2025-11-23 07:12:56.572423023 +0000 UTC m=+1713.641932259" watchObservedRunningTime="2025-11-23 07:12:56.573795212 +0000 UTC m=+1713.643304448" Nov 23 07:13:06 crc kubenswrapper[4681]: I1123 07:13:06.252521 4681 scope.go:117] "RemoveContainer" containerID="a5380963080b6fe6bf2216624264d97b2ea5554bfe17e9b170d2c2b9f9ced66c" Nov 23 07:13:06 crc kubenswrapper[4681]: E1123 07:13:06.253519 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:13:19 crc 
kubenswrapper[4681]: I1123 07:13:19.251969 4681 scope.go:117] "RemoveContainer" containerID="a5380963080b6fe6bf2216624264d97b2ea5554bfe17e9b170d2c2b9f9ced66c" Nov 23 07:13:19 crc kubenswrapper[4681]: E1123 07:13:19.252963 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:13:34 crc kubenswrapper[4681]: I1123 07:13:34.252182 4681 scope.go:117] "RemoveContainer" containerID="a5380963080b6fe6bf2216624264d97b2ea5554bfe17e9b170d2c2b9f9ced66c" Nov 23 07:13:34 crc kubenswrapper[4681]: E1123 07:13:34.253176 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:13:43 crc kubenswrapper[4681]: I1123 07:13:43.931926 4681 generic.go:334] "Generic (PLEG): container finished" podID="185232c9-eee4-48ca-92bf-5bf0d3485853" containerID="ddf17ad483329fb686cd8716493e3f44a7afccbeb49e2eef9e908f672353b13b" exitCode=0 Nov 23 07:13:43 crc kubenswrapper[4681]: I1123 07:13:43.932002 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5hmdg" event={"ID":"185232c9-eee4-48ca-92bf-5bf0d3485853","Type":"ContainerDied","Data":"ddf17ad483329fb686cd8716493e3f44a7afccbeb49e2eef9e908f672353b13b"} Nov 23 07:13:45 crc kubenswrapper[4681]: I1123 07:13:45.254706 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5hmdg" Nov 23 07:13:45 crc kubenswrapper[4681]: I1123 07:13:45.289377 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/185232c9-eee4-48ca-92bf-5bf0d3485853-ssh-key\") pod \"185232c9-eee4-48ca-92bf-5bf0d3485853\" (UID: \"185232c9-eee4-48ca-92bf-5bf0d3485853\") " Nov 23 07:13:45 crc kubenswrapper[4681]: I1123 07:13:45.289419 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/185232c9-eee4-48ca-92bf-5bf0d3485853-inventory\") pod \"185232c9-eee4-48ca-92bf-5bf0d3485853\" (UID: \"185232c9-eee4-48ca-92bf-5bf0d3485853\") " Nov 23 07:13:45 crc kubenswrapper[4681]: I1123 07:13:45.289473 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zg95\" (UniqueName: \"kubernetes.io/projected/185232c9-eee4-48ca-92bf-5bf0d3485853-kube-api-access-9zg95\") pod \"185232c9-eee4-48ca-92bf-5bf0d3485853\" (UID: \"185232c9-eee4-48ca-92bf-5bf0d3485853\") " Nov 23 07:13:45 crc kubenswrapper[4681]: I1123 07:13:45.289503 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/185232c9-eee4-48ca-92bf-5bf0d3485853-ovncontroller-config-0\") pod \"185232c9-eee4-48ca-92bf-5bf0d3485853\" (UID: \"185232c9-eee4-48ca-92bf-5bf0d3485853\") " Nov 23 07:13:45 crc kubenswrapper[4681]: I1123 07:13:45.289589 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/185232c9-eee4-48ca-92bf-5bf0d3485853-ovn-combined-ca-bundle\") pod \"185232c9-eee4-48ca-92bf-5bf0d3485853\" (UID: \"185232c9-eee4-48ca-92bf-5bf0d3485853\") " Nov 23 07:13:45 crc kubenswrapper[4681]: I1123 07:13:45.293525 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/185232c9-eee4-48ca-92bf-5bf0d3485853-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "185232c9-eee4-48ca-92bf-5bf0d3485853" (UID: "185232c9-eee4-48ca-92bf-5bf0d3485853"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:45 crc kubenswrapper[4681]: I1123 07:13:45.307879 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/185232c9-eee4-48ca-92bf-5bf0d3485853-kube-api-access-9zg95" (OuterVolumeSpecName: "kube-api-access-9zg95") pod "185232c9-eee4-48ca-92bf-5bf0d3485853" (UID: "185232c9-eee4-48ca-92bf-5bf0d3485853"). InnerVolumeSpecName "kube-api-access-9zg95". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:13:45 crc kubenswrapper[4681]: I1123 07:13:45.312026 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/185232c9-eee4-48ca-92bf-5bf0d3485853-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "185232c9-eee4-48ca-92bf-5bf0d3485853" (UID: "185232c9-eee4-48ca-92bf-5bf0d3485853"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:13:45 crc kubenswrapper[4681]: I1123 07:13:45.315154 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/185232c9-eee4-48ca-92bf-5bf0d3485853-inventory" (OuterVolumeSpecName: "inventory") pod "185232c9-eee4-48ca-92bf-5bf0d3485853" (UID: "185232c9-eee4-48ca-92bf-5bf0d3485853"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:45 crc kubenswrapper[4681]: I1123 07:13:45.322482 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/185232c9-eee4-48ca-92bf-5bf0d3485853-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "185232c9-eee4-48ca-92bf-5bf0d3485853" (UID: "185232c9-eee4-48ca-92bf-5bf0d3485853"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:45 crc kubenswrapper[4681]: I1123 07:13:45.391443 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zg95\" (UniqueName: \"kubernetes.io/projected/185232c9-eee4-48ca-92bf-5bf0d3485853-kube-api-access-9zg95\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:45 crc kubenswrapper[4681]: I1123 07:13:45.391480 4681 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/185232c9-eee4-48ca-92bf-5bf0d3485853-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:45 crc kubenswrapper[4681]: I1123 07:13:45.391490 4681 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/185232c9-eee4-48ca-92bf-5bf0d3485853-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:45 crc kubenswrapper[4681]: I1123 07:13:45.391499 4681 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/185232c9-eee4-48ca-92bf-5bf0d3485853-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:45 crc kubenswrapper[4681]: I1123 07:13:45.391507 4681 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/185232c9-eee4-48ca-92bf-5bf0d3485853-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:45 crc kubenswrapper[4681]: I1123 07:13:45.944767 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5hmdg" event={"ID":"185232c9-eee4-48ca-92bf-5bf0d3485853","Type":"ContainerDied","Data":"dcfa437113ac7bc10cfbfc11657b5ed39f7e51dc413d16cd4a50536ceef2fa4e"} Nov 23 07:13:45 crc kubenswrapper[4681]: I1123 07:13:45.944968 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcfa437113ac7bc10cfbfc11657b5ed39f7e51dc413d16cd4a50536ceef2fa4e" Nov 23 07:13:45 crc kubenswrapper[4681]: I1123 07:13:45.944821 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5hmdg" Nov 23 07:13:46 crc kubenswrapper[4681]: I1123 07:13:46.025481 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth"] Nov 23 07:13:46 crc kubenswrapper[4681]: E1123 07:13:46.025837 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="185232c9-eee4-48ca-92bf-5bf0d3485853" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 23 07:13:46 crc kubenswrapper[4681]: I1123 07:13:46.025854 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="185232c9-eee4-48ca-92bf-5bf0d3485853" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 23 07:13:46 crc kubenswrapper[4681]: I1123 07:13:46.026174 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="185232c9-eee4-48ca-92bf-5bf0d3485853" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 23 07:13:46 crc kubenswrapper[4681]: I1123 07:13:46.027017 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth" Nov 23 07:13:46 crc kubenswrapper[4681]: I1123 07:13:46.028279 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 23 07:13:46 crc kubenswrapper[4681]: I1123 07:13:46.029370 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 07:13:46 crc kubenswrapper[4681]: I1123 07:13:46.029601 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rchgk" Nov 23 07:13:46 crc kubenswrapper[4681]: I1123 07:13:46.029717 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 07:13:46 crc kubenswrapper[4681]: I1123 07:13:46.030521 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 07:13:46 crc kubenswrapper[4681]: I1123 07:13:46.030692 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 23 07:13:46 crc kubenswrapper[4681]: I1123 07:13:46.048533 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth"] Nov 23 07:13:46 crc kubenswrapper[4681]: I1123 07:13:46.101612 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1adfee73-085f-4196-9b2d-e0a8e5d8a571-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth\" (UID: \"1adfee73-085f-4196-9b2d-e0a8e5d8a571\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth" Nov 23 07:13:46 crc kubenswrapper[4681]: I1123 07:13:46.101677 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1adfee73-085f-4196-9b2d-e0a8e5d8a571-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth\" (UID: \"1adfee73-085f-4196-9b2d-e0a8e5d8a571\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth" Nov 23 07:13:46 crc kubenswrapper[4681]: I1123 07:13:46.101698 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn86t\" (UniqueName: 
\"kubernetes.io/projected/1adfee73-085f-4196-9b2d-e0a8e5d8a571-kube-api-access-zn86t\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth\" (UID: \"1adfee73-085f-4196-9b2d-e0a8e5d8a571\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth" Nov 23 07:13:46 crc kubenswrapper[4681]: I1123 07:13:46.101728 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1adfee73-085f-4196-9b2d-e0a8e5d8a571-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth\" (UID: \"1adfee73-085f-4196-9b2d-e0a8e5d8a571\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth" Nov 23 07:13:46 crc kubenswrapper[4681]: I1123 07:13:46.101787 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/1adfee73-085f-4196-9b2d-e0a8e5d8a571-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth\" (UID: \"1adfee73-085f-4196-9b2d-e0a8e5d8a571\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth" Nov 23 07:13:46 crc kubenswrapper[4681]: I1123 07:13:46.101824 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/1adfee73-085f-4196-9b2d-e0a8e5d8a571-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth\" (UID: \"1adfee73-085f-4196-9b2d-e0a8e5d8a571\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth" Nov 23 07:13:46 crc kubenswrapper[4681]: I1123 07:13:46.203326 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1adfee73-085f-4196-9b2d-e0a8e5d8a571-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth\" (UID: \"1adfee73-085f-4196-9b2d-e0a8e5d8a571\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth" Nov 23 07:13:46 crc kubenswrapper[4681]: I1123 07:13:46.203408 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/1adfee73-085f-4196-9b2d-e0a8e5d8a571-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth\" (UID: \"1adfee73-085f-4196-9b2d-e0a8e5d8a571\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth" Nov 23 07:13:46 crc kubenswrapper[4681]: I1123 07:13:46.203448 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/1adfee73-085f-4196-9b2d-e0a8e5d8a571-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth\" (UID: \"1adfee73-085f-4196-9b2d-e0a8e5d8a571\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth" Nov 23 07:13:46 crc kubenswrapper[4681]: I1123 07:13:46.203507 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1adfee73-085f-4196-9b2d-e0a8e5d8a571-ssh-key\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth\" (UID: \"1adfee73-085f-4196-9b2d-e0a8e5d8a571\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth" Nov 23 07:13:46 crc kubenswrapper[4681]: I1123 07:13:46.203559 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1adfee73-085f-4196-9b2d-e0a8e5d8a571-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth\" (UID: \"1adfee73-085f-4196-9b2d-e0a8e5d8a571\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth" Nov 23 07:13:46 crc kubenswrapper[4681]: I1123 07:13:46.203577 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn86t\" (UniqueName: \"kubernetes.io/projected/1adfee73-085f-4196-9b2d-e0a8e5d8a571-kube-api-access-zn86t\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth\" (UID: \"1adfee73-085f-4196-9b2d-e0a8e5d8a571\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth" Nov 23 07:13:46 crc kubenswrapper[4681]: I1123 07:13:46.206311 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1adfee73-085f-4196-9b2d-e0a8e5d8a571-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth\" (UID: \"1adfee73-085f-4196-9b2d-e0a8e5d8a571\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth" Nov 23 07:13:46 crc kubenswrapper[4681]: I1123 07:13:46.206321 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1adfee73-085f-4196-9b2d-e0a8e5d8a571-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth\" (UID: \"1adfee73-085f-4196-9b2d-e0a8e5d8a571\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth" Nov 23 07:13:46 crc kubenswrapper[4681]: I1123 07:13:46.206781 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/1adfee73-085f-4196-9b2d-e0a8e5d8a571-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth\" (UID: \"1adfee73-085f-4196-9b2d-e0a8e5d8a571\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth" Nov 23 07:13:46 crc kubenswrapper[4681]: I1123 07:13:46.207099 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/1adfee73-085f-4196-9b2d-e0a8e5d8a571-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth\" (UID: \"1adfee73-085f-4196-9b2d-e0a8e5d8a571\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth" Nov 23 07:13:46 crc kubenswrapper[4681]: I1123 07:13:46.207337 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1adfee73-085f-4196-9b2d-e0a8e5d8a571-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth\" (UID: \"1adfee73-085f-4196-9b2d-e0a8e5d8a571\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth" Nov 23 07:13:46 crc kubenswrapper[4681]: I1123 07:13:46.216449 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn86t\" 
(UniqueName: \"kubernetes.io/projected/1adfee73-085f-4196-9b2d-e0a8e5d8a571-kube-api-access-zn86t\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth\" (UID: \"1adfee73-085f-4196-9b2d-e0a8e5d8a571\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth" Nov 23 07:13:46 crc kubenswrapper[4681]: I1123 07:13:46.340598 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth" Nov 23 07:13:46 crc kubenswrapper[4681]: I1123 07:13:46.797509 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth"] Nov 23 07:13:46 crc kubenswrapper[4681]: I1123 07:13:46.951505 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth" event={"ID":"1adfee73-085f-4196-9b2d-e0a8e5d8a571","Type":"ContainerStarted","Data":"a5bc354235305fd151ed1a582d01ba7236805d15e83e753cf260241121e24090"} Nov 23 07:13:47 crc kubenswrapper[4681]: I1123 07:13:47.959111 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth" event={"ID":"1adfee73-085f-4196-9b2d-e0a8e5d8a571","Type":"ContainerStarted","Data":"17c436a2c4086e99c8a487981f5026854ebb77ed663957a3a5b9f6ac9d71f649"} Nov 23 07:13:47 crc kubenswrapper[4681]: I1123 07:13:47.972829 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth" podStartSLOduration=1.473488209 podStartE2EDuration="1.972812907s" podCreationTimestamp="2025-11-23 07:13:46 +0000 UTC" firstStartedPulling="2025-11-23 07:13:46.792352798 +0000 UTC m=+1763.861862035" lastFinishedPulling="2025-11-23 07:13:47.291677496 +0000 UTC m=+1764.361186733" observedRunningTime="2025-11-23 07:13:47.970629018 +0000 UTC m=+1765.040138254" watchObservedRunningTime="2025-11-23 07:13:47.972812907 +0000 UTC m=+1765.042322145" Nov 23 07:13:49 crc kubenswrapper[4681]: I1123 07:13:49.251646 4681 scope.go:117] "RemoveContainer" containerID="a5380963080b6fe6bf2216624264d97b2ea5554bfe17e9b170d2c2b9f9ced66c" Nov 23 07:13:49 crc kubenswrapper[4681]: E1123 07:13:49.252574 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:14:02 crc kubenswrapper[4681]: I1123 07:14:02.251742 4681 scope.go:117] "RemoveContainer" containerID="a5380963080b6fe6bf2216624264d97b2ea5554bfe17e9b170d2c2b9f9ced66c" Nov 23 07:14:02 crc kubenswrapper[4681]: E1123 07:14:02.253715 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:14:16 crc kubenswrapper[4681]: I1123 07:14:16.252148 4681 scope.go:117] "RemoveContainer" 
containerID="a5380963080b6fe6bf2216624264d97b2ea5554bfe17e9b170d2c2b9f9ced66c" Nov 23 07:14:16 crc kubenswrapper[4681]: E1123 07:14:16.252936 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:14:24 crc kubenswrapper[4681]: I1123 07:14:24.252338 4681 generic.go:334] "Generic (PLEG): container finished" podID="1adfee73-085f-4196-9b2d-e0a8e5d8a571" containerID="17c436a2c4086e99c8a487981f5026854ebb77ed663957a3a5b9f6ac9d71f649" exitCode=0 Nov 23 07:14:24 crc kubenswrapper[4681]: I1123 07:14:24.252402 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth" event={"ID":"1adfee73-085f-4196-9b2d-e0a8e5d8a571","Type":"ContainerDied","Data":"17c436a2c4086e99c8a487981f5026854ebb77ed663957a3a5b9f6ac9d71f649"} Nov 23 07:14:25 crc kubenswrapper[4681]: I1123 07:14:25.575041 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth" Nov 23 07:14:25 crc kubenswrapper[4681]: I1123 07:14:25.621698 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1adfee73-085f-4196-9b2d-e0a8e5d8a571-ssh-key\") pod \"1adfee73-085f-4196-9b2d-e0a8e5d8a571\" (UID: \"1adfee73-085f-4196-9b2d-e0a8e5d8a571\") " Nov 23 07:14:25 crc kubenswrapper[4681]: I1123 07:14:25.621862 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/1adfee73-085f-4196-9b2d-e0a8e5d8a571-nova-metadata-neutron-config-0\") pod \"1adfee73-085f-4196-9b2d-e0a8e5d8a571\" (UID: \"1adfee73-085f-4196-9b2d-e0a8e5d8a571\") " Nov 23 07:14:25 crc kubenswrapper[4681]: I1123 07:14:25.621893 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1adfee73-085f-4196-9b2d-e0a8e5d8a571-inventory\") pod \"1adfee73-085f-4196-9b2d-e0a8e5d8a571\" (UID: \"1adfee73-085f-4196-9b2d-e0a8e5d8a571\") " Nov 23 07:14:25 crc kubenswrapper[4681]: I1123 07:14:25.621962 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1adfee73-085f-4196-9b2d-e0a8e5d8a571-neutron-metadata-combined-ca-bundle\") pod \"1adfee73-085f-4196-9b2d-e0a8e5d8a571\" (UID: \"1adfee73-085f-4196-9b2d-e0a8e5d8a571\") " Nov 23 07:14:25 crc kubenswrapper[4681]: I1123 07:14:25.622030 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/1adfee73-085f-4196-9b2d-e0a8e5d8a571-neutron-ovn-metadata-agent-neutron-config-0\") pod \"1adfee73-085f-4196-9b2d-e0a8e5d8a571\" (UID: \"1adfee73-085f-4196-9b2d-e0a8e5d8a571\") " Nov 23 07:14:25 crc kubenswrapper[4681]: I1123 07:14:25.622125 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zn86t\" (UniqueName: \"kubernetes.io/projected/1adfee73-085f-4196-9b2d-e0a8e5d8a571-kube-api-access-zn86t\") pod 
\"1adfee73-085f-4196-9b2d-e0a8e5d8a571\" (UID: \"1adfee73-085f-4196-9b2d-e0a8e5d8a571\") " Nov 23 07:14:25 crc kubenswrapper[4681]: I1123 07:14:25.633606 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1adfee73-085f-4196-9b2d-e0a8e5d8a571-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "1adfee73-085f-4196-9b2d-e0a8e5d8a571" (UID: "1adfee73-085f-4196-9b2d-e0a8e5d8a571"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:14:25 crc kubenswrapper[4681]: I1123 07:14:25.640585 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1adfee73-085f-4196-9b2d-e0a8e5d8a571-kube-api-access-zn86t" (OuterVolumeSpecName: "kube-api-access-zn86t") pod "1adfee73-085f-4196-9b2d-e0a8e5d8a571" (UID: "1adfee73-085f-4196-9b2d-e0a8e5d8a571"). InnerVolumeSpecName "kube-api-access-zn86t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:14:25 crc kubenswrapper[4681]: I1123 07:14:25.645775 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1adfee73-085f-4196-9b2d-e0a8e5d8a571-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "1adfee73-085f-4196-9b2d-e0a8e5d8a571" (UID: "1adfee73-085f-4196-9b2d-e0a8e5d8a571"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:14:25 crc kubenswrapper[4681]: I1123 07:14:25.648749 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1adfee73-085f-4196-9b2d-e0a8e5d8a571-inventory" (OuterVolumeSpecName: "inventory") pod "1adfee73-085f-4196-9b2d-e0a8e5d8a571" (UID: "1adfee73-085f-4196-9b2d-e0a8e5d8a571"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:14:25 crc kubenswrapper[4681]: I1123 07:14:25.651742 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1adfee73-085f-4196-9b2d-e0a8e5d8a571-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1adfee73-085f-4196-9b2d-e0a8e5d8a571" (UID: "1adfee73-085f-4196-9b2d-e0a8e5d8a571"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:14:25 crc kubenswrapper[4681]: I1123 07:14:25.656635 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1adfee73-085f-4196-9b2d-e0a8e5d8a571-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "1adfee73-085f-4196-9b2d-e0a8e5d8a571" (UID: "1adfee73-085f-4196-9b2d-e0a8e5d8a571"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:14:25 crc kubenswrapper[4681]: I1123 07:14:25.723955 4681 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/1adfee73-085f-4196-9b2d-e0a8e5d8a571-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:25 crc kubenswrapper[4681]: I1123 07:14:25.723981 4681 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1adfee73-085f-4196-9b2d-e0a8e5d8a571-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:25 crc kubenswrapper[4681]: I1123 07:14:25.723992 4681 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1adfee73-085f-4196-9b2d-e0a8e5d8a571-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:25 crc kubenswrapper[4681]: I1123 07:14:25.724002 4681 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/1adfee73-085f-4196-9b2d-e0a8e5d8a571-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:25 crc kubenswrapper[4681]: I1123 07:14:25.724013 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zn86t\" (UniqueName: \"kubernetes.io/projected/1adfee73-085f-4196-9b2d-e0a8e5d8a571-kube-api-access-zn86t\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:25 crc kubenswrapper[4681]: I1123 07:14:25.724021 4681 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1adfee73-085f-4196-9b2d-e0a8e5d8a571-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:26 crc kubenswrapper[4681]: I1123 07:14:26.267846 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth" event={"ID":"1adfee73-085f-4196-9b2d-e0a8e5d8a571","Type":"ContainerDied","Data":"a5bc354235305fd151ed1a582d01ba7236805d15e83e753cf260241121e24090"} Nov 23 07:14:26 crc kubenswrapper[4681]: I1123 07:14:26.267885 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5bc354235305fd151ed1a582d01ba7236805d15e83e753cf260241121e24090" Nov 23 07:14:26 crc kubenswrapper[4681]: I1123 07:14:26.267950 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-57cth" Nov 23 07:14:26 crc kubenswrapper[4681]: I1123 07:14:26.352084 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-klw77"] Nov 23 07:14:26 crc kubenswrapper[4681]: E1123 07:14:26.352489 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1adfee73-085f-4196-9b2d-e0a8e5d8a571" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 23 07:14:26 crc kubenswrapper[4681]: I1123 07:14:26.352510 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="1adfee73-085f-4196-9b2d-e0a8e5d8a571" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 23 07:14:26 crc kubenswrapper[4681]: I1123 07:14:26.352712 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="1adfee73-085f-4196-9b2d-e0a8e5d8a571" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 23 07:14:26 crc kubenswrapper[4681]: I1123 07:14:26.353352 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-klw77" Nov 23 07:14:26 crc kubenswrapper[4681]: I1123 07:14:26.357037 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 07:14:26 crc kubenswrapper[4681]: I1123 07:14:26.357039 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 23 07:14:26 crc kubenswrapper[4681]: I1123 07:14:26.357069 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 07:14:26 crc kubenswrapper[4681]: I1123 07:14:26.357186 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rchgk" Nov 23 07:14:26 crc kubenswrapper[4681]: I1123 07:14:26.358398 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 07:14:26 crc kubenswrapper[4681]: I1123 07:14:26.367345 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-klw77"] Nov 23 07:14:26 crc kubenswrapper[4681]: I1123 07:14:26.437044 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8935c375-c36c-44cd-b318-52dab1b3e938-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-klw77\" (UID: \"8935c375-c36c-44cd-b318-52dab1b3e938\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-klw77" Nov 23 07:14:26 crc kubenswrapper[4681]: I1123 07:14:26.437093 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsd7f\" (UniqueName: \"kubernetes.io/projected/8935c375-c36c-44cd-b318-52dab1b3e938-kube-api-access-xsd7f\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-klw77\" (UID: \"8935c375-c36c-44cd-b318-52dab1b3e938\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-klw77" Nov 23 07:14:26 crc kubenswrapper[4681]: I1123 07:14:26.437159 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8935c375-c36c-44cd-b318-52dab1b3e938-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-klw77\" (UID: \"8935c375-c36c-44cd-b318-52dab1b3e938\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-klw77" Nov 23 07:14:26 crc kubenswrapper[4681]: I1123 07:14:26.437219 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8935c375-c36c-44cd-b318-52dab1b3e938-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-klw77\" (UID: \"8935c375-c36c-44cd-b318-52dab1b3e938\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-klw77" Nov 23 07:14:26 crc kubenswrapper[4681]: I1123 07:14:26.437273 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8935c375-c36c-44cd-b318-52dab1b3e938-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-klw77\" (UID: \"8935c375-c36c-44cd-b318-52dab1b3e938\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-klw77" Nov 23 07:14:26 crc kubenswrapper[4681]: I1123 07:14:26.538644 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key\" (UniqueName: \"kubernetes.io/secret/8935c375-c36c-44cd-b318-52dab1b3e938-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-klw77\" (UID: \"8935c375-c36c-44cd-b318-52dab1b3e938\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-klw77" Nov 23 07:14:26 crc kubenswrapper[4681]: I1123 07:14:26.538685 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsd7f\" (UniqueName: \"kubernetes.io/projected/8935c375-c36c-44cd-b318-52dab1b3e938-kube-api-access-xsd7f\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-klw77\" (UID: \"8935c375-c36c-44cd-b318-52dab1b3e938\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-klw77" Nov 23 07:14:26 crc kubenswrapper[4681]: I1123 07:14:26.538743 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8935c375-c36c-44cd-b318-52dab1b3e938-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-klw77\" (UID: \"8935c375-c36c-44cd-b318-52dab1b3e938\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-klw77" Nov 23 07:14:26 crc kubenswrapper[4681]: I1123 07:14:26.538798 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8935c375-c36c-44cd-b318-52dab1b3e938-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-klw77\" (UID: \"8935c375-c36c-44cd-b318-52dab1b3e938\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-klw77" Nov 23 07:14:26 crc kubenswrapper[4681]: I1123 07:14:26.538840 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8935c375-c36c-44cd-b318-52dab1b3e938-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-klw77\" (UID: \"8935c375-c36c-44cd-b318-52dab1b3e938\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-klw77" Nov 23 07:14:26 crc kubenswrapper[4681]: I1123 07:14:26.543537 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8935c375-c36c-44cd-b318-52dab1b3e938-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-klw77\" (UID: \"8935c375-c36c-44cd-b318-52dab1b3e938\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-klw77" Nov 23 07:14:26 crc kubenswrapper[4681]: I1123 07:14:26.544515 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8935c375-c36c-44cd-b318-52dab1b3e938-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-klw77\" (UID: \"8935c375-c36c-44cd-b318-52dab1b3e938\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-klw77" Nov 23 07:14:26 crc kubenswrapper[4681]: I1123 07:14:26.549911 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8935c375-c36c-44cd-b318-52dab1b3e938-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-klw77\" (UID: \"8935c375-c36c-44cd-b318-52dab1b3e938\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-klw77" Nov 23 07:14:26 crc kubenswrapper[4681]: I1123 07:14:26.552271 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8935c375-c36c-44cd-b318-52dab1b3e938-libvirt-combined-ca-bundle\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-klw77\" (UID: \"8935c375-c36c-44cd-b318-52dab1b3e938\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-klw77" Nov 23 07:14:26 crc kubenswrapper[4681]: I1123 07:14:26.555041 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsd7f\" (UniqueName: \"kubernetes.io/projected/8935c375-c36c-44cd-b318-52dab1b3e938-kube-api-access-xsd7f\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-klw77\" (UID: \"8935c375-c36c-44cd-b318-52dab1b3e938\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-klw77" Nov 23 07:14:26 crc kubenswrapper[4681]: I1123 07:14:26.679517 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-klw77" Nov 23 07:14:27 crc kubenswrapper[4681]: I1123 07:14:27.157098 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-klw77"] Nov 23 07:14:27 crc kubenswrapper[4681]: I1123 07:14:27.157872 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 07:14:27 crc kubenswrapper[4681]: I1123 07:14:27.279791 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-klw77" event={"ID":"8935c375-c36c-44cd-b318-52dab1b3e938","Type":"ContainerStarted","Data":"d4cb9b4738a5a2ea7970357b94a1a0731a4cceef99a717250ca57042f959aefd"} Nov 23 07:14:28 crc kubenswrapper[4681]: I1123 07:14:28.289638 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-klw77" event={"ID":"8935c375-c36c-44cd-b318-52dab1b3e938","Type":"ContainerStarted","Data":"80c915b68252e31f06df118b98fb0390a8f4aef14f7f4d048dc65b31bd372905"} Nov 23 07:14:28 crc kubenswrapper[4681]: I1123 07:14:28.310523 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-klw77" podStartSLOduration=1.761868005 podStartE2EDuration="2.310507672s" podCreationTimestamp="2025-11-23 07:14:26 +0000 UTC" firstStartedPulling="2025-11-23 07:14:27.157660642 +0000 UTC m=+1804.227169880" lastFinishedPulling="2025-11-23 07:14:27.70630031 +0000 UTC m=+1804.775809547" observedRunningTime="2025-11-23 07:14:28.309280086 +0000 UTC m=+1805.378789323" watchObservedRunningTime="2025-11-23 07:14:28.310507672 +0000 UTC m=+1805.380016908" Nov 23 07:14:30 crc kubenswrapper[4681]: I1123 07:14:30.251967 4681 scope.go:117] "RemoveContainer" containerID="a5380963080b6fe6bf2216624264d97b2ea5554bfe17e9b170d2c2b9f9ced66c" Nov 23 07:14:30 crc kubenswrapper[4681]: E1123 07:14:30.252483 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:14:44 crc kubenswrapper[4681]: I1123 07:14:44.252829 4681 scope.go:117] "RemoveContainer" containerID="a5380963080b6fe6bf2216624264d97b2ea5554bfe17e9b170d2c2b9f9ced66c" Nov 23 07:14:45 crc kubenswrapper[4681]: I1123 07:14:45.426570 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" 
event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerStarted","Data":"f8da7449317d4fedfd4d71fd5add670aef44436e65aa268d710d4cbf78c73d83"} Nov 23 07:15:00 crc kubenswrapper[4681]: I1123 07:15:00.140752 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398035-g8rvd"] Nov 23 07:15:00 crc kubenswrapper[4681]: I1123 07:15:00.142550 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-g8rvd" Nov 23 07:15:00 crc kubenswrapper[4681]: I1123 07:15:00.147184 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 07:15:00 crc kubenswrapper[4681]: I1123 07:15:00.147640 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 07:15:00 crc kubenswrapper[4681]: I1123 07:15:00.150417 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398035-g8rvd"] Nov 23 07:15:00 crc kubenswrapper[4681]: I1123 07:15:00.202891 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9ms5\" (UniqueName: \"kubernetes.io/projected/37bcdd31-b53b-4450-9d03-3ff00ed926f7-kube-api-access-n9ms5\") pod \"collect-profiles-29398035-g8rvd\" (UID: \"37bcdd31-b53b-4450-9d03-3ff00ed926f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-g8rvd" Nov 23 07:15:00 crc kubenswrapper[4681]: I1123 07:15:00.202968 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37bcdd31-b53b-4450-9d03-3ff00ed926f7-secret-volume\") pod \"collect-profiles-29398035-g8rvd\" (UID: \"37bcdd31-b53b-4450-9d03-3ff00ed926f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-g8rvd" Nov 23 07:15:00 crc kubenswrapper[4681]: I1123 07:15:00.202991 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37bcdd31-b53b-4450-9d03-3ff00ed926f7-config-volume\") pod \"collect-profiles-29398035-g8rvd\" (UID: \"37bcdd31-b53b-4450-9d03-3ff00ed926f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-g8rvd" Nov 23 07:15:00 crc kubenswrapper[4681]: I1123 07:15:00.304005 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9ms5\" (UniqueName: \"kubernetes.io/projected/37bcdd31-b53b-4450-9d03-3ff00ed926f7-kube-api-access-n9ms5\") pod \"collect-profiles-29398035-g8rvd\" (UID: \"37bcdd31-b53b-4450-9d03-3ff00ed926f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-g8rvd" Nov 23 07:15:00 crc kubenswrapper[4681]: I1123 07:15:00.304057 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37bcdd31-b53b-4450-9d03-3ff00ed926f7-secret-volume\") pod \"collect-profiles-29398035-g8rvd\" (UID: \"37bcdd31-b53b-4450-9d03-3ff00ed926f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-g8rvd" Nov 23 07:15:00 crc kubenswrapper[4681]: I1123 07:15:00.304072 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/37bcdd31-b53b-4450-9d03-3ff00ed926f7-config-volume\") pod \"collect-profiles-29398035-g8rvd\" (UID: \"37bcdd31-b53b-4450-9d03-3ff00ed926f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-g8rvd" Nov 23 07:15:00 crc kubenswrapper[4681]: I1123 07:15:00.305259 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37bcdd31-b53b-4450-9d03-3ff00ed926f7-config-volume\") pod \"collect-profiles-29398035-g8rvd\" (UID: \"37bcdd31-b53b-4450-9d03-3ff00ed926f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-g8rvd" Nov 23 07:15:00 crc kubenswrapper[4681]: I1123 07:15:00.309625 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37bcdd31-b53b-4450-9d03-3ff00ed926f7-secret-volume\") pod \"collect-profiles-29398035-g8rvd\" (UID: \"37bcdd31-b53b-4450-9d03-3ff00ed926f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-g8rvd" Nov 23 07:15:00 crc kubenswrapper[4681]: I1123 07:15:00.317192 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9ms5\" (UniqueName: \"kubernetes.io/projected/37bcdd31-b53b-4450-9d03-3ff00ed926f7-kube-api-access-n9ms5\") pod \"collect-profiles-29398035-g8rvd\" (UID: \"37bcdd31-b53b-4450-9d03-3ff00ed926f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-g8rvd" Nov 23 07:15:00 crc kubenswrapper[4681]: I1123 07:15:00.456416 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-g8rvd" Nov 23 07:15:00 crc kubenswrapper[4681]: I1123 07:15:00.860627 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398035-g8rvd"] Nov 23 07:15:00 crc kubenswrapper[4681]: W1123 07:15:00.862563 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37bcdd31_b53b_4450_9d03_3ff00ed926f7.slice/crio-fbca723b87070cea0685debba96473ad1618c9e24bd12b60563eed0029e973cf WatchSource:0}: Error finding container fbca723b87070cea0685debba96473ad1618c9e24bd12b60563eed0029e973cf: Status 404 returned error can't find the container with id fbca723b87070cea0685debba96473ad1618c9e24bd12b60563eed0029e973cf Nov 23 07:15:01 crc kubenswrapper[4681]: I1123 07:15:01.566071 4681 generic.go:334] "Generic (PLEG): container finished" podID="37bcdd31-b53b-4450-9d03-3ff00ed926f7" containerID="90cd91c064fd86bafa8c6a5225439ab396dcd9188adefcef8c8b3b5feb42594f" exitCode=0 Nov 23 07:15:01 crc kubenswrapper[4681]: I1123 07:15:01.566173 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-g8rvd" event={"ID":"37bcdd31-b53b-4450-9d03-3ff00ed926f7","Type":"ContainerDied","Data":"90cd91c064fd86bafa8c6a5225439ab396dcd9188adefcef8c8b3b5feb42594f"} Nov 23 07:15:01 crc kubenswrapper[4681]: I1123 07:15:01.566401 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-g8rvd" event={"ID":"37bcdd31-b53b-4450-9d03-3ff00ed926f7","Type":"ContainerStarted","Data":"fbca723b87070cea0685debba96473ad1618c9e24bd12b60563eed0029e973cf"} Nov 23 07:15:02 crc kubenswrapper[4681]: I1123 07:15:02.820400 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-g8rvd" Nov 23 07:15:02 crc kubenswrapper[4681]: I1123 07:15:02.946712 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37bcdd31-b53b-4450-9d03-3ff00ed926f7-config-volume\") pod \"37bcdd31-b53b-4450-9d03-3ff00ed926f7\" (UID: \"37bcdd31-b53b-4450-9d03-3ff00ed926f7\") " Nov 23 07:15:02 crc kubenswrapper[4681]: I1123 07:15:02.946893 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9ms5\" (UniqueName: \"kubernetes.io/projected/37bcdd31-b53b-4450-9d03-3ff00ed926f7-kube-api-access-n9ms5\") pod \"37bcdd31-b53b-4450-9d03-3ff00ed926f7\" (UID: \"37bcdd31-b53b-4450-9d03-3ff00ed926f7\") " Nov 23 07:15:02 crc kubenswrapper[4681]: I1123 07:15:02.947028 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37bcdd31-b53b-4450-9d03-3ff00ed926f7-secret-volume\") pod \"37bcdd31-b53b-4450-9d03-3ff00ed926f7\" (UID: \"37bcdd31-b53b-4450-9d03-3ff00ed926f7\") " Nov 23 07:15:02 crc kubenswrapper[4681]: I1123 07:15:02.947411 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37bcdd31-b53b-4450-9d03-3ff00ed926f7-config-volume" (OuterVolumeSpecName: "config-volume") pod "37bcdd31-b53b-4450-9d03-3ff00ed926f7" (UID: "37bcdd31-b53b-4450-9d03-3ff00ed926f7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:15:02 crc kubenswrapper[4681]: I1123 07:15:02.948548 4681 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37bcdd31-b53b-4450-9d03-3ff00ed926f7-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 07:15:02 crc kubenswrapper[4681]: I1123 07:15:02.953252 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37bcdd31-b53b-4450-9d03-3ff00ed926f7-kube-api-access-n9ms5" (OuterVolumeSpecName: "kube-api-access-n9ms5") pod "37bcdd31-b53b-4450-9d03-3ff00ed926f7" (UID: "37bcdd31-b53b-4450-9d03-3ff00ed926f7"). InnerVolumeSpecName "kube-api-access-n9ms5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:15:02 crc kubenswrapper[4681]: I1123 07:15:02.953523 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37bcdd31-b53b-4450-9d03-3ff00ed926f7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "37bcdd31-b53b-4450-9d03-3ff00ed926f7" (UID: "37bcdd31-b53b-4450-9d03-3ff00ed926f7"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:15:03 crc kubenswrapper[4681]: I1123 07:15:03.049925 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9ms5\" (UniqueName: \"kubernetes.io/projected/37bcdd31-b53b-4450-9d03-3ff00ed926f7-kube-api-access-n9ms5\") on node \"crc\" DevicePath \"\"" Nov 23 07:15:03 crc kubenswrapper[4681]: I1123 07:15:03.049954 4681 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37bcdd31-b53b-4450-9d03-3ff00ed926f7-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 07:15:03 crc kubenswrapper[4681]: I1123 07:15:03.582686 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-g8rvd" event={"ID":"37bcdd31-b53b-4450-9d03-3ff00ed926f7","Type":"ContainerDied","Data":"fbca723b87070cea0685debba96473ad1618c9e24bd12b60563eed0029e973cf"} Nov 23 07:15:03 crc kubenswrapper[4681]: I1123 07:15:03.583014 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbca723b87070cea0685debba96473ad1618c9e24bd12b60563eed0029e973cf" Nov 23 07:15:03 crc kubenswrapper[4681]: I1123 07:15:03.582739 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-g8rvd" Nov 23 07:15:34 crc kubenswrapper[4681]: I1123 07:15:34.907673 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-75b4b57dcf-bqmc5" podUID="91ec0b0d-3fb3-4710-8be4-acb8bb895d42" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Nov 23 07:17:12 crc kubenswrapper[4681]: I1123 07:17:12.295795 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:17:12 crc kubenswrapper[4681]: I1123 07:17:12.297238 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:17:32 crc kubenswrapper[4681]: I1123 07:17:32.684075 4681 generic.go:334] "Generic (PLEG): container finished" podID="8935c375-c36c-44cd-b318-52dab1b3e938" containerID="80c915b68252e31f06df118b98fb0390a8f4aef14f7f4d048dc65b31bd372905" exitCode=0 Nov 23 07:17:32 crc kubenswrapper[4681]: I1123 07:17:32.684165 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-klw77" event={"ID":"8935c375-c36c-44cd-b318-52dab1b3e938","Type":"ContainerDied","Data":"80c915b68252e31f06df118b98fb0390a8f4aef14f7f4d048dc65b31bd372905"} Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.000885 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-klw77" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.075312 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsd7f\" (UniqueName: \"kubernetes.io/projected/8935c375-c36c-44cd-b318-52dab1b3e938-kube-api-access-xsd7f\") pod \"8935c375-c36c-44cd-b318-52dab1b3e938\" (UID: \"8935c375-c36c-44cd-b318-52dab1b3e938\") " Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.075358 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8935c375-c36c-44cd-b318-52dab1b3e938-libvirt-combined-ca-bundle\") pod \"8935c375-c36c-44cd-b318-52dab1b3e938\" (UID: \"8935c375-c36c-44cd-b318-52dab1b3e938\") " Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.075397 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8935c375-c36c-44cd-b318-52dab1b3e938-inventory\") pod \"8935c375-c36c-44cd-b318-52dab1b3e938\" (UID: \"8935c375-c36c-44cd-b318-52dab1b3e938\") " Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.075518 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8935c375-c36c-44cd-b318-52dab1b3e938-libvirt-secret-0\") pod \"8935c375-c36c-44cd-b318-52dab1b3e938\" (UID: \"8935c375-c36c-44cd-b318-52dab1b3e938\") " Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.075600 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8935c375-c36c-44cd-b318-52dab1b3e938-ssh-key\") pod \"8935c375-c36c-44cd-b318-52dab1b3e938\" (UID: \"8935c375-c36c-44cd-b318-52dab1b3e938\") " Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.079577 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8935c375-c36c-44cd-b318-52dab1b3e938-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "8935c375-c36c-44cd-b318-52dab1b3e938" (UID: "8935c375-c36c-44cd-b318-52dab1b3e938"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.079617 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8935c375-c36c-44cd-b318-52dab1b3e938-kube-api-access-xsd7f" (OuterVolumeSpecName: "kube-api-access-xsd7f") pod "8935c375-c36c-44cd-b318-52dab1b3e938" (UID: "8935c375-c36c-44cd-b318-52dab1b3e938"). InnerVolumeSpecName "kube-api-access-xsd7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.096163 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8935c375-c36c-44cd-b318-52dab1b3e938-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "8935c375-c36c-44cd-b318-52dab1b3e938" (UID: "8935c375-c36c-44cd-b318-52dab1b3e938"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.096818 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8935c375-c36c-44cd-b318-52dab1b3e938-inventory" (OuterVolumeSpecName: "inventory") pod "8935c375-c36c-44cd-b318-52dab1b3e938" (UID: "8935c375-c36c-44cd-b318-52dab1b3e938"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.097637 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8935c375-c36c-44cd-b318-52dab1b3e938-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8935c375-c36c-44cd-b318-52dab1b3e938" (UID: "8935c375-c36c-44cd-b318-52dab1b3e938"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.177149 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsd7f\" (UniqueName: \"kubernetes.io/projected/8935c375-c36c-44cd-b318-52dab1b3e938-kube-api-access-xsd7f\") on node \"crc\" DevicePath \"\"" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.177178 4681 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8935c375-c36c-44cd-b318-52dab1b3e938-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.177191 4681 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8935c375-c36c-44cd-b318-52dab1b3e938-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.177201 4681 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8935c375-c36c-44cd-b318-52dab1b3e938-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.177209 4681 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8935c375-c36c-44cd-b318-52dab1b3e938-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.699150 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-klw77" event={"ID":"8935c375-c36c-44cd-b318-52dab1b3e938","Type":"ContainerDied","Data":"d4cb9b4738a5a2ea7970357b94a1a0731a4cceef99a717250ca57042f959aefd"} Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.699731 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4cb9b4738a5a2ea7970357b94a1a0731a4cceef99a717250ca57042f959aefd" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.699421 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-klw77" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.786304 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq"] Nov 23 07:17:34 crc kubenswrapper[4681]: E1123 07:17:34.787040 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37bcdd31-b53b-4450-9d03-3ff00ed926f7" containerName="collect-profiles" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.787066 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="37bcdd31-b53b-4450-9d03-3ff00ed926f7" containerName="collect-profiles" Nov 23 07:17:34 crc kubenswrapper[4681]: E1123 07:17:34.787101 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8935c375-c36c-44cd-b318-52dab1b3e938" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.787112 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="8935c375-c36c-44cd-b318-52dab1b3e938" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.787382 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="8935c375-c36c-44cd-b318-52dab1b3e938" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.787451 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="37bcdd31-b53b-4450-9d03-3ff00ed926f7" containerName="collect-profiles" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.788781 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.790667 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rchgk" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.793499 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.793675 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.793805 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.793984 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.794110 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.794236 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.797863 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq"] Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.890776 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6j7xq\" (UID: 
\"ce2476fd-41d6-4382-82ff-bee6fe90f88c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.890836 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6j7xq\" (UID: \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.890867 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6j7xq\" (UID: \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.890899 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6j7xq\" (UID: \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.890920 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6j7xq\" (UID: \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.891010 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px4z6\" (UniqueName: \"kubernetes.io/projected/ce2476fd-41d6-4382-82ff-bee6fe90f88c-kube-api-access-px4z6\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6j7xq\" (UID: \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.891047 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6j7xq\" (UID: \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.891077 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6j7xq\" (UID: \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.891136 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/ce2476fd-41d6-4382-82ff-bee6fe90f88c-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6j7xq\" (UID: \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.993373 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6j7xq\" (UID: \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.993435 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6j7xq\" (UID: \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.993481 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6j7xq\" (UID: \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.993512 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6j7xq\" (UID: \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.993530 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6j7xq\" (UID: \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.993565 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px4z6\" (UniqueName: \"kubernetes.io/projected/ce2476fd-41d6-4382-82ff-bee6fe90f88c-kube-api-access-px4z6\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6j7xq\" (UID: \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.993591 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6j7xq\" (UID: \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.993614 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6j7xq\" (UID: \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.993654 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ce2476fd-41d6-4382-82ff-bee6fe90f88c-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6j7xq\" (UID: \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.994578 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ce2476fd-41d6-4382-82ff-bee6fe90f88c-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6j7xq\" (UID: \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.997425 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6j7xq\" (UID: \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.997681 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6j7xq\" (UID: \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.997942 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6j7xq\" (UID: \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq" Nov 23 07:17:34 crc kubenswrapper[4681]: I1123 07:17:34.999074 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6j7xq\" (UID: \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq" Nov 23 07:17:35 crc kubenswrapper[4681]: I1123 07:17:34.999975 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6j7xq\" (UID: \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq" Nov 23 07:17:35 crc kubenswrapper[4681]: I1123 07:17:35.000171 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-nova-migration-ssh-key-1\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-6j7xq\" (UID: \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq" Nov 23 07:17:35 crc kubenswrapper[4681]: I1123 07:17:35.000331 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6j7xq\" (UID: \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq" Nov 23 07:17:35 crc kubenswrapper[4681]: I1123 07:17:35.009833 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px4z6\" (UniqueName: \"kubernetes.io/projected/ce2476fd-41d6-4382-82ff-bee6fe90f88c-kube-api-access-px4z6\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6j7xq\" (UID: \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq" Nov 23 07:17:35 crc kubenswrapper[4681]: I1123 07:17:35.120089 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq" Nov 23 07:17:35 crc kubenswrapper[4681]: I1123 07:17:35.595861 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq"] Nov 23 07:17:35 crc kubenswrapper[4681]: I1123 07:17:35.710811 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq" event={"ID":"ce2476fd-41d6-4382-82ff-bee6fe90f88c","Type":"ContainerStarted","Data":"af22c1aa5c4698c3fefe64c9fd88d814d8823d1b77532eda06253099d3ff056e"} Nov 23 07:17:36 crc kubenswrapper[4681]: I1123 07:17:36.719403 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq" event={"ID":"ce2476fd-41d6-4382-82ff-bee6fe90f88c","Type":"ContainerStarted","Data":"7f3b4d10ce1b5423a237d0653a60f2c551284ddd977c595795e23e67d48b1d38"} Nov 23 07:17:36 crc kubenswrapper[4681]: I1123 07:17:36.756368 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq" podStartSLOduration=2.19882402 podStartE2EDuration="2.756347594s" podCreationTimestamp="2025-11-23 07:17:34 +0000 UTC" firstStartedPulling="2025-11-23 07:17:35.602645182 +0000 UTC m=+1992.672154419" lastFinishedPulling="2025-11-23 07:17:36.160168756 +0000 UTC m=+1993.229677993" observedRunningTime="2025-11-23 07:17:36.745397294 +0000 UTC m=+1993.814906530" watchObservedRunningTime="2025-11-23 07:17:36.756347594 +0000 UTC m=+1993.825856831" Nov 23 07:17:42 crc kubenswrapper[4681]: I1123 07:17:42.296780 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:17:42 crc kubenswrapper[4681]: I1123 07:17:42.297342 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:18:12 crc kubenswrapper[4681]: I1123 07:18:12.295508 4681 patch_prober.go:28] interesting 
Nov 23 07:18:12 crc kubenswrapper[4681]: I1123 07:18:12.295508 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 07:18:12 crc kubenswrapper[4681]: I1123 07:18:12.296245 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 07:18:12 crc kubenswrapper[4681]: I1123 07:18:12.296402 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt"
Nov 23 07:18:12 crc kubenswrapper[4681]: I1123 07:18:12.298166 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f8da7449317d4fedfd4d71fd5add670aef44436e65aa268d710d4cbf78c73d83"} pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 23 07:18:12 crc kubenswrapper[4681]: I1123 07:18:12.298296 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" containerID="cri-o://f8da7449317d4fedfd4d71fd5add670aef44436e65aa268d710d4cbf78c73d83" gracePeriod=600
Nov 23 07:18:13 crc kubenswrapper[4681]: I1123 07:18:13.049200 4681 generic.go:334] "Generic (PLEG): container finished" podID="539dc58c-e752-43c8-bdef-af87528b76f3" containerID="f8da7449317d4fedfd4d71fd5add670aef44436e65aa268d710d4cbf78c73d83" exitCode=0
Nov 23 07:18:13 crc kubenswrapper[4681]: I1123 07:18:13.049284 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerDied","Data":"f8da7449317d4fedfd4d71fd5add670aef44436e65aa268d710d4cbf78c73d83"}
Nov 23 07:18:13 crc kubenswrapper[4681]: I1123 07:18:13.049859 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerStarted","Data":"6395679e8b90303362ef082b92adb8b7a5b62d563d9f789557862ea185bce935"}
Nov 23 07:18:13 crc kubenswrapper[4681]: I1123 07:18:13.049889 4681 scope.go:117] "RemoveContainer" containerID="a5380963080b6fe6bf2216624264d97b2ea5554bfe17e9b170d2c2b9f9ced66c"
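The sequence above is a liveness-probe-driven restart: repeated "connection refused" failures against http://127.0.0.1:8798/health cross the failure threshold, the container is killed with gracePeriod=600, and a replacement is started. A stripped-down probe loop of the same shape (the endpoint is taken from the log; the threshold and period are stand-ins for values that really come from the pod spec):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // probe issues one HTTP liveness check with a short timeout.
    func probe(url string) error {
        client := &http.Client{Timeout: time.Second}
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. "connect: connection refused", as in the records
        }
        defer resp.Body.Close()
        if resp.StatusCode >= 400 {
            return fmt.Errorf("unexpected status %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        const failureThreshold = 3 // invented for the sketch
        failures := 0
        for failures < failureThreshold {
            if err := probe("http://127.0.0.1:8798/health"); err != nil {
                failures++
                fmt.Println("Probe failed:", err)
            } else {
                failures = 0 // any success resets the count
            }
            time.Sleep(time.Second) // stand-in for periodSeconds
        }
        // Here the kubelet would kill the container with its grace period
        // (gracePeriod=600 above) and let it be restarted.
        fmt.Println("failure threshold reached: container will be restarted")
    }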
Nov 23 07:18:14 crc kubenswrapper[4681]: I1123 07:18:14.570074 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kcsrv"]
Nov 23 07:18:14 crc kubenswrapper[4681]: I1123 07:18:14.572829 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kcsrv"
Nov 23 07:18:14 crc kubenswrapper[4681]: I1123 07:18:14.643215 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kcsrv"]
Nov 23 07:18:14 crc kubenswrapper[4681]: I1123 07:18:14.743751 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtf54\" (UniqueName: \"kubernetes.io/projected/502f48c2-be11-4168-b9cf-8d7d9d8b3eb1-kube-api-access-mtf54\") pod \"certified-operators-kcsrv\" (UID: \"502f48c2-be11-4168-b9cf-8d7d9d8b3eb1\") " pod="openshift-marketplace/certified-operators-kcsrv"
Nov 23 07:18:14 crc kubenswrapper[4681]: I1123 07:18:14.743902 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/502f48c2-be11-4168-b9cf-8d7d9d8b3eb1-catalog-content\") pod \"certified-operators-kcsrv\" (UID: \"502f48c2-be11-4168-b9cf-8d7d9d8b3eb1\") " pod="openshift-marketplace/certified-operators-kcsrv"
Nov 23 07:18:14 crc kubenswrapper[4681]: I1123 07:18:14.743991 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/502f48c2-be11-4168-b9cf-8d7d9d8b3eb1-utilities\") pod \"certified-operators-kcsrv\" (UID: \"502f48c2-be11-4168-b9cf-8d7d9d8b3eb1\") " pod="openshift-marketplace/certified-operators-kcsrv"
Nov 23 07:18:14 crc kubenswrapper[4681]: I1123 07:18:14.845536 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtf54\" (UniqueName: \"kubernetes.io/projected/502f48c2-be11-4168-b9cf-8d7d9d8b3eb1-kube-api-access-mtf54\") pod \"certified-operators-kcsrv\" (UID: \"502f48c2-be11-4168-b9cf-8d7d9d8b3eb1\") " pod="openshift-marketplace/certified-operators-kcsrv"
Nov 23 07:18:14 crc kubenswrapper[4681]: I1123 07:18:14.845624 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/502f48c2-be11-4168-b9cf-8d7d9d8b3eb1-catalog-content\") pod \"certified-operators-kcsrv\" (UID: \"502f48c2-be11-4168-b9cf-8d7d9d8b3eb1\") " pod="openshift-marketplace/certified-operators-kcsrv"
Nov 23 07:18:14 crc kubenswrapper[4681]: I1123 07:18:14.846095 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/502f48c2-be11-4168-b9cf-8d7d9d8b3eb1-catalog-content\") pod \"certified-operators-kcsrv\" (UID: \"502f48c2-be11-4168-b9cf-8d7d9d8b3eb1\") " pod="openshift-marketplace/certified-operators-kcsrv"
Nov 23 07:18:14 crc kubenswrapper[4681]: I1123 07:18:14.845659 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/502f48c2-be11-4168-b9cf-8d7d9d8b3eb1-utilities\") pod \"certified-operators-kcsrv\" (UID: \"502f48c2-be11-4168-b9cf-8d7d9d8b3eb1\") " pod="openshift-marketplace/certified-operators-kcsrv"
Nov 23 07:18:14 crc kubenswrapper[4681]: I1123 07:18:14.846124 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/502f48c2-be11-4168-b9cf-8d7d9d8b3eb1-utilities\") pod \"certified-operators-kcsrv\" (UID: \"502f48c2-be11-4168-b9cf-8d7d9d8b3eb1\") " pod="openshift-marketplace/certified-operators-kcsrv"
Nov 23 07:18:14 crc kubenswrapper[4681]: I1123 07:18:14.866183 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtf54\" (UniqueName: \"kubernetes.io/projected/502f48c2-be11-4168-b9cf-8d7d9d8b3eb1-kube-api-access-mtf54\") pod \"certified-operators-kcsrv\" (UID: \"502f48c2-be11-4168-b9cf-8d7d9d8b3eb1\") " pod="openshift-marketplace/certified-operators-kcsrv"
Nov 23 07:18:14 crc kubenswrapper[4681]: I1123 07:18:14.893942 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kcsrv"
Nov 23 07:18:15 crc kubenswrapper[4681]: I1123 07:18:15.409520 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kcsrv"]
Nov 23 07:18:15 crc kubenswrapper[4681]: W1123 07:18:15.417499 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod502f48c2_be11_4168_b9cf_8d7d9d8b3eb1.slice/crio-4e5b51d9ace221eb9ad0a0d9a580622ae171664f1a49e5d85f3865bda06f30d9 WatchSource:0}: Error finding container 4e5b51d9ace221eb9ad0a0d9a580622ae171664f1a49e5d85f3865bda06f30d9: Status 404 returned error can't find the container with id 4e5b51d9ace221eb9ad0a0d9a580622ae171664f1a49e5d85f3865bda06f30d9
Nov 23 07:18:16 crc kubenswrapper[4681]: I1123 07:18:16.086653 4681 generic.go:334] "Generic (PLEG): container finished" podID="502f48c2-be11-4168-b9cf-8d7d9d8b3eb1" containerID="b066a181c3ebdbdc627edfc4ee34f280a013d18eb0465c4665f2dc04bc6aa553" exitCode=0
Nov 23 07:18:16 crc kubenswrapper[4681]: I1123 07:18:16.087100 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcsrv" event={"ID":"502f48c2-be11-4168-b9cf-8d7d9d8b3eb1","Type":"ContainerDied","Data":"b066a181c3ebdbdc627edfc4ee34f280a013d18eb0465c4665f2dc04bc6aa553"}
Nov 23 07:18:16 crc kubenswrapper[4681]: I1123 07:18:16.087146 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcsrv" event={"ID":"502f48c2-be11-4168-b9cf-8d7d9d8b3eb1","Type":"ContainerStarted","Data":"4e5b51d9ace221eb9ad0a0d9a580622ae171664f1a49e5d85f3865bda06f30d9"}
Nov 23 07:18:17 crc kubenswrapper[4681]: I1123 07:18:17.098382 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcsrv" event={"ID":"502f48c2-be11-4168-b9cf-8d7d9d8b3eb1","Type":"ContainerStarted","Data":"9efc140e2542ee9a89d38e598bf1291a388d7c493f85dddb8860f9a5bc4166c0"}
Nov 23 07:18:18 crc kubenswrapper[4681]: I1123 07:18:18.110778 4681 generic.go:334] "Generic (PLEG): container finished" podID="502f48c2-be11-4168-b9cf-8d7d9d8b3eb1" containerID="9efc140e2542ee9a89d38e598bf1291a388d7c493f85dddb8860f9a5bc4166c0" exitCode=0
Nov 23 07:18:18 crc kubenswrapper[4681]: I1123 07:18:18.110895 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcsrv" event={"ID":"502f48c2-be11-4168-b9cf-8d7d9d8b3eb1","Type":"ContainerDied","Data":"9efc140e2542ee9a89d38e598bf1291a388d7c493f85dddb8860f9a5bc4166c0"}
Nov 23 07:18:19 crc kubenswrapper[4681]: I1123 07:18:19.124552 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcsrv" event={"ID":"502f48c2-be11-4168-b9cf-8d7d9d8b3eb1","Type":"ContainerStarted","Data":"913f871322027a4efd094db643871ee1e9fca77166eb5bdd9154cac1335e9a08"}
Nov 23 07:18:19 crc kubenswrapper[4681]: I1123 07:18:19.151427 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kcsrv" podStartSLOduration=2.550378285 podStartE2EDuration="5.151394482s" podCreationTimestamp="2025-11-23 07:18:14 +0000 UTC" firstStartedPulling="2025-11-23 07:18:16.089129418 +0000 UTC m=+2033.158638655" lastFinishedPulling="2025-11-23 07:18:18.690145615 +0000 UTC m=+2035.759654852" observedRunningTime="2025-11-23 07:18:19.139134572 +0000 UTC m=+2036.208643798" watchObservedRunningTime="2025-11-23 07:18:19.151394482 +0000 UTC m=+2036.220903719"
Nov 23 07:18:24 crc kubenswrapper[4681]: I1123 07:18:24.894988 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kcsrv"
Nov 23 07:18:24 crc kubenswrapper[4681]: I1123 07:18:24.895729 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kcsrv"
Nov 23 07:18:24 crc kubenswrapper[4681]: I1123 07:18:24.948989 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kcsrv"
Nov 23 07:18:25 crc kubenswrapper[4681]: I1123 07:18:25.212001 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kcsrv"
Nov 23 07:18:25 crc kubenswrapper[4681]: I1123 07:18:25.289616 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kcsrv"]
Nov 23 07:18:27 crc kubenswrapper[4681]: I1123 07:18:27.192432 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kcsrv" podUID="502f48c2-be11-4168-b9cf-8d7d9d8b3eb1" containerName="registry-server" containerID="cri-o://913f871322027a4efd094db643871ee1e9fca77166eb5bdd9154cac1335e9a08" gracePeriod=2
Nov 23 07:18:27 crc kubenswrapper[4681]: I1123 07:18:27.618706 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kcsrv"
Nov 23 07:18:27 crc kubenswrapper[4681]: I1123 07:18:27.645156 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/502f48c2-be11-4168-b9cf-8d7d9d8b3eb1-catalog-content\") pod \"502f48c2-be11-4168-b9cf-8d7d9d8b3eb1\" (UID: \"502f48c2-be11-4168-b9cf-8d7d9d8b3eb1\") "
Nov 23 07:18:27 crc kubenswrapper[4681]: I1123 07:18:27.645269 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtf54\" (UniqueName: \"kubernetes.io/projected/502f48c2-be11-4168-b9cf-8d7d9d8b3eb1-kube-api-access-mtf54\") pod \"502f48c2-be11-4168-b9cf-8d7d9d8b3eb1\" (UID: \"502f48c2-be11-4168-b9cf-8d7d9d8b3eb1\") "
Nov 23 07:18:27 crc kubenswrapper[4681]: I1123 07:18:27.645428 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/502f48c2-be11-4168-b9cf-8d7d9d8b3eb1-utilities\") pod \"502f48c2-be11-4168-b9cf-8d7d9d8b3eb1\" (UID: \"502f48c2-be11-4168-b9cf-8d7d9d8b3eb1\") "
Nov 23 07:18:27 crc kubenswrapper[4681]: I1123 07:18:27.647288 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/502f48c2-be11-4168-b9cf-8d7d9d8b3eb1-utilities" (OuterVolumeSpecName: "utilities") pod "502f48c2-be11-4168-b9cf-8d7d9d8b3eb1" (UID: "502f48c2-be11-4168-b9cf-8d7d9d8b3eb1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:18:27 crc kubenswrapper[4681]: I1123 07:18:27.654714 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/502f48c2-be11-4168-b9cf-8d7d9d8b3eb1-kube-api-access-mtf54" (OuterVolumeSpecName: "kube-api-access-mtf54") pod "502f48c2-be11-4168-b9cf-8d7d9d8b3eb1" (UID: "502f48c2-be11-4168-b9cf-8d7d9d8b3eb1"). InnerVolumeSpecName "kube-api-access-mtf54". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:18:27 crc kubenswrapper[4681]: I1123 07:18:27.721610 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/502f48c2-be11-4168-b9cf-8d7d9d8b3eb1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "502f48c2-be11-4168-b9cf-8d7d9d8b3eb1" (UID: "502f48c2-be11-4168-b9cf-8d7d9d8b3eb1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:18:27 crc kubenswrapper[4681]: I1123 07:18:27.747822 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/502f48c2-be11-4168-b9cf-8d7d9d8b3eb1-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 07:18:27 crc kubenswrapper[4681]: I1123 07:18:27.747848 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtf54\" (UniqueName: \"kubernetes.io/projected/502f48c2-be11-4168-b9cf-8d7d9d8b3eb1-kube-api-access-mtf54\") on node \"crc\" DevicePath \"\""
Nov 23 07:18:27 crc kubenswrapper[4681]: I1123 07:18:27.747861 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/502f48c2-be11-4168-b9cf-8d7d9d8b3eb1-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 07:18:28 crc kubenswrapper[4681]: I1123 07:18:28.217025 4681 generic.go:334] "Generic (PLEG): container finished" podID="502f48c2-be11-4168-b9cf-8d7d9d8b3eb1" containerID="913f871322027a4efd094db643871ee1e9fca77166eb5bdd9154cac1335e9a08" exitCode=0
Nov 23 07:18:28 crc kubenswrapper[4681]: I1123 07:18:28.217072 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcsrv" event={"ID":"502f48c2-be11-4168-b9cf-8d7d9d8b3eb1","Type":"ContainerDied","Data":"913f871322027a4efd094db643871ee1e9fca77166eb5bdd9154cac1335e9a08"}
Nov 23 07:18:28 crc kubenswrapper[4681]: I1123 07:18:28.217101 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcsrv" event={"ID":"502f48c2-be11-4168-b9cf-8d7d9d8b3eb1","Type":"ContainerDied","Data":"4e5b51d9ace221eb9ad0a0d9a580622ae171664f1a49e5d85f3865bda06f30d9"}
Nov 23 07:18:28 crc kubenswrapper[4681]: I1123 07:18:28.217121 4681 scope.go:117] "RemoveContainer" containerID="913f871322027a4efd094db643871ee1e9fca77166eb5bdd9154cac1335e9a08"
Nov 23 07:18:28 crc kubenswrapper[4681]: I1123 07:18:28.217266 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kcsrv"
Nov 23 07:18:28 crc kubenswrapper[4681]: I1123 07:18:28.240305 4681 scope.go:117] "RemoveContainer" containerID="9efc140e2542ee9a89d38e598bf1291a388d7c493f85dddb8860f9a5bc4166c0"
Nov 23 07:18:28 crc kubenswrapper[4681]: I1123 07:18:28.251902 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kcsrv"]
Nov 23 07:18:28 crc kubenswrapper[4681]: I1123 07:18:28.257775 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kcsrv"]
Nov 23 07:18:28 crc kubenswrapper[4681]: I1123 07:18:28.271527 4681 scope.go:117] "RemoveContainer" containerID="b066a181c3ebdbdc627edfc4ee34f280a013d18eb0465c4665f2dc04bc6aa553"
Nov 23 07:18:28 crc kubenswrapper[4681]: I1123 07:18:28.317040 4681 scope.go:117] "RemoveContainer" containerID="913f871322027a4efd094db643871ee1e9fca77166eb5bdd9154cac1335e9a08"
Nov 23 07:18:28 crc kubenswrapper[4681]: E1123 07:18:28.320660 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"913f871322027a4efd094db643871ee1e9fca77166eb5bdd9154cac1335e9a08\": container with ID starting with 913f871322027a4efd094db643871ee1e9fca77166eb5bdd9154cac1335e9a08 not found: ID does not exist" containerID="913f871322027a4efd094db643871ee1e9fca77166eb5bdd9154cac1335e9a08"
Nov 23 07:18:28 crc kubenswrapper[4681]: I1123 07:18:28.320727 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"913f871322027a4efd094db643871ee1e9fca77166eb5bdd9154cac1335e9a08"} err="failed to get container status \"913f871322027a4efd094db643871ee1e9fca77166eb5bdd9154cac1335e9a08\": rpc error: code = NotFound desc = could not find container \"913f871322027a4efd094db643871ee1e9fca77166eb5bdd9154cac1335e9a08\": container with ID starting with 913f871322027a4efd094db643871ee1e9fca77166eb5bdd9154cac1335e9a08 not found: ID does not exist"
Nov 23 07:18:28 crc kubenswrapper[4681]: I1123 07:18:28.320766 4681 scope.go:117] "RemoveContainer" containerID="9efc140e2542ee9a89d38e598bf1291a388d7c493f85dddb8860f9a5bc4166c0"
Nov 23 07:18:28 crc kubenswrapper[4681]: E1123 07:18:28.321176 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9efc140e2542ee9a89d38e598bf1291a388d7c493f85dddb8860f9a5bc4166c0\": container with ID starting with 9efc140e2542ee9a89d38e598bf1291a388d7c493f85dddb8860f9a5bc4166c0 not found: ID does not exist" containerID="9efc140e2542ee9a89d38e598bf1291a388d7c493f85dddb8860f9a5bc4166c0"
Nov 23 07:18:28 crc kubenswrapper[4681]: I1123 07:18:28.321212 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9efc140e2542ee9a89d38e598bf1291a388d7c493f85dddb8860f9a5bc4166c0"} err="failed to get container status \"9efc140e2542ee9a89d38e598bf1291a388d7c493f85dddb8860f9a5bc4166c0\": rpc error: code = NotFound desc = could not find container \"9efc140e2542ee9a89d38e598bf1291a388d7c493f85dddb8860f9a5bc4166c0\": container with ID starting with 9efc140e2542ee9a89d38e598bf1291a388d7c493f85dddb8860f9a5bc4166c0 not found: ID does not exist"
Nov 23 07:18:28 crc kubenswrapper[4681]: I1123 07:18:28.321240 4681 scope.go:117] "RemoveContainer" containerID="b066a181c3ebdbdc627edfc4ee34f280a013d18eb0465c4665f2dc04bc6aa553"
Nov 23 07:18:28 crc kubenswrapper[4681]: E1123 07:18:28.321660 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b066a181c3ebdbdc627edfc4ee34f280a013d18eb0465c4665f2dc04bc6aa553\": container with ID starting with b066a181c3ebdbdc627edfc4ee34f280a013d18eb0465c4665f2dc04bc6aa553 not found: ID does not exist" containerID="b066a181c3ebdbdc627edfc4ee34f280a013d18eb0465c4665f2dc04bc6aa553"
Nov 23 07:18:28 crc kubenswrapper[4681]: I1123 07:18:28.321690 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b066a181c3ebdbdc627edfc4ee34f280a013d18eb0465c4665f2dc04bc6aa553"} err="failed to get container status \"b066a181c3ebdbdc627edfc4ee34f280a013d18eb0465c4665f2dc04bc6aa553\": rpc error: code = NotFound desc = could not find container \"b066a181c3ebdbdc627edfc4ee34f280a013d18eb0465c4665f2dc04bc6aa553\": container with ID starting with b066a181c3ebdbdc627edfc4ee34f280a013d18eb0465c4665f2dc04bc6aa553 not found: ID does not exist"
Nov 23 07:18:29 crc kubenswrapper[4681]: I1123 07:18:29.260578 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="502f48c2-be11-4168-b9cf-8d7d9d8b3eb1" path="/var/lib/kubelet/pods/502f48c2-be11-4168-b9cf-8d7d9d8b3eb1/volumes"
Nov 23 07:19:22 crc kubenswrapper[4681]: I1123 07:19:22.697743 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bz6rr"]
Nov 23 07:19:22 crc kubenswrapper[4681]: E1123 07:19:22.698422 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="502f48c2-be11-4168-b9cf-8d7d9d8b3eb1" containerName="registry-server"
Nov 23 07:19:22 crc kubenswrapper[4681]: I1123 07:19:22.698437 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="502f48c2-be11-4168-b9cf-8d7d9d8b3eb1" containerName="registry-server"
Nov 23 07:19:22 crc kubenswrapper[4681]: E1123 07:19:22.698447 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="502f48c2-be11-4168-b9cf-8d7d9d8b3eb1" containerName="extract-utilities"
Nov 23 07:19:22 crc kubenswrapper[4681]: I1123 07:19:22.698453 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="502f48c2-be11-4168-b9cf-8d7d9d8b3eb1" containerName="extract-utilities"
Nov 23 07:19:22 crc kubenswrapper[4681]: E1123 07:19:22.698497 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="502f48c2-be11-4168-b9cf-8d7d9d8b3eb1" containerName="extract-content"
Nov 23 07:19:22 crc kubenswrapper[4681]: I1123 07:19:22.698504 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="502f48c2-be11-4168-b9cf-8d7d9d8b3eb1" containerName="extract-content"
Nov 23 07:19:22 crc kubenswrapper[4681]: I1123 07:19:22.698733 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="502f48c2-be11-4168-b9cf-8d7d9d8b3eb1" containerName="registry-server"
Nov 23 07:19:22 crc kubenswrapper[4681]: I1123 07:19:22.700060 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bz6rr"
Nov 23 07:19:22 crc kubenswrapper[4681]: I1123 07:19:22.706211 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bz6rr"]
Nov 23 07:19:22 crc kubenswrapper[4681]: I1123 07:19:22.800555 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0b51631-fdb1-44cc-b6aa-f40fb651eaa9-utilities\") pod \"community-operators-bz6rr\" (UID: \"b0b51631-fdb1-44cc-b6aa-f40fb651eaa9\") " pod="openshift-marketplace/community-operators-bz6rr"
Nov 23 07:19:22 crc kubenswrapper[4681]: I1123 07:19:22.800699 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzg6z\" (UniqueName: \"kubernetes.io/projected/b0b51631-fdb1-44cc-b6aa-f40fb651eaa9-kube-api-access-qzg6z\") pod \"community-operators-bz6rr\" (UID: \"b0b51631-fdb1-44cc-b6aa-f40fb651eaa9\") " pod="openshift-marketplace/community-operators-bz6rr"
Nov 23 07:19:22 crc kubenswrapper[4681]: I1123 07:19:22.800724 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0b51631-fdb1-44cc-b6aa-f40fb651eaa9-catalog-content\") pod \"community-operators-bz6rr\" (UID: \"b0b51631-fdb1-44cc-b6aa-f40fb651eaa9\") " pod="openshift-marketplace/community-operators-bz6rr"
Nov 23 07:19:22 crc kubenswrapper[4681]: I1123 07:19:22.902088 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0b51631-fdb1-44cc-b6aa-f40fb651eaa9-utilities\") pod \"community-operators-bz6rr\" (UID: \"b0b51631-fdb1-44cc-b6aa-f40fb651eaa9\") " pod="openshift-marketplace/community-operators-bz6rr"
Nov 23 07:19:22 crc kubenswrapper[4681]: I1123 07:19:22.902283 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzg6z\" (UniqueName: \"kubernetes.io/projected/b0b51631-fdb1-44cc-b6aa-f40fb651eaa9-kube-api-access-qzg6z\") pod \"community-operators-bz6rr\" (UID: \"b0b51631-fdb1-44cc-b6aa-f40fb651eaa9\") " pod="openshift-marketplace/community-operators-bz6rr"
Nov 23 07:19:22 crc kubenswrapper[4681]: I1123 07:19:22.902312 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0b51631-fdb1-44cc-b6aa-f40fb651eaa9-catalog-content\") pod \"community-operators-bz6rr\" (UID: \"b0b51631-fdb1-44cc-b6aa-f40fb651eaa9\") " pod="openshift-marketplace/community-operators-bz6rr"
Nov 23 07:19:22 crc kubenswrapper[4681]: I1123 07:19:22.902520 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0b51631-fdb1-44cc-b6aa-f40fb651eaa9-utilities\") pod \"community-operators-bz6rr\" (UID: \"b0b51631-fdb1-44cc-b6aa-f40fb651eaa9\") " pod="openshift-marketplace/community-operators-bz6rr"
Nov 23 07:19:22 crc kubenswrapper[4681]: I1123 07:19:22.902702 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0b51631-fdb1-44cc-b6aa-f40fb651eaa9-catalog-content\") pod \"community-operators-bz6rr\" (UID: \"b0b51631-fdb1-44cc-b6aa-f40fb651eaa9\") " pod="openshift-marketplace/community-operators-bz6rr"
Nov 23 07:19:22 crc kubenswrapper[4681]: I1123 07:19:22.917996 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzg6z\" (UniqueName: \"kubernetes.io/projected/b0b51631-fdb1-44cc-b6aa-f40fb651eaa9-kube-api-access-qzg6z\") pod \"community-operators-bz6rr\" (UID: \"b0b51631-fdb1-44cc-b6aa-f40fb651eaa9\") " pod="openshift-marketplace/community-operators-bz6rr"
Nov 23 07:19:23 crc kubenswrapper[4681]: I1123 07:19:23.019251 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bz6rr"
Nov 23 07:19:23 crc kubenswrapper[4681]: I1123 07:19:23.524926 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bz6rr"]
Nov 23 07:19:23 crc kubenswrapper[4681]: I1123 07:19:23.591273 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bz6rr" event={"ID":"b0b51631-fdb1-44cc-b6aa-f40fb651eaa9","Type":"ContainerStarted","Data":"4b2698e90494e54b2d613c01d12f7bbd8f6c74e27534b8f3e0a965859745b186"}
Nov 23 07:19:24 crc kubenswrapper[4681]: I1123 07:19:24.600281 4681 generic.go:334] "Generic (PLEG): container finished" podID="b0b51631-fdb1-44cc-b6aa-f40fb651eaa9" containerID="7d9c0df0a854cd3cfbbda4cfdc597c2e07ddc36e2cc7b538deaca835c2f4c9d5" exitCode=0
Nov 23 07:19:24 crc kubenswrapper[4681]: I1123 07:19:24.600327 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bz6rr" event={"ID":"b0b51631-fdb1-44cc-b6aa-f40fb651eaa9","Type":"ContainerDied","Data":"7d9c0df0a854cd3cfbbda4cfdc597c2e07ddc36e2cc7b538deaca835c2f4c9d5"}
Nov 23 07:19:25 crc kubenswrapper[4681]: I1123 07:19:25.483807 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2lthg"]
Nov 23 07:19:25 crc kubenswrapper[4681]: I1123 07:19:25.485838 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2lthg"
Nov 23 07:19:25 crc kubenswrapper[4681]: I1123 07:19:25.495482 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2lthg"]
Nov 23 07:19:25 crc kubenswrapper[4681]: I1123 07:19:25.610937 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bz6rr" event={"ID":"b0b51631-fdb1-44cc-b6aa-f40fb651eaa9","Type":"ContainerStarted","Data":"d64be1ee65602e65a0d0e38dac98bd58546d455109b7d1a877e4fc74446429a4"}
Nov 23 07:19:25 crc kubenswrapper[4681]: I1123 07:19:25.655183 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb12f8c7-1ad9-4b96-832a-e2cc9da82987-catalog-content\") pod \"redhat-operators-2lthg\" (UID: \"eb12f8c7-1ad9-4b96-832a-e2cc9da82987\") " pod="openshift-marketplace/redhat-operators-2lthg"
Nov 23 07:19:25 crc kubenswrapper[4681]: I1123 07:19:25.655396 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctlxk\" (UniqueName: \"kubernetes.io/projected/eb12f8c7-1ad9-4b96-832a-e2cc9da82987-kube-api-access-ctlxk\") pod \"redhat-operators-2lthg\" (UID: \"eb12f8c7-1ad9-4b96-832a-e2cc9da82987\") " pod="openshift-marketplace/redhat-operators-2lthg"
Nov 23 07:19:25 crc kubenswrapper[4681]: I1123 07:19:25.655537 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb12f8c7-1ad9-4b96-832a-e2cc9da82987-utilities\") pod \"redhat-operators-2lthg\" (UID: \"eb12f8c7-1ad9-4b96-832a-e2cc9da82987\") " pod="openshift-marketplace/redhat-operators-2lthg"
Nov 23 07:19:25 crc kubenswrapper[4681]: I1123 07:19:25.757135 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb12f8c7-1ad9-4b96-832a-e2cc9da82987-utilities\") pod \"redhat-operators-2lthg\" (UID: \"eb12f8c7-1ad9-4b96-832a-e2cc9da82987\") " pod="openshift-marketplace/redhat-operators-2lthg"
Nov 23 07:19:25 crc kubenswrapper[4681]: I1123 07:19:25.757298 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctlxk\" (UniqueName: \"kubernetes.io/projected/eb12f8c7-1ad9-4b96-832a-e2cc9da82987-kube-api-access-ctlxk\") pod \"redhat-operators-2lthg\" (UID: \"eb12f8c7-1ad9-4b96-832a-e2cc9da82987\") " pod="openshift-marketplace/redhat-operators-2lthg"
Nov 23 07:19:25 crc kubenswrapper[4681]: I1123 07:19:25.757318 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb12f8c7-1ad9-4b96-832a-e2cc9da82987-catalog-content\") pod \"redhat-operators-2lthg\" (UID: \"eb12f8c7-1ad9-4b96-832a-e2cc9da82987\") " pod="openshift-marketplace/redhat-operators-2lthg"
Nov 23 07:19:25 crc kubenswrapper[4681]: I1123 07:19:25.757750 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb12f8c7-1ad9-4b96-832a-e2cc9da82987-catalog-content\") pod \"redhat-operators-2lthg\" (UID: \"eb12f8c7-1ad9-4b96-832a-e2cc9da82987\") " pod="openshift-marketplace/redhat-operators-2lthg"
Nov 23 07:19:25 crc kubenswrapper[4681]: I1123 07:19:25.758000 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb12f8c7-1ad9-4b96-832a-e2cc9da82987-utilities\") pod \"redhat-operators-2lthg\" (UID: \"eb12f8c7-1ad9-4b96-832a-e2cc9da82987\") " pod="openshift-marketplace/redhat-operators-2lthg"
Nov 23 07:19:25 crc kubenswrapper[4681]: I1123 07:19:25.798956 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctlxk\" (UniqueName: \"kubernetes.io/projected/eb12f8c7-1ad9-4b96-832a-e2cc9da82987-kube-api-access-ctlxk\") pod \"redhat-operators-2lthg\" (UID: \"eb12f8c7-1ad9-4b96-832a-e2cc9da82987\") " pod="openshift-marketplace/redhat-operators-2lthg"
Nov 23 07:19:25 crc kubenswrapper[4681]: I1123 07:19:25.801014 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2lthg"
Nov 23 07:19:26 crc kubenswrapper[4681]: I1123 07:19:26.264094 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2lthg"]
Nov 23 07:19:26 crc kubenswrapper[4681]: W1123 07:19:26.271527 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb12f8c7_1ad9_4b96_832a_e2cc9da82987.slice/crio-4c038a85c5f59500676a1acfae07b9bed2fe7cf11e3535eb4bace096b5d8eb0f WatchSource:0}: Error finding container 4c038a85c5f59500676a1acfae07b9bed2fe7cf11e3535eb4bace096b5d8eb0f: Status 404 returned error can't find the container with id 4c038a85c5f59500676a1acfae07b9bed2fe7cf11e3535eb4bace096b5d8eb0f
Nov 23 07:19:26 crc kubenswrapper[4681]: I1123 07:19:26.619221 4681 generic.go:334] "Generic (PLEG): container finished" podID="eb12f8c7-1ad9-4b96-832a-e2cc9da82987" containerID="91dd79d2fd7f50939471c08f95ca3157a6a673735c51754ca1eb08915a5c0620" exitCode=0
Nov 23 07:19:26 crc kubenswrapper[4681]: I1123 07:19:26.619315 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2lthg" event={"ID":"eb12f8c7-1ad9-4b96-832a-e2cc9da82987","Type":"ContainerDied","Data":"91dd79d2fd7f50939471c08f95ca3157a6a673735c51754ca1eb08915a5c0620"}
Nov 23 07:19:26 crc kubenswrapper[4681]: I1123 07:19:26.619494 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2lthg" event={"ID":"eb12f8c7-1ad9-4b96-832a-e2cc9da82987","Type":"ContainerStarted","Data":"4c038a85c5f59500676a1acfae07b9bed2fe7cf11e3535eb4bace096b5d8eb0f"}
Nov 23 07:19:26 crc kubenswrapper[4681]: I1123 07:19:26.622778 4681 generic.go:334] "Generic (PLEG): container finished" podID="b0b51631-fdb1-44cc-b6aa-f40fb651eaa9" containerID="d64be1ee65602e65a0d0e38dac98bd58546d455109b7d1a877e4fc74446429a4" exitCode=0
Nov 23 07:19:26 crc kubenswrapper[4681]: I1123 07:19:26.622808 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bz6rr" event={"ID":"b0b51631-fdb1-44cc-b6aa-f40fb651eaa9","Type":"ContainerDied","Data":"d64be1ee65602e65a0d0e38dac98bd58546d455109b7d1a877e4fc74446429a4"}
Nov 23 07:19:27 crc kubenswrapper[4681]: I1123 07:19:27.631939 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bz6rr" event={"ID":"b0b51631-fdb1-44cc-b6aa-f40fb651eaa9","Type":"ContainerStarted","Data":"8c515a476876510327b7a6b0e137b5c7527c11a196c29b70914353a650eb63f7"}
Nov 23 07:19:27 crc kubenswrapper[4681]: I1123 07:19:27.636016 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2lthg" event={"ID":"eb12f8c7-1ad9-4b96-832a-e2cc9da82987","Type":"ContainerStarted","Data":"bbae3837feda9ea8c724645442affd44c82479bfdd003bd359e54655e0d87bc4"}
Nov 23 07:19:27 crc kubenswrapper[4681]: I1123 07:19:27.650414 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bz6rr" podStartSLOduration=3.171164245 podStartE2EDuration="5.650400859s" podCreationTimestamp="2025-11-23 07:19:22 +0000 UTC" firstStartedPulling="2025-11-23 07:19:24.602065663 +0000 UTC m=+2101.671574900" lastFinishedPulling="2025-11-23 07:19:27.081302277 +0000 UTC m=+2104.150811514" observedRunningTime="2025-11-23 07:19:27.646609293 +0000 UTC m=+2104.716118529" watchObservedRunningTime="2025-11-23 07:19:27.650400859 +0000 UTC m=+2104.719910086"
Nov 23 07:19:30 crc kubenswrapper[4681]: I1123 07:19:30.664260 4681 generic.go:334] "Generic (PLEG): container finished" podID="eb12f8c7-1ad9-4b96-832a-e2cc9da82987" containerID="bbae3837feda9ea8c724645442affd44c82479bfdd003bd359e54655e0d87bc4" exitCode=0
Nov 23 07:19:30 crc kubenswrapper[4681]: I1123 07:19:30.664333 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2lthg" event={"ID":"eb12f8c7-1ad9-4b96-832a-e2cc9da82987","Type":"ContainerDied","Data":"bbae3837feda9ea8c724645442affd44c82479bfdd003bd359e54655e0d87bc4"}
Nov 23 07:19:30 crc kubenswrapper[4681]: I1123 07:19:30.669007 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 23 07:19:31 crc kubenswrapper[4681]: I1123 07:19:31.674311 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2lthg" event={"ID":"eb12f8c7-1ad9-4b96-832a-e2cc9da82987","Type":"ContainerStarted","Data":"e4d475b381e0c1eb836d2e1e62bbd20337d97b1e75c381971e40965f577d6698"}
Nov 23 07:19:31 crc kubenswrapper[4681]: I1123 07:19:31.693643 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2lthg" podStartSLOduration=2.16265743 podStartE2EDuration="6.69361678s" podCreationTimestamp="2025-11-23 07:19:25 +0000 UTC" firstStartedPulling="2025-11-23 07:19:26.620560195 +0000 UTC m=+2103.690069432" lastFinishedPulling="2025-11-23 07:19:31.151519545 +0000 UTC m=+2108.221028782" observedRunningTime="2025-11-23 07:19:31.688970863 +0000 UTC m=+2108.758480100" watchObservedRunningTime="2025-11-23 07:19:31.69361678 +0000 UTC m=+2108.763126018"
Nov 23 07:19:33 crc kubenswrapper[4681]: I1123 07:19:33.021294 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bz6rr"
Nov 23 07:19:33 crc kubenswrapper[4681]: I1123 07:19:33.021592 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bz6rr"
Nov 23 07:19:33 crc kubenswrapper[4681]: I1123 07:19:33.057697 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bz6rr"
Nov 23 07:19:33 crc kubenswrapper[4681]: I1123 07:19:33.725679 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bz6rr"
Nov 23 07:19:34 crc kubenswrapper[4681]: I1123 07:19:34.896840 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4lntr"]
Nov 23 07:19:34 crc kubenswrapper[4681]: I1123 07:19:34.900388 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4lntr"
Nov 23 07:19:34 crc kubenswrapper[4681]: I1123 07:19:34.917821 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4lntr"]
Nov 23 07:19:34 crc kubenswrapper[4681]: I1123 07:19:34.962393 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcp7p\" (UniqueName: \"kubernetes.io/projected/b2bf206c-fa33-4f50-bd49-4ed73d8cc27e-kube-api-access-tcp7p\") pod \"redhat-marketplace-4lntr\" (UID: \"b2bf206c-fa33-4f50-bd49-4ed73d8cc27e\") " pod="openshift-marketplace/redhat-marketplace-4lntr"
Nov 23 07:19:34 crc kubenswrapper[4681]: I1123 07:19:34.962475 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2bf206c-fa33-4f50-bd49-4ed73d8cc27e-catalog-content\") pod \"redhat-marketplace-4lntr\" (UID: \"b2bf206c-fa33-4f50-bd49-4ed73d8cc27e\") " pod="openshift-marketplace/redhat-marketplace-4lntr"
Nov 23 07:19:34 crc kubenswrapper[4681]: I1123 07:19:34.962637 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2bf206c-fa33-4f50-bd49-4ed73d8cc27e-utilities\") pod \"redhat-marketplace-4lntr\" (UID: \"b2bf206c-fa33-4f50-bd49-4ed73d8cc27e\") " pod="openshift-marketplace/redhat-marketplace-4lntr"
Nov 23 07:19:35 crc kubenswrapper[4681]: I1123 07:19:35.064160 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcp7p\" (UniqueName: \"kubernetes.io/projected/b2bf206c-fa33-4f50-bd49-4ed73d8cc27e-kube-api-access-tcp7p\") pod \"redhat-marketplace-4lntr\" (UID: \"b2bf206c-fa33-4f50-bd49-4ed73d8cc27e\") " pod="openshift-marketplace/redhat-marketplace-4lntr"
Nov 23 07:19:35 crc kubenswrapper[4681]: I1123 07:19:35.064367 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2bf206c-fa33-4f50-bd49-4ed73d8cc27e-catalog-content\") pod \"redhat-marketplace-4lntr\" (UID: \"b2bf206c-fa33-4f50-bd49-4ed73d8cc27e\") " pod="openshift-marketplace/redhat-marketplace-4lntr"
Nov 23 07:19:35 crc kubenswrapper[4681]: I1123 07:19:35.064524 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2bf206c-fa33-4f50-bd49-4ed73d8cc27e-utilities\") pod \"redhat-marketplace-4lntr\" (UID: \"b2bf206c-fa33-4f50-bd49-4ed73d8cc27e\") " pod="openshift-marketplace/redhat-marketplace-4lntr"
Nov 23 07:19:35 crc kubenswrapper[4681]: I1123 07:19:35.064948 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2bf206c-fa33-4f50-bd49-4ed73d8cc27e-catalog-content\") pod \"redhat-marketplace-4lntr\" (UID: \"b2bf206c-fa33-4f50-bd49-4ed73d8cc27e\") " pod="openshift-marketplace/redhat-marketplace-4lntr"
Nov 23 07:19:35 crc kubenswrapper[4681]: I1123 07:19:35.064968 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2bf206c-fa33-4f50-bd49-4ed73d8cc27e-utilities\") pod \"redhat-marketplace-4lntr\" (UID: \"b2bf206c-fa33-4f50-bd49-4ed73d8cc27e\") " pod="openshift-marketplace/redhat-marketplace-4lntr"
Nov 23 07:19:35 crc kubenswrapper[4681]: I1123 07:19:35.089543 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcp7p\" (UniqueName: \"kubernetes.io/projected/b2bf206c-fa33-4f50-bd49-4ed73d8cc27e-kube-api-access-tcp7p\") pod \"redhat-marketplace-4lntr\" (UID: \"b2bf206c-fa33-4f50-bd49-4ed73d8cc27e\") " pod="openshift-marketplace/redhat-marketplace-4lntr"
Nov 23 07:19:35 crc kubenswrapper[4681]: I1123 07:19:35.224729 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4lntr"
Nov 23 07:19:35 crc kubenswrapper[4681]: I1123 07:19:35.722267 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4lntr"]
Nov 23 07:19:35 crc kubenswrapper[4681]: W1123 07:19:35.723035 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2bf206c_fa33_4f50_bd49_4ed73d8cc27e.slice/crio-91eee03e67e9e089fd944f9c555887073779b1887347409aa0837fee4ac9a7ef WatchSource:0}: Error finding container 91eee03e67e9e089fd944f9c555887073779b1887347409aa0837fee4ac9a7ef: Status 404 returned error can't find the container with id 91eee03e67e9e089fd944f9c555887073779b1887347409aa0837fee4ac9a7ef
Nov 23 07:19:35 crc kubenswrapper[4681]: I1123 07:19:35.802062 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2lthg"
Nov 23 07:19:35 crc kubenswrapper[4681]: I1123 07:19:35.802135 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2lthg"
Nov 23 07:19:35 crc kubenswrapper[4681]: I1123 07:19:35.880956 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bz6rr"]
Nov 23 07:19:35 crc kubenswrapper[4681]: I1123 07:19:35.881319 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bz6rr" podUID="b0b51631-fdb1-44cc-b6aa-f40fb651eaa9" containerName="registry-server" containerID="cri-o://8c515a476876510327b7a6b0e137b5c7527c11a196c29b70914353a650eb63f7" gracePeriod=2
Nov 23 07:19:36 crc kubenswrapper[4681]: I1123 07:19:36.321102 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bz6rr"
Nov 23 07:19:36 crc kubenswrapper[4681]: I1123 07:19:36.398495 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzg6z\" (UniqueName: \"kubernetes.io/projected/b0b51631-fdb1-44cc-b6aa-f40fb651eaa9-kube-api-access-qzg6z\") pod \"b0b51631-fdb1-44cc-b6aa-f40fb651eaa9\" (UID: \"b0b51631-fdb1-44cc-b6aa-f40fb651eaa9\") "
Nov 23 07:19:36 crc kubenswrapper[4681]: I1123 07:19:36.398680 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0b51631-fdb1-44cc-b6aa-f40fb651eaa9-catalog-content\") pod \"b0b51631-fdb1-44cc-b6aa-f40fb651eaa9\" (UID: \"b0b51631-fdb1-44cc-b6aa-f40fb651eaa9\") "
Nov 23 07:19:36 crc kubenswrapper[4681]: I1123 07:19:36.398972 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0b51631-fdb1-44cc-b6aa-f40fb651eaa9-utilities\") pod \"b0b51631-fdb1-44cc-b6aa-f40fb651eaa9\" (UID: \"b0b51631-fdb1-44cc-b6aa-f40fb651eaa9\") "
Nov 23 07:19:36 crc kubenswrapper[4681]: I1123 07:19:36.399546 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0b51631-fdb1-44cc-b6aa-f40fb651eaa9-utilities" (OuterVolumeSpecName: "utilities") pod "b0b51631-fdb1-44cc-b6aa-f40fb651eaa9" (UID: "b0b51631-fdb1-44cc-b6aa-f40fb651eaa9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:19:36 crc kubenswrapper[4681]: I1123 07:19:36.405708 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0b51631-fdb1-44cc-b6aa-f40fb651eaa9-kube-api-access-qzg6z" (OuterVolumeSpecName: "kube-api-access-qzg6z") pod "b0b51631-fdb1-44cc-b6aa-f40fb651eaa9" (UID: "b0b51631-fdb1-44cc-b6aa-f40fb651eaa9"). InnerVolumeSpecName "kube-api-access-qzg6z". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:19:36 crc kubenswrapper[4681]: I1123 07:19:36.500905 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0b51631-fdb1-44cc-b6aa-f40fb651eaa9-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:19:36 crc kubenswrapper[4681]: I1123 07:19:36.501087 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzg6z\" (UniqueName: \"kubernetes.io/projected/b0b51631-fdb1-44cc-b6aa-f40fb651eaa9-kube-api-access-qzg6z\") on node \"crc\" DevicePath \"\"" Nov 23 07:19:36 crc kubenswrapper[4681]: I1123 07:19:36.501162 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0b51631-fdb1-44cc-b6aa-f40fb651eaa9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:19:36 crc kubenswrapper[4681]: I1123 07:19:36.796628 4681 generic.go:334] "Generic (PLEG): container finished" podID="b0b51631-fdb1-44cc-b6aa-f40fb651eaa9" containerID="8c515a476876510327b7a6b0e137b5c7527c11a196c29b70914353a650eb63f7" exitCode=0 Nov 23 07:19:36 crc kubenswrapper[4681]: I1123 07:19:36.796918 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bz6rr" event={"ID":"b0b51631-fdb1-44cc-b6aa-f40fb651eaa9","Type":"ContainerDied","Data":"8c515a476876510327b7a6b0e137b5c7527c11a196c29b70914353a650eb63f7"} Nov 23 07:19:36 crc kubenswrapper[4681]: I1123 07:19:36.797010 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bz6rr" event={"ID":"b0b51631-fdb1-44cc-b6aa-f40fb651eaa9","Type":"ContainerDied","Data":"4b2698e90494e54b2d613c01d12f7bbd8f6c74e27534b8f3e0a965859745b186"} Nov 23 07:19:36 crc kubenswrapper[4681]: I1123 07:19:36.797077 4681 scope.go:117] "RemoveContainer" containerID="8c515a476876510327b7a6b0e137b5c7527c11a196c29b70914353a650eb63f7" Nov 23 07:19:36 crc kubenswrapper[4681]: I1123 07:19:36.797305 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bz6rr" Nov 23 07:19:36 crc kubenswrapper[4681]: I1123 07:19:36.820129 4681 generic.go:334] "Generic (PLEG): container finished" podID="b2bf206c-fa33-4f50-bd49-4ed73d8cc27e" containerID="1795e06d37e8649ac2b5eeeb4b3eda489c169ee7001681a647732b677a9353e6" exitCode=0 Nov 23 07:19:36 crc kubenswrapper[4681]: I1123 07:19:36.820173 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4lntr" event={"ID":"b2bf206c-fa33-4f50-bd49-4ed73d8cc27e","Type":"ContainerDied","Data":"1795e06d37e8649ac2b5eeeb4b3eda489c169ee7001681a647732b677a9353e6"} Nov 23 07:19:36 crc kubenswrapper[4681]: I1123 07:19:36.820198 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4lntr" event={"ID":"b2bf206c-fa33-4f50-bd49-4ed73d8cc27e","Type":"ContainerStarted","Data":"91eee03e67e9e089fd944f9c555887073779b1887347409aa0837fee4ac9a7ef"} Nov 23 07:19:36 crc kubenswrapper[4681]: I1123 07:19:36.852247 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2lthg" podUID="eb12f8c7-1ad9-4b96-832a-e2cc9da82987" containerName="registry-server" probeResult="failure" output=< Nov 23 07:19:36 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Nov 23 07:19:36 crc kubenswrapper[4681]: > Nov 23 07:19:36 crc kubenswrapper[4681]: I1123 07:19:36.855551 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bz6rr"] Nov 23 07:19:36 crc kubenswrapper[4681]: I1123 07:19:36.857599 4681 scope.go:117] "RemoveContainer" containerID="d64be1ee65602e65a0d0e38dac98bd58546d455109b7d1a877e4fc74446429a4" Nov 23 07:19:36 crc kubenswrapper[4681]: I1123 07:19:36.880544 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bz6rr"] Nov 23 07:19:36 crc kubenswrapper[4681]: I1123 07:19:36.917894 4681 scope.go:117] "RemoveContainer" containerID="7d9c0df0a854cd3cfbbda4cfdc597c2e07ddc36e2cc7b538deaca835c2f4c9d5" Nov 23 07:19:36 crc kubenswrapper[4681]: I1123 07:19:36.951416 4681 scope.go:117] "RemoveContainer" containerID="8c515a476876510327b7a6b0e137b5c7527c11a196c29b70914353a650eb63f7" Nov 23 07:19:36 crc kubenswrapper[4681]: E1123 07:19:36.953809 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c515a476876510327b7a6b0e137b5c7527c11a196c29b70914353a650eb63f7\": container with ID starting with 8c515a476876510327b7a6b0e137b5c7527c11a196c29b70914353a650eb63f7 not found: ID does not exist" containerID="8c515a476876510327b7a6b0e137b5c7527c11a196c29b70914353a650eb63f7" Nov 23 07:19:36 crc kubenswrapper[4681]: I1123 07:19:36.953847 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c515a476876510327b7a6b0e137b5c7527c11a196c29b70914353a650eb63f7"} err="failed to get container status \"8c515a476876510327b7a6b0e137b5c7527c11a196c29b70914353a650eb63f7\": rpc error: code = NotFound desc = could not find container \"8c515a476876510327b7a6b0e137b5c7527c11a196c29b70914353a650eb63f7\": container with ID starting with 8c515a476876510327b7a6b0e137b5c7527c11a196c29b70914353a650eb63f7 not found: ID does not exist" Nov 23 07:19:36 crc kubenswrapper[4681]: I1123 07:19:36.953870 4681 scope.go:117] "RemoveContainer" containerID="d64be1ee65602e65a0d0e38dac98bd58546d455109b7d1a877e4fc74446429a4" Nov 23 07:19:36 crc kubenswrapper[4681]: 
Nov 23 07:19:36 crc kubenswrapper[4681]: I1123 07:19:36.954241 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d64be1ee65602e65a0d0e38dac98bd58546d455109b7d1a877e4fc74446429a4"} err="failed to get container status \"d64be1ee65602e65a0d0e38dac98bd58546d455109b7d1a877e4fc74446429a4\": rpc error: code = NotFound desc = could not find container \"d64be1ee65602e65a0d0e38dac98bd58546d455109b7d1a877e4fc74446429a4\": container with ID starting with d64be1ee65602e65a0d0e38dac98bd58546d455109b7d1a877e4fc74446429a4 not found: ID does not exist"
Nov 23 07:19:36 crc kubenswrapper[4681]: I1123 07:19:36.954283 4681 scope.go:117] "RemoveContainer" containerID="7d9c0df0a854cd3cfbbda4cfdc597c2e07ddc36e2cc7b538deaca835c2f4c9d5"
Nov 23 07:19:36 crc kubenswrapper[4681]: E1123 07:19:36.954611 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d9c0df0a854cd3cfbbda4cfdc597c2e07ddc36e2cc7b538deaca835c2f4c9d5\": container with ID starting with 7d9c0df0a854cd3cfbbda4cfdc597c2e07ddc36e2cc7b538deaca835c2f4c9d5 not found: ID does not exist" containerID="7d9c0df0a854cd3cfbbda4cfdc597c2e07ddc36e2cc7b538deaca835c2f4c9d5"
Nov 23 07:19:36 crc kubenswrapper[4681]: I1123 07:19:36.954653 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d9c0df0a854cd3cfbbda4cfdc597c2e07ddc36e2cc7b538deaca835c2f4c9d5"} err="failed to get container status \"7d9c0df0a854cd3cfbbda4cfdc597c2e07ddc36e2cc7b538deaca835c2f4c9d5\": rpc error: code = NotFound desc = could not find container \"7d9c0df0a854cd3cfbbda4cfdc597c2e07ddc36e2cc7b538deaca835c2f4c9d5\": container with ID starting with 7d9c0df0a854cd3cfbbda4cfdc597c2e07ddc36e2cc7b538deaca835c2f4c9d5 not found: ID does not exist"
Nov 23 07:19:37 crc kubenswrapper[4681]: I1123 07:19:37.264515 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0b51631-fdb1-44cc-b6aa-f40fb651eaa9" path="/var/lib/kubelet/pods/b0b51631-fdb1-44cc-b6aa-f40fb651eaa9/volumes"
Nov 23 07:19:38 crc kubenswrapper[4681]: I1123 07:19:38.841823 4681 generic.go:334] "Generic (PLEG): container finished" podID="b2bf206c-fa33-4f50-bd49-4ed73d8cc27e" containerID="a0ebe7253a59a4dd82ebdf323255537e64d097bba1e8222729ec131ceb4f3ea9" exitCode=0
Nov 23 07:19:38 crc kubenswrapper[4681]: I1123 07:19:38.841860 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4lntr" event={"ID":"b2bf206c-fa33-4f50-bd49-4ed73d8cc27e","Type":"ContainerDied","Data":"a0ebe7253a59a4dd82ebdf323255537e64d097bba1e8222729ec131ceb4f3ea9"}
Nov 23 07:19:39 crc kubenswrapper[4681]: I1123 07:19:39.853743 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4lntr" event={"ID":"b2bf206c-fa33-4f50-bd49-4ed73d8cc27e","Type":"ContainerStarted","Data":"8095d72f00ee5a0d53c88cc2bd309d894297db4bad6864599ce1f8268e149298"}
Nov 23 07:19:39 crc kubenswrapper[4681]: I1123 07:19:39.872118 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4lntr" podStartSLOduration=3.35543868 podStartE2EDuration="5.872101571s" podCreationTimestamp="2025-11-23 07:19:34 +0000 UTC" firstStartedPulling="2025-11-23 07:19:36.821771524 +0000 UTC m=+2113.891280762" lastFinishedPulling="2025-11-23 07:19:39.338434416 +0000 UTC m=+2116.407943653" observedRunningTime="2025-11-23 07:19:39.870874387 +0000 UTC m=+2116.940383624" watchObservedRunningTime="2025-11-23 07:19:39.872101571 +0000 UTC m=+2116.941610809"
Nov 23 07:19:42 crc kubenswrapper[4681]: I1123 07:19:42.881490 4681 generic.go:334] "Generic (PLEG): container finished" podID="ce2476fd-41d6-4382-82ff-bee6fe90f88c" containerID="7f3b4d10ce1b5423a237d0653a60f2c551284ddd977c595795e23e67d48b1d38" exitCode=0
Nov 23 07:19:42 crc kubenswrapper[4681]: I1123 07:19:42.881604 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq" event={"ID":"ce2476fd-41d6-4382-82ff-bee6fe90f88c","Type":"ContainerDied","Data":"7f3b4d10ce1b5423a237d0653a60f2c551284ddd977c595795e23e67d48b1d38"}
Nov 23 07:19:44 crc kubenswrapper[4681]: I1123 07:19:44.259014 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq"
Nov 23 07:19:44 crc kubenswrapper[4681]: I1123 07:19:44.358235 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-nova-migration-ssh-key-1\") pod \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\" (UID: \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\") "
Nov 23 07:19:44 crc kubenswrapper[4681]: I1123 07:19:44.358322 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-px4z6\" (UniqueName: \"kubernetes.io/projected/ce2476fd-41d6-4382-82ff-bee6fe90f88c-kube-api-access-px4z6\") pod \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\" (UID: \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\") "
Nov 23 07:19:44 crc kubenswrapper[4681]: I1123 07:19:44.358365 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-nova-cell1-compute-config-0\") pod \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\" (UID: \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\") "
Nov 23 07:19:44 crc kubenswrapper[4681]: I1123 07:19:44.358408 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-nova-cell1-compute-config-1\") pod \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\" (UID: \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\") "
Nov 23 07:19:44 crc kubenswrapper[4681]: I1123 07:19:44.358438 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-nova-combined-ca-bundle\") pod \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\" (UID: \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\") "
Nov 23 07:19:44 crc kubenswrapper[4681]: I1123 07:19:44.358618 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-inventory\") pod \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\" (UID: \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\") "
Nov 23 07:19:44 crc kubenswrapper[4681]: I1123 07:19:44.358638 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-ssh-key\") pod \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\" (UID: \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\") "
Nov 23 07:19:44 crc kubenswrapper[4681]: I1123 07:19:44.365161 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce2476fd-41d6-4382-82ff-bee6fe90f88c-kube-api-access-px4z6" (OuterVolumeSpecName: "kube-api-access-px4z6") pod "ce2476fd-41d6-4382-82ff-bee6fe90f88c" (UID: "ce2476fd-41d6-4382-82ff-bee6fe90f88c"). InnerVolumeSpecName "kube-api-access-px4z6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:19:44 crc kubenswrapper[4681]: I1123 07:19:44.383100 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "ce2476fd-41d6-4382-82ff-bee6fe90f88c" (UID: "ce2476fd-41d6-4382-82ff-bee6fe90f88c"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:19:44 crc kubenswrapper[4681]: I1123 07:19:44.389514 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "ce2476fd-41d6-4382-82ff-bee6fe90f88c" (UID: "ce2476fd-41d6-4382-82ff-bee6fe90f88c"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:19:44 crc kubenswrapper[4681]: I1123 07:19:44.390643 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "ce2476fd-41d6-4382-82ff-bee6fe90f88c" (UID: "ce2476fd-41d6-4382-82ff-bee6fe90f88c"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:19:44 crc kubenswrapper[4681]: I1123 07:19:44.392506 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-inventory" (OuterVolumeSpecName: "inventory") pod "ce2476fd-41d6-4382-82ff-bee6fe90f88c" (UID: "ce2476fd-41d6-4382-82ff-bee6fe90f88c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:19:44 crc kubenswrapper[4681]: I1123 07:19:44.393942 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "ce2476fd-41d6-4382-82ff-bee6fe90f88c" (UID: "ce2476fd-41d6-4382-82ff-bee6fe90f88c"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:19:44 crc kubenswrapper[4681]: I1123 07:19:44.406189 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ce2476fd-41d6-4382-82ff-bee6fe90f88c" (UID: "ce2476fd-41d6-4382-82ff-bee6fe90f88c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:19:44 crc kubenswrapper[4681]: I1123 07:19:44.461028 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ce2476fd-41d6-4382-82ff-bee6fe90f88c-nova-extra-config-0\") pod \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\" (UID: \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\") "
Nov 23 07:19:44 crc kubenswrapper[4681]: I1123 07:19:44.461070 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-nova-migration-ssh-key-0\") pod \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\" (UID: \"ce2476fd-41d6-4382-82ff-bee6fe90f88c\") "
Nov 23 07:19:44 crc kubenswrapper[4681]: I1123 07:19:44.461667 4681 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Nov 23 07:19:44 crc kubenswrapper[4681]: I1123 07:19:44.461685 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-px4z6\" (UniqueName: \"kubernetes.io/projected/ce2476fd-41d6-4382-82ff-bee6fe90f88c-kube-api-access-px4z6\") on node \"crc\" DevicePath \"\""
Nov 23 07:19:44 crc kubenswrapper[4681]: I1123 07:19:44.461694 4681 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Nov 23 07:19:44 crc kubenswrapper[4681]: I1123 07:19:44.461705 4681 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Nov 23 07:19:44 crc kubenswrapper[4681]: I1123 07:19:44.461716 4681 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 07:19:44 crc kubenswrapper[4681]: I1123 07:19:44.461725 4681 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-inventory\") on node \"crc\" DevicePath \"\""
Nov 23 07:19:44 crc kubenswrapper[4681]: I1123 07:19:44.461732 4681 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 23 07:19:44 crc kubenswrapper[4681]: I1123 07:19:44.482008 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce2476fd-41d6-4382-82ff-bee6fe90f88c-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "ce2476fd-41d6-4382-82ff-bee6fe90f88c" (UID: "ce2476fd-41d6-4382-82ff-bee6fe90f88c"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:19:44 crc kubenswrapper[4681]: I1123 07:19:44.486692 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "ce2476fd-41d6-4382-82ff-bee6fe90f88c" (UID: "ce2476fd-41d6-4382-82ff-bee6fe90f88c"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:19:44 crc kubenswrapper[4681]: I1123 07:19:44.562890 4681 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ce2476fd-41d6-4382-82ff-bee6fe90f88c-nova-extra-config-0\") on node \"crc\" DevicePath \"\""
Nov 23 07:19:44 crc kubenswrapper[4681]: I1123 07:19:44.562918 4681 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ce2476fd-41d6-4382-82ff-bee6fe90f88c-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Nov 23 07:19:44 crc kubenswrapper[4681]: I1123 07:19:44.901899 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq" event={"ID":"ce2476fd-41d6-4382-82ff-bee6fe90f88c","Type":"ContainerDied","Data":"af22c1aa5c4698c3fefe64c9fd88d814d8823d1b77532eda06253099d3ff056e"}
Nov 23 07:19:44 crc kubenswrapper[4681]: I1123 07:19:44.901941 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af22c1aa5c4698c3fefe64c9fd88d814d8823d1b77532eda06253099d3ff056e"
Nov 23 07:19:44 crc kubenswrapper[4681]: I1123 07:19:44.902041 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6j7xq"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.038369 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-962xt"]
Nov 23 07:19:45 crc kubenswrapper[4681]: E1123 07:19:45.038884 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0b51631-fdb1-44cc-b6aa-f40fb651eaa9" containerName="registry-server"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.038902 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0b51631-fdb1-44cc-b6aa-f40fb651eaa9" containerName="registry-server"
Nov 23 07:19:45 crc kubenswrapper[4681]: E1123 07:19:45.038919 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0b51631-fdb1-44cc-b6aa-f40fb651eaa9" containerName="extract-utilities"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.038926 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0b51631-fdb1-44cc-b6aa-f40fb651eaa9" containerName="extract-utilities"
Nov 23 07:19:45 crc kubenswrapper[4681]: E1123 07:19:45.038939 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0b51631-fdb1-44cc-b6aa-f40fb651eaa9" containerName="extract-content"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.038946 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0b51631-fdb1-44cc-b6aa-f40fb651eaa9" containerName="extract-content"
Nov 23 07:19:45 crc kubenswrapper[4681]: E1123 07:19:45.038958 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce2476fd-41d6-4382-82ff-bee6fe90f88c" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.038965 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce2476fd-41d6-4382-82ff-bee6fe90f88c" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.039132 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce2476fd-41d6-4382-82ff-bee6fe90f88c" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.039151 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0b51631-fdb1-44cc-b6aa-f40fb651eaa9" containerName="registry-server"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.039846 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-962xt"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.043877 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.044003 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rchgk"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.044145 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.044186 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.045738 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.048110 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-962xt"]
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.073509 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-962xt\" (UID: \"d0b4fd7e-9fe1-4f06-b005-77ae1425078b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-962xt"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.073643 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-962xt\" (UID: \"d0b4fd7e-9fe1-4f06-b005-77ae1425078b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-962xt"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.073813 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-962xt\" (UID: \"d0b4fd7e-9fe1-4f06-b005-77ae1425078b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-962xt"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.073906 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-962xt\" (UID: \"d0b4fd7e-9fe1-4f06-b005-77ae1425078b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-962xt"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.074060 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq6m5\" (UniqueName: \"kubernetes.io/projected/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-kube-api-access-mq6m5\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-962xt\" (UID: \"d0b4fd7e-9fe1-4f06-b005-77ae1425078b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-962xt"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.074224 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-962xt\" (UID: \"d0b4fd7e-9fe1-4f06-b005-77ae1425078b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-962xt"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.074329 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-962xt\" (UID: \"d0b4fd7e-9fe1-4f06-b005-77ae1425078b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-962xt"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.175731 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-962xt\" (UID: \"d0b4fd7e-9fe1-4f06-b005-77ae1425078b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-962xt"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.175778 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-962xt\" (UID: \"d0b4fd7e-9fe1-4f06-b005-77ae1425078b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-962xt"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.175851 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-962xt\" (UID: \"d0b4fd7e-9fe1-4f06-b005-77ae1425078b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-962xt"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.175875 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-962xt\" (UID: \"d0b4fd7e-9fe1-4f06-b005-77ae1425078b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-962xt"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.175907 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq6m5\" (UniqueName: \"kubernetes.io/projected/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-kube-api-access-mq6m5\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-962xt\" (UID: \"d0b4fd7e-9fe1-4f06-b005-77ae1425078b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-962xt"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.175938 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-962xt\" (UID: \"d0b4fd7e-9fe1-4f06-b005-77ae1425078b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-962xt"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.175964 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-962xt\" (UID: \"d0b4fd7e-9fe1-4f06-b005-77ae1425078b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-962xt"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.181339 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-962xt\" (UID: \"d0b4fd7e-9fe1-4f06-b005-77ae1425078b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-962xt"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.182032 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-962xt\" (UID: \"d0b4fd7e-9fe1-4f06-b005-77ae1425078b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-962xt"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.182636 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-962xt\" (UID: \"d0b4fd7e-9fe1-4f06-b005-77ae1425078b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-962xt"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.183162 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-962xt\" (UID: \"d0b4fd7e-9fe1-4f06-b005-77ae1425078b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-962xt"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.185130 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-962xt\" (UID: \"d0b4fd7e-9fe1-4f06-b005-77ae1425078b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-962xt"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.186201 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-962xt\" (UID: \"d0b4fd7e-9fe1-4f06-b005-77ae1425078b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-962xt"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.190931 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq6m5\" (UniqueName: \"kubernetes.io/projected/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-kube-api-access-mq6m5\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-962xt\" (UID: \"d0b4fd7e-9fe1-4f06-b005-77ae1425078b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-962xt"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.225119 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4lntr"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.225182 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4lntr"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.270328 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4lntr"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.374500 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-962xt"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.847174 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2lthg"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.898849 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2lthg"
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.904749 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-962xt"]
Nov 23 07:19:45 crc kubenswrapper[4681]: I1123 07:19:45.954707 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4lntr"
Nov 23 07:19:46 crc kubenswrapper[4681]: I1123 07:19:46.902340 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2lthg"]
Nov 23 07:19:46 crc kubenswrapper[4681]: I1123 07:19:46.918797 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-962xt" event={"ID":"d0b4fd7e-9fe1-4f06-b005-77ae1425078b","Type":"ContainerStarted","Data":"f0758011c263479a2bdf0d328446c543df14dc7d9c86b39a954eb27d59931df3"}
Nov 23 07:19:46 crc kubenswrapper[4681]: I1123 07:19:46.918834 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-962xt" event={"ID":"d0b4fd7e-9fe1-4f06-b005-77ae1425078b","Type":"ContainerStarted","Data":"743eba15df11380070ba9c94f1b7bafbcf1b6fc50dafad26082b6c3eb4b74486"}
Nov 23 07:19:46 crc kubenswrapper[4681]: I1123 07:19:46.919173 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2lthg" podUID="eb12f8c7-1ad9-4b96-832a-e2cc9da82987" containerName="registry-server" containerID="cri-o://e4d475b381e0c1eb836d2e1e62bbd20337d97b1e75c381971e40965f577d6698" gracePeriod=2
Nov 23 07:19:46 crc kubenswrapper[4681]: I1123 07:19:46.937059 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-962xt" podStartSLOduration=1.332705424 podStartE2EDuration="1.93703809s" podCreationTimestamp="2025-11-23 07:19:45 +0000 UTC" firstStartedPulling="2025-11-23 07:19:45.913159675 +0000 UTC m=+2122.982668913" lastFinishedPulling="2025-11-23 07:19:46.517492343 +0000 UTC m=+2123.587001579" observedRunningTime="2025-11-23 07:19:46.933571365 +0000 UTC m=+2124.003080602" watchObservedRunningTime="2025-11-23 07:19:46.93703809 +0000 UTC m=+2124.006547328"
Nov 23 07:19:47 crc kubenswrapper[4681]: I1123 07:19:47.313601 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2lthg"
Nov 23 07:19:47 crc kubenswrapper[4681]: I1123 07:19:47.327943 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb12f8c7-1ad9-4b96-832a-e2cc9da82987-utilities\") pod \"eb12f8c7-1ad9-4b96-832a-e2cc9da82987\" (UID: \"eb12f8c7-1ad9-4b96-832a-e2cc9da82987\") "
Nov 23 07:19:47 crc kubenswrapper[4681]: I1123 07:19:47.328075 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb12f8c7-1ad9-4b96-832a-e2cc9da82987-catalog-content\") pod \"eb12f8c7-1ad9-4b96-832a-e2cc9da82987\" (UID: \"eb12f8c7-1ad9-4b96-832a-e2cc9da82987\") "
Nov 23 07:19:47 crc kubenswrapper[4681]: I1123 07:19:47.328150 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctlxk\" (UniqueName: \"kubernetes.io/projected/eb12f8c7-1ad9-4b96-832a-e2cc9da82987-kube-api-access-ctlxk\") pod \"eb12f8c7-1ad9-4b96-832a-e2cc9da82987\" (UID: \"eb12f8c7-1ad9-4b96-832a-e2cc9da82987\") "
Nov 23 07:19:47 crc kubenswrapper[4681]: I1123 07:19:47.328528 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb12f8c7-1ad9-4b96-832a-e2cc9da82987-utilities" (OuterVolumeSpecName: "utilities") pod "eb12f8c7-1ad9-4b96-832a-e2cc9da82987" (UID: "eb12f8c7-1ad9-4b96-832a-e2cc9da82987"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:19:47 crc kubenswrapper[4681]: I1123 07:19:47.329153 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb12f8c7-1ad9-4b96-832a-e2cc9da82987-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 07:19:47 crc kubenswrapper[4681]: I1123 07:19:47.333738 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb12f8c7-1ad9-4b96-832a-e2cc9da82987-kube-api-access-ctlxk" (OuterVolumeSpecName: "kube-api-access-ctlxk") pod "eb12f8c7-1ad9-4b96-832a-e2cc9da82987" (UID: "eb12f8c7-1ad9-4b96-832a-e2cc9da82987"). InnerVolumeSpecName "kube-api-access-ctlxk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:19:47 crc kubenswrapper[4681]: I1123 07:19:47.400336 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb12f8c7-1ad9-4b96-832a-e2cc9da82987-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb12f8c7-1ad9-4b96-832a-e2cc9da82987" (UID: "eb12f8c7-1ad9-4b96-832a-e2cc9da82987"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:19:47 crc kubenswrapper[4681]: I1123 07:19:47.430688 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb12f8c7-1ad9-4b96-832a-e2cc9da82987-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:19:47 crc kubenswrapper[4681]: I1123 07:19:47.430715 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctlxk\" (UniqueName: \"kubernetes.io/projected/eb12f8c7-1ad9-4b96-832a-e2cc9da82987-kube-api-access-ctlxk\") on node \"crc\" DevicePath \"\"" Nov 23 07:19:47 crc kubenswrapper[4681]: I1123 07:19:47.930117 4681 generic.go:334] "Generic (PLEG): container finished" podID="eb12f8c7-1ad9-4b96-832a-e2cc9da82987" containerID="e4d475b381e0c1eb836d2e1e62bbd20337d97b1e75c381971e40965f577d6698" exitCode=0 Nov 23 07:19:47 crc kubenswrapper[4681]: I1123 07:19:47.930301 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2lthg" event={"ID":"eb12f8c7-1ad9-4b96-832a-e2cc9da82987","Type":"ContainerDied","Data":"e4d475b381e0c1eb836d2e1e62bbd20337d97b1e75c381971e40965f577d6698"} Nov 23 07:19:47 crc kubenswrapper[4681]: I1123 07:19:47.931101 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2lthg" event={"ID":"eb12f8c7-1ad9-4b96-832a-e2cc9da82987","Type":"ContainerDied","Data":"4c038a85c5f59500676a1acfae07b9bed2fe7cf11e3535eb4bace096b5d8eb0f"} Nov 23 07:19:47 crc kubenswrapper[4681]: I1123 07:19:47.931128 4681 scope.go:117] "RemoveContainer" containerID="e4d475b381e0c1eb836d2e1e62bbd20337d97b1e75c381971e40965f577d6698" Nov 23 07:19:47 crc kubenswrapper[4681]: I1123 07:19:47.931251 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2lthg" Nov 23 07:19:47 crc kubenswrapper[4681]: I1123 07:19:47.957712 4681 scope.go:117] "RemoveContainer" containerID="bbae3837feda9ea8c724645442affd44c82479bfdd003bd359e54655e0d87bc4" Nov 23 07:19:47 crc kubenswrapper[4681]: I1123 07:19:47.961163 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2lthg"] Nov 23 07:19:47 crc kubenswrapper[4681]: I1123 07:19:47.966424 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2lthg"] Nov 23 07:19:47 crc kubenswrapper[4681]: I1123 07:19:47.977379 4681 scope.go:117] "RemoveContainer" containerID="91dd79d2fd7f50939471c08f95ca3157a6a673735c51754ca1eb08915a5c0620" Nov 23 07:19:48 crc kubenswrapper[4681]: I1123 07:19:48.024364 4681 scope.go:117] "RemoveContainer" containerID="e4d475b381e0c1eb836d2e1e62bbd20337d97b1e75c381971e40965f577d6698" Nov 23 07:19:48 crc kubenswrapper[4681]: E1123 07:19:48.024856 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4d475b381e0c1eb836d2e1e62bbd20337d97b1e75c381971e40965f577d6698\": container with ID starting with e4d475b381e0c1eb836d2e1e62bbd20337d97b1e75c381971e40965f577d6698 not found: ID does not exist" containerID="e4d475b381e0c1eb836d2e1e62bbd20337d97b1e75c381971e40965f577d6698" Nov 23 07:19:48 crc kubenswrapper[4681]: I1123 07:19:48.024885 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4d475b381e0c1eb836d2e1e62bbd20337d97b1e75c381971e40965f577d6698"} err="failed to get container status \"e4d475b381e0c1eb836d2e1e62bbd20337d97b1e75c381971e40965f577d6698\": rpc error: code = NotFound desc = could not find container \"e4d475b381e0c1eb836d2e1e62bbd20337d97b1e75c381971e40965f577d6698\": container with ID starting with e4d475b381e0c1eb836d2e1e62bbd20337d97b1e75c381971e40965f577d6698 not found: ID does not exist" Nov 23 07:19:48 crc kubenswrapper[4681]: I1123 07:19:48.024913 4681 scope.go:117] "RemoveContainer" containerID="bbae3837feda9ea8c724645442affd44c82479bfdd003bd359e54655e0d87bc4" Nov 23 07:19:48 crc kubenswrapper[4681]: E1123 07:19:48.025262 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbae3837feda9ea8c724645442affd44c82479bfdd003bd359e54655e0d87bc4\": container with ID starting with bbae3837feda9ea8c724645442affd44c82479bfdd003bd359e54655e0d87bc4 not found: ID does not exist" containerID="bbae3837feda9ea8c724645442affd44c82479bfdd003bd359e54655e0d87bc4" Nov 23 07:19:48 crc kubenswrapper[4681]: I1123 07:19:48.025283 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbae3837feda9ea8c724645442affd44c82479bfdd003bd359e54655e0d87bc4"} err="failed to get container status \"bbae3837feda9ea8c724645442affd44c82479bfdd003bd359e54655e0d87bc4\": rpc error: code = NotFound desc = could not find container \"bbae3837feda9ea8c724645442affd44c82479bfdd003bd359e54655e0d87bc4\": container with ID starting with bbae3837feda9ea8c724645442affd44c82479bfdd003bd359e54655e0d87bc4 not found: ID does not exist" Nov 23 07:19:48 crc kubenswrapper[4681]: I1123 07:19:48.025298 4681 scope.go:117] "RemoveContainer" containerID="91dd79d2fd7f50939471c08f95ca3157a6a673735c51754ca1eb08915a5c0620" Nov 23 07:19:48 crc kubenswrapper[4681]: E1123 07:19:48.025585 4681 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"91dd79d2fd7f50939471c08f95ca3157a6a673735c51754ca1eb08915a5c0620\": container with ID starting with 91dd79d2fd7f50939471c08f95ca3157a6a673735c51754ca1eb08915a5c0620 not found: ID does not exist" containerID="91dd79d2fd7f50939471c08f95ca3157a6a673735c51754ca1eb08915a5c0620" Nov 23 07:19:48 crc kubenswrapper[4681]: I1123 07:19:48.025606 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91dd79d2fd7f50939471c08f95ca3157a6a673735c51754ca1eb08915a5c0620"} err="failed to get container status \"91dd79d2fd7f50939471c08f95ca3157a6a673735c51754ca1eb08915a5c0620\": rpc error: code = NotFound desc = could not find container \"91dd79d2fd7f50939471c08f95ca3157a6a673735c51754ca1eb08915a5c0620\": container with ID starting with 91dd79d2fd7f50939471c08f95ca3157a6a673735c51754ca1eb08915a5c0620 not found: ID does not exist" Nov 23 07:19:48 crc kubenswrapper[4681]: I1123 07:19:48.306113 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4lntr"] Nov 23 07:19:48 crc kubenswrapper[4681]: I1123 07:19:48.306342 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4lntr" podUID="b2bf206c-fa33-4f50-bd49-4ed73d8cc27e" containerName="registry-server" containerID="cri-o://8095d72f00ee5a0d53c88cc2bd309d894297db4bad6864599ce1f8268e149298" gracePeriod=2 Nov 23 07:19:48 crc kubenswrapper[4681]: I1123 07:19:48.694081 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4lntr" Nov 23 07:19:48 crc kubenswrapper[4681]: I1123 07:19:48.760699 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2bf206c-fa33-4f50-bd49-4ed73d8cc27e-catalog-content\") pod \"b2bf206c-fa33-4f50-bd49-4ed73d8cc27e\" (UID: \"b2bf206c-fa33-4f50-bd49-4ed73d8cc27e\") " Nov 23 07:19:48 crc kubenswrapper[4681]: I1123 07:19:48.760753 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcp7p\" (UniqueName: \"kubernetes.io/projected/b2bf206c-fa33-4f50-bd49-4ed73d8cc27e-kube-api-access-tcp7p\") pod \"b2bf206c-fa33-4f50-bd49-4ed73d8cc27e\" (UID: \"b2bf206c-fa33-4f50-bd49-4ed73d8cc27e\") " Nov 23 07:19:48 crc kubenswrapper[4681]: I1123 07:19:48.760905 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2bf206c-fa33-4f50-bd49-4ed73d8cc27e-utilities\") pod \"b2bf206c-fa33-4f50-bd49-4ed73d8cc27e\" (UID: \"b2bf206c-fa33-4f50-bd49-4ed73d8cc27e\") " Nov 23 07:19:48 crc kubenswrapper[4681]: I1123 07:19:48.762297 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2bf206c-fa33-4f50-bd49-4ed73d8cc27e-utilities" (OuterVolumeSpecName: "utilities") pod "b2bf206c-fa33-4f50-bd49-4ed73d8cc27e" (UID: "b2bf206c-fa33-4f50-bd49-4ed73d8cc27e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:19:48 crc kubenswrapper[4681]: I1123 07:19:48.768779 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2bf206c-fa33-4f50-bd49-4ed73d8cc27e-kube-api-access-tcp7p" (OuterVolumeSpecName: "kube-api-access-tcp7p") pod "b2bf206c-fa33-4f50-bd49-4ed73d8cc27e" (UID: "b2bf206c-fa33-4f50-bd49-4ed73d8cc27e"). 
InnerVolumeSpecName "kube-api-access-tcp7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:19:48 crc kubenswrapper[4681]: I1123 07:19:48.780581 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2bf206c-fa33-4f50-bd49-4ed73d8cc27e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b2bf206c-fa33-4f50-bd49-4ed73d8cc27e" (UID: "b2bf206c-fa33-4f50-bd49-4ed73d8cc27e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:19:48 crc kubenswrapper[4681]: I1123 07:19:48.863149 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2bf206c-fa33-4f50-bd49-4ed73d8cc27e-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:19:48 crc kubenswrapper[4681]: I1123 07:19:48.863181 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2bf206c-fa33-4f50-bd49-4ed73d8cc27e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:19:48 crc kubenswrapper[4681]: I1123 07:19:48.863194 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcp7p\" (UniqueName: \"kubernetes.io/projected/b2bf206c-fa33-4f50-bd49-4ed73d8cc27e-kube-api-access-tcp7p\") on node \"crc\" DevicePath \"\"" Nov 23 07:19:48 crc kubenswrapper[4681]: I1123 07:19:48.941591 4681 generic.go:334] "Generic (PLEG): container finished" podID="b2bf206c-fa33-4f50-bd49-4ed73d8cc27e" containerID="8095d72f00ee5a0d53c88cc2bd309d894297db4bad6864599ce1f8268e149298" exitCode=0 Nov 23 07:19:48 crc kubenswrapper[4681]: I1123 07:19:48.941672 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4lntr" Nov 23 07:19:48 crc kubenswrapper[4681]: I1123 07:19:48.941692 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4lntr" event={"ID":"b2bf206c-fa33-4f50-bd49-4ed73d8cc27e","Type":"ContainerDied","Data":"8095d72f00ee5a0d53c88cc2bd309d894297db4bad6864599ce1f8268e149298"} Nov 23 07:19:48 crc kubenswrapper[4681]: I1123 07:19:48.942404 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4lntr" event={"ID":"b2bf206c-fa33-4f50-bd49-4ed73d8cc27e","Type":"ContainerDied","Data":"91eee03e67e9e089fd944f9c555887073779b1887347409aa0837fee4ac9a7ef"} Nov 23 07:19:48 crc kubenswrapper[4681]: I1123 07:19:48.942434 4681 scope.go:117] "RemoveContainer" containerID="8095d72f00ee5a0d53c88cc2bd309d894297db4bad6864599ce1f8268e149298" Nov 23 07:19:48 crc kubenswrapper[4681]: I1123 07:19:48.966649 4681 scope.go:117] "RemoveContainer" containerID="a0ebe7253a59a4dd82ebdf323255537e64d097bba1e8222729ec131ceb4f3ea9" Nov 23 07:19:48 crc kubenswrapper[4681]: I1123 07:19:48.984386 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4lntr"] Nov 23 07:19:48 crc kubenswrapper[4681]: I1123 07:19:48.984469 4681 scope.go:117] "RemoveContainer" containerID="1795e06d37e8649ac2b5eeeb4b3eda489c169ee7001681a647732b677a9353e6" Nov 23 07:19:48 crc kubenswrapper[4681]: I1123 07:19:48.993538 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4lntr"] Nov 23 07:19:49 crc kubenswrapper[4681]: I1123 07:19:49.021728 4681 scope.go:117] "RemoveContainer" containerID="8095d72f00ee5a0d53c88cc2bd309d894297db4bad6864599ce1f8268e149298" Nov 23 07:19:49 crc kubenswrapper[4681]: 
Nov 23 07:19:49 crc kubenswrapper[4681]: I1123 07:19:49.022141 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8095d72f00ee5a0d53c88cc2bd309d894297db4bad6864599ce1f8268e149298"} err="failed to get container status \"8095d72f00ee5a0d53c88cc2bd309d894297db4bad6864599ce1f8268e149298\": rpc error: code = NotFound desc = could not find container \"8095d72f00ee5a0d53c88cc2bd309d894297db4bad6864599ce1f8268e149298\": container with ID starting with 8095d72f00ee5a0d53c88cc2bd309d894297db4bad6864599ce1f8268e149298 not found: ID does not exist"
Nov 23 07:19:49 crc kubenswrapper[4681]: I1123 07:19:49.022168 4681 scope.go:117] "RemoveContainer" containerID="a0ebe7253a59a4dd82ebdf323255537e64d097bba1e8222729ec131ceb4f3ea9"
Nov 23 07:19:49 crc kubenswrapper[4681]: E1123 07:19:49.022442 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0ebe7253a59a4dd82ebdf323255537e64d097bba1e8222729ec131ceb4f3ea9\": container with ID starting with a0ebe7253a59a4dd82ebdf323255537e64d097bba1e8222729ec131ceb4f3ea9 not found: ID does not exist" containerID="a0ebe7253a59a4dd82ebdf323255537e64d097bba1e8222729ec131ceb4f3ea9"
Nov 23 07:19:49 crc kubenswrapper[4681]: I1123 07:19:49.022494 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0ebe7253a59a4dd82ebdf323255537e64d097bba1e8222729ec131ceb4f3ea9"} err="failed to get container status \"a0ebe7253a59a4dd82ebdf323255537e64d097bba1e8222729ec131ceb4f3ea9\": rpc error: code = NotFound desc = could not find container \"a0ebe7253a59a4dd82ebdf323255537e64d097bba1e8222729ec131ceb4f3ea9\": container with ID starting with a0ebe7253a59a4dd82ebdf323255537e64d097bba1e8222729ec131ceb4f3ea9 not found: ID does not exist"
Nov 23 07:19:49 crc kubenswrapper[4681]: I1123 07:19:49.022516 4681 scope.go:117] "RemoveContainer" containerID="1795e06d37e8649ac2b5eeeb4b3eda489c169ee7001681a647732b677a9353e6"
Nov 23 07:19:49 crc kubenswrapper[4681]: E1123 07:19:49.022754 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1795e06d37e8649ac2b5eeeb4b3eda489c169ee7001681a647732b677a9353e6\": container with ID starting with 1795e06d37e8649ac2b5eeeb4b3eda489c169ee7001681a647732b677a9353e6 not found: ID does not exist" containerID="1795e06d37e8649ac2b5eeeb4b3eda489c169ee7001681a647732b677a9353e6"
Nov 23 07:19:49 crc kubenswrapper[4681]: I1123 07:19:49.022777 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1795e06d37e8649ac2b5eeeb4b3eda489c169ee7001681a647732b677a9353e6"} err="failed to get container status \"1795e06d37e8649ac2b5eeeb4b3eda489c169ee7001681a647732b677a9353e6\": rpc error: code = NotFound desc = could not find container \"1795e06d37e8649ac2b5eeeb4b3eda489c169ee7001681a647732b677a9353e6\": container with ID starting with 1795e06d37e8649ac2b5eeeb4b3eda489c169ee7001681a647732b677a9353e6 not found: ID does not exist"
Nov 23 07:19:49 crc kubenswrapper[4681]: I1123 07:19:49.260532 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2bf206c-fa33-4f50-bd49-4ed73d8cc27e" path="/var/lib/kubelet/pods/b2bf206c-fa33-4f50-bd49-4ed73d8cc27e/volumes"
Nov 23 07:19:49 crc kubenswrapper[4681]: I1123 07:19:49.261177 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb12f8c7-1ad9-4b96-832a-e2cc9da82987" path="/var/lib/kubelet/pods/eb12f8c7-1ad9-4b96-832a-e2cc9da82987/volumes"
Nov 23 07:20:12 crc kubenswrapper[4681]: I1123 07:20:12.295663 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 07:20:12 crc kubenswrapper[4681]: I1123 07:20:12.296148 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 07:20:42 crc kubenswrapper[4681]: I1123 07:20:42.296792 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 07:20:42 crc kubenswrapper[4681]: I1123 07:20:42.297449 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 07:21:12 crc kubenswrapper[4681]: I1123 07:21:12.296217 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 07:21:12 crc kubenswrapper[4681]: I1123 07:21:12.312628 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 07:21:12 crc kubenswrapper[4681]: I1123 07:21:12.312712 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt"
Nov 23 07:21:12 crc kubenswrapper[4681]: I1123 07:21:12.313490 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6395679e8b90303362ef082b92adb8b7a5b62d563d9f789557862ea185bce935"} pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 23 07:21:12 crc kubenswrapper[4681]: I1123 07:21:12.313556 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" containerID="cri-o://6395679e8b90303362ef082b92adb8b7a5b62d563d9f789557862ea185bce935" gracePeriod=600
Nov 23 07:21:12 crc kubenswrapper[4681]: E1123 07:21:12.463397 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3"
Nov 23 07:21:12 crc kubenswrapper[4681]: I1123 07:21:12.617237 4681 generic.go:334] "Generic (PLEG): container finished" podID="539dc58c-e752-43c8-bdef-af87528b76f3" containerID="6395679e8b90303362ef082b92adb8b7a5b62d563d9f789557862ea185bce935" exitCode=0
Nov 23 07:21:12 crc kubenswrapper[4681]: I1123 07:21:12.617308 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerDied","Data":"6395679e8b90303362ef082b92adb8b7a5b62d563d9f789557862ea185bce935"}
Nov 23 07:21:12 crc kubenswrapper[4681]: I1123 07:21:12.617369 4681 scope.go:117] "RemoveContainer" containerID="f8da7449317d4fedfd4d71fd5add670aef44436e65aa268d710d4cbf78c73d83"
Nov 23 07:21:12 crc kubenswrapper[4681]: I1123 07:21:12.618491 4681 scope.go:117] "RemoveContainer" containerID="6395679e8b90303362ef082b92adb8b7a5b62d563d9f789557862ea185bce935"
Nov 23 07:21:12 crc kubenswrapper[4681]: E1123 07:21:12.618978 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3"
Nov 23 07:21:23 crc kubenswrapper[4681]: I1123 07:21:23.258530 4681 scope.go:117] "RemoveContainer" containerID="6395679e8b90303362ef082b92adb8b7a5b62d563d9f789557862ea185bce935"
Nov 23 07:21:23 crc kubenswrapper[4681]: E1123 07:21:23.260700 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3"
Nov 23 07:21:34 crc kubenswrapper[4681]: I1123 07:21:34.252099 4681 scope.go:117] "RemoveContainer" containerID="6395679e8b90303362ef082b92adb8b7a5b62d563d9f789557862ea185bce935"
Nov 23 07:21:34 crc kubenswrapper[4681]: E1123 07:21:34.253001 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3"
Nov 23 07:21:40 crc kubenswrapper[4681]: I1123 07:21:40.837095 4681 generic.go:334] "Generic (PLEG): container finished" podID="d0b4fd7e-9fe1-4f06-b005-77ae1425078b" containerID="f0758011c263479a2bdf0d328446c543df14dc7d9c86b39a954eb27d59931df3" exitCode=0
Nov 23 07:21:40 crc kubenswrapper[4681]: I1123 07:21:40.837130 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-962xt" event={"ID":"d0b4fd7e-9fe1-4f06-b005-77ae1425078b","Type":"ContainerDied","Data":"f0758011c263479a2bdf0d328446c543df14dc7d9c86b39a954eb27d59931df3"}
Nov 23 07:21:42 crc kubenswrapper[4681]: I1123 07:21:42.153403 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-962xt"
Nov 23 07:21:42 crc kubenswrapper[4681]: I1123 07:21:42.235904 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-inventory\") pod \"d0b4fd7e-9fe1-4f06-b005-77ae1425078b\" (UID: \"d0b4fd7e-9fe1-4f06-b005-77ae1425078b\") "
Nov 23 07:21:42 crc kubenswrapper[4681]: I1123 07:21:42.235983 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-ceilometer-compute-config-data-1\") pod \"d0b4fd7e-9fe1-4f06-b005-77ae1425078b\" (UID: \"d0b4fd7e-9fe1-4f06-b005-77ae1425078b\") "
Nov 23 07:21:42 crc kubenswrapper[4681]: I1123 07:21:42.259218 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-inventory" (OuterVolumeSpecName: "inventory") pod "d0b4fd7e-9fe1-4f06-b005-77ae1425078b" (UID: "d0b4fd7e-9fe1-4f06-b005-77ae1425078b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:21:42 crc kubenswrapper[4681]: I1123 07:21:42.261422 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "d0b4fd7e-9fe1-4f06-b005-77ae1425078b" (UID: "d0b4fd7e-9fe1-4f06-b005-77ae1425078b"). InnerVolumeSpecName "ceilometer-compute-config-data-1".
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:21:42 crc kubenswrapper[4681]: I1123 07:21:42.337456 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mq6m5\" (UniqueName: \"kubernetes.io/projected/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-kube-api-access-mq6m5\") pod \"d0b4fd7e-9fe1-4f06-b005-77ae1425078b\" (UID: \"d0b4fd7e-9fe1-4f06-b005-77ae1425078b\") " Nov 23 07:21:42 crc kubenswrapper[4681]: I1123 07:21:42.337831 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-ceilometer-compute-config-data-2\") pod \"d0b4fd7e-9fe1-4f06-b005-77ae1425078b\" (UID: \"d0b4fd7e-9fe1-4f06-b005-77ae1425078b\") " Nov 23 07:21:42 crc kubenswrapper[4681]: I1123 07:21:42.337890 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-ceilometer-compute-config-data-0\") pod \"d0b4fd7e-9fe1-4f06-b005-77ae1425078b\" (UID: \"d0b4fd7e-9fe1-4f06-b005-77ae1425078b\") " Nov 23 07:21:42 crc kubenswrapper[4681]: I1123 07:21:42.337925 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-telemetry-combined-ca-bundle\") pod \"d0b4fd7e-9fe1-4f06-b005-77ae1425078b\" (UID: \"d0b4fd7e-9fe1-4f06-b005-77ae1425078b\") " Nov 23 07:21:42 crc kubenswrapper[4681]: I1123 07:21:42.338129 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-ssh-key\") pod \"d0b4fd7e-9fe1-4f06-b005-77ae1425078b\" (UID: \"d0b4fd7e-9fe1-4f06-b005-77ae1425078b\") " Nov 23 07:21:42 crc kubenswrapper[4681]: I1123 07:21:42.339130 4681 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 07:21:42 crc kubenswrapper[4681]: I1123 07:21:42.339149 4681 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 23 07:21:42 crc kubenswrapper[4681]: I1123 07:21:42.341839 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "d0b4fd7e-9fe1-4f06-b005-77ae1425078b" (UID: "d0b4fd7e-9fe1-4f06-b005-77ae1425078b"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:21:42 crc kubenswrapper[4681]: I1123 07:21:42.342005 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-kube-api-access-mq6m5" (OuterVolumeSpecName: "kube-api-access-mq6m5") pod "d0b4fd7e-9fe1-4f06-b005-77ae1425078b" (UID: "d0b4fd7e-9fe1-4f06-b005-77ae1425078b"). InnerVolumeSpecName "kube-api-access-mq6m5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:21:42 crc kubenswrapper[4681]: I1123 07:21:42.359113 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d0b4fd7e-9fe1-4f06-b005-77ae1425078b" (UID: "d0b4fd7e-9fe1-4f06-b005-77ae1425078b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:21:42 crc kubenswrapper[4681]: I1123 07:21:42.359325 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "d0b4fd7e-9fe1-4f06-b005-77ae1425078b" (UID: "d0b4fd7e-9fe1-4f06-b005-77ae1425078b"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:21:42 crc kubenswrapper[4681]: I1123 07:21:42.360276 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "d0b4fd7e-9fe1-4f06-b005-77ae1425078b" (UID: "d0b4fd7e-9fe1-4f06-b005-77ae1425078b"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:21:42 crc kubenswrapper[4681]: I1123 07:21:42.441158 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mq6m5\" (UniqueName: \"kubernetes.io/projected/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-kube-api-access-mq6m5\") on node \"crc\" DevicePath \"\"" Nov 23 07:21:42 crc kubenswrapper[4681]: I1123 07:21:42.441193 4681 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 23 07:21:42 crc kubenswrapper[4681]: I1123 07:21:42.441204 4681 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 23 07:21:42 crc kubenswrapper[4681]: I1123 07:21:42.441216 4681 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:21:42 crc kubenswrapper[4681]: I1123 07:21:42.441229 4681 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d0b4fd7e-9fe1-4f06-b005-77ae1425078b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 07:21:42 crc kubenswrapper[4681]: I1123 07:21:42.856991 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-962xt" event={"ID":"d0b4fd7e-9fe1-4f06-b005-77ae1425078b","Type":"ContainerDied","Data":"743eba15df11380070ba9c94f1b7bafbcf1b6fc50dafad26082b6c3eb4b74486"} Nov 23 07:21:42 crc kubenswrapper[4681]: I1123 07:21:42.857042 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="743eba15df11380070ba9c94f1b7bafbcf1b6fc50dafad26082b6c3eb4b74486" Nov 23 07:21:42 crc kubenswrapper[4681]: I1123 07:21:42.857048 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-962xt" Nov 23 07:21:45 crc kubenswrapper[4681]: I1123 07:21:45.251587 4681 scope.go:117] "RemoveContainer" containerID="6395679e8b90303362ef082b92adb8b7a5b62d563d9f789557862ea185bce935" Nov 23 07:21:45 crc kubenswrapper[4681]: E1123 07:21:45.252302 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:21:57 crc kubenswrapper[4681]: I1123 07:21:57.251876 4681 scope.go:117] "RemoveContainer" containerID="6395679e8b90303362ef082b92adb8b7a5b62d563d9f789557862ea185bce935" Nov 23 07:21:57 crc kubenswrapper[4681]: E1123 07:21:57.267349 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:22:10 crc kubenswrapper[4681]: I1123 07:22:10.252813 4681 scope.go:117] "RemoveContainer" containerID="6395679e8b90303362ef082b92adb8b7a5b62d563d9f789557862ea185bce935" Nov 23 07:22:10 crc kubenswrapper[4681]: E1123 07:22:10.254709 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.090429 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"] Nov 23 07:22:17 crc kubenswrapper[4681]: E1123 07:22:17.091608 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2bf206c-fa33-4f50-bd49-4ed73d8cc27e" containerName="registry-server" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.091626 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2bf206c-fa33-4f50-bd49-4ed73d8cc27e" containerName="registry-server" Nov 23 07:22:17 crc kubenswrapper[4681]: E1123 07:22:17.091658 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb12f8c7-1ad9-4b96-832a-e2cc9da82987" containerName="extract-utilities" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.091666 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb12f8c7-1ad9-4b96-832a-e2cc9da82987" containerName="extract-utilities" Nov 23 07:22:17 crc kubenswrapper[4681]: E1123 07:22:17.091692 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2bf206c-fa33-4f50-bd49-4ed73d8cc27e" containerName="extract-content" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.091698 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2bf206c-fa33-4f50-bd49-4ed73d8cc27e" containerName="extract-content" Nov 23 07:22:17 crc kubenswrapper[4681]: E1123 07:22:17.091705 4681 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2bf206c-fa33-4f50-bd49-4ed73d8cc27e" containerName="extract-utilities" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.091712 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2bf206c-fa33-4f50-bd49-4ed73d8cc27e" containerName="extract-utilities" Nov 23 07:22:17 crc kubenswrapper[4681]: E1123 07:22:17.091721 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb12f8c7-1ad9-4b96-832a-e2cc9da82987" containerName="registry-server" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.091728 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb12f8c7-1ad9-4b96-832a-e2cc9da82987" containerName="registry-server" Nov 23 07:22:17 crc kubenswrapper[4681]: E1123 07:22:17.091736 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0b4fd7e-9fe1-4f06-b005-77ae1425078b" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.091745 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0b4fd7e-9fe1-4f06-b005-77ae1425078b" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 23 07:22:17 crc kubenswrapper[4681]: E1123 07:22:17.091766 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb12f8c7-1ad9-4b96-832a-e2cc9da82987" containerName="extract-content" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.091772 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb12f8c7-1ad9-4b96-832a-e2cc9da82987" containerName="extract-content" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.092072 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0b4fd7e-9fe1-4f06-b005-77ae1425078b" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.092091 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb12f8c7-1ad9-4b96-832a-e2cc9da82987" containerName="registry-server" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.092112 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2bf206c-fa33-4f50-bd49-4ed73d8cc27e" containerName="registry-server" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.093042 4681 util.go:30] "No sandbox for pod can be found. 
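Before admitting the tempest pod, the CPU and memory managers drop per-container state belonging to pods that no longer exist: the two marketplace registry pods whose volumes were reclaimed at 07:19:49 and the telemetry job that just finished. A short sketch listing what gets purged (pattern copied from the cpu_manager entries above):

```python
import re
import sys

# "RemoveStaleState: removing container" wording as logged above.
STALE = re.compile(r'"RemoveStaleState: removing container" '
                   r'podUID="([^"]+)" containerName="([^"]+)"')

if __name__ == "__main__":
    for pod_uid, container in STALE.findall(sys.stdin.read()):
        print(f"purged stale CPU state: pod {pod_uid[:8]}… container {container}")
```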
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.096167 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.096809 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.096931 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.097118 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-9sd4j" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.107249 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"] Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.175414 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5c171cbf-074c-4685-88ae-5e1ad59e5423-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"5c171cbf-074c-4685-88ae-5e1ad59e5423\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.175493 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5c171cbf-074c-4685-88ae-5e1ad59e5423-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"5c171cbf-074c-4685-88ae-5e1ad59e5423\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.175587 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5c171cbf-074c-4685-88ae-5e1ad59e5423-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"5c171cbf-074c-4685-88ae-5e1ad59e5423\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.277785 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5c171cbf-074c-4685-88ae-5e1ad59e5423-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"5c171cbf-074c-4685-88ae-5e1ad59e5423\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.277843 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"5c171cbf-074c-4685-88ae-5e1ad59e5423\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.277944 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5c171cbf-074c-4685-88ae-5e1ad59e5423-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: 
\"5c171cbf-074c-4685-88ae-5e1ad59e5423\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.278076 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5c171cbf-074c-4685-88ae-5e1ad59e5423-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"5c171cbf-074c-4685-88ae-5e1ad59e5423\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.278165 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zppq\" (UniqueName: \"kubernetes.io/projected/5c171cbf-074c-4685-88ae-5e1ad59e5423-kube-api-access-6zppq\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"5c171cbf-074c-4685-88ae-5e1ad59e5423\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.278229 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5c171cbf-074c-4685-88ae-5e1ad59e5423-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"5c171cbf-074c-4685-88ae-5e1ad59e5423\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.278262 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5c171cbf-074c-4685-88ae-5e1ad59e5423-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"5c171cbf-074c-4685-88ae-5e1ad59e5423\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.278501 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5c171cbf-074c-4685-88ae-5e1ad59e5423-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"5c171cbf-074c-4685-88ae-5e1ad59e5423\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.278645 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5c171cbf-074c-4685-88ae-5e1ad59e5423-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"5c171cbf-074c-4685-88ae-5e1ad59e5423\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.279202 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5c171cbf-074c-4685-88ae-5e1ad59e5423-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"5c171cbf-074c-4685-88ae-5e1ad59e5423\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.280316 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5c171cbf-074c-4685-88ae-5e1ad59e5423-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"5c171cbf-074c-4685-88ae-5e1ad59e5423\") " 
pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.285524 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5c171cbf-074c-4685-88ae-5e1ad59e5423-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"5c171cbf-074c-4685-88ae-5e1ad59e5423\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.381078 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5c171cbf-074c-4685-88ae-5e1ad59e5423-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"5c171cbf-074c-4685-88ae-5e1ad59e5423\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.381218 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"5c171cbf-074c-4685-88ae-5e1ad59e5423\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.381318 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5c171cbf-074c-4685-88ae-5e1ad59e5423-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"5c171cbf-074c-4685-88ae-5e1ad59e5423\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.381408 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5c171cbf-074c-4685-88ae-5e1ad59e5423-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"5c171cbf-074c-4685-88ae-5e1ad59e5423\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.381548 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zppq\" (UniqueName: \"kubernetes.io/projected/5c171cbf-074c-4685-88ae-5e1ad59e5423-kube-api-access-6zppq\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"5c171cbf-074c-4685-88ae-5e1ad59e5423\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.381600 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5c171cbf-074c-4685-88ae-5e1ad59e5423-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"5c171cbf-074c-4685-88ae-5e1ad59e5423\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.381959 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5c171cbf-074c-4685-88ae-5e1ad59e5423-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"5c171cbf-074c-4685-88ae-5e1ad59e5423\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Nov 23 07:22:17 crc 
kubenswrapper[4681]: I1123 07:22:17.382503 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5c171cbf-074c-4685-88ae-5e1ad59e5423-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"5c171cbf-074c-4685-88ae-5e1ad59e5423\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.383592 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"5c171cbf-074c-4685-88ae-5e1ad59e5423\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.385488 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5c171cbf-074c-4685-88ae-5e1ad59e5423-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"5c171cbf-074c-4685-88ae-5e1ad59e5423\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.389280 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5c171cbf-074c-4685-88ae-5e1ad59e5423-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"5c171cbf-074c-4685-88ae-5e1ad59e5423\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.399422 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zppq\" (UniqueName: \"kubernetes.io/projected/5c171cbf-074c-4685-88ae-5e1ad59e5423-kube-api-access-6zppq\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"5c171cbf-074c-4685-88ae-5e1ad59e5423\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.411627 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"5c171cbf-074c-4685-88ae-5e1ad59e5423\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.417640 4681 util.go:30] "No sandbox for pod can be found. 
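On the mount side each of the tempest pod's volumes passes through "VerifyControllerAttachedVolume started" and then "MountVolume.SetUp succeeded", and the local PV local-storage09-crc additionally logs "MountVolume.MountDevice succeeded" with its device mount path /mnt/openstack/pv09 before SetUp (that staging-then-map split is general kubelet behavior for filesystem-mode local volumes, not something these entries spell out). A sketch that lists the per-volume phases in log order:

```python
import re
import sys

# (timestamp, phase, volume) triples for the mount sequence above. Assumes
# one journal entry per line (as journalctl emits it); the lazy .*? would
# misattribute timestamps if several entries shared a line.
PHASE = re.compile(
    r'I(\d{4} \d{2}:\d{2}:\d{2}\.\d+).*?'
    r'(VerifyControllerAttachedVolume started'
    r'|MountVolume\.MountDevice succeeded'
    r'|MountVolume\.SetUp succeeded)'
    r' for volume \\"([^\\]+)\\"'
)

if __name__ == "__main__":
    for stamp, phase, vol in PHASE.findall(sys.stdin.read()):
        print(f"{stamp}  {vol:35s}  {phase}")
```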
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Nov 23 07:22:17 crc kubenswrapper[4681]: I1123 07:22:17.924184 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"] Nov 23 07:22:17 crc kubenswrapper[4681]: W1123 07:22:17.929687 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c171cbf_074c_4685_88ae_5e1ad59e5423.slice/crio-d9937940ba4752d8dacb67a9faeb1a75ce24397f0724a1d82839bd353f988ea6 WatchSource:0}: Error finding container d9937940ba4752d8dacb67a9faeb1a75ce24397f0724a1d82839bd353f988ea6: Status 404 returned error can't find the container with id d9937940ba4752d8dacb67a9faeb1a75ce24397f0724a1d82839bd353f988ea6 Nov 23 07:22:18 crc kubenswrapper[4681]: I1123 07:22:18.177146 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"5c171cbf-074c-4685-88ae-5e1ad59e5423","Type":"ContainerStarted","Data":"d9937940ba4752d8dacb67a9faeb1a75ce24397f0724a1d82839bd353f988ea6"} Nov 23 07:22:21 crc kubenswrapper[4681]: I1123 07:22:21.252309 4681 scope.go:117] "RemoveContainer" containerID="6395679e8b90303362ef082b92adb8b7a5b62d563d9f789557862ea185bce935" Nov 23 07:22:21 crc kubenswrapper[4681]: E1123 07:22:21.252901 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:22:33 crc kubenswrapper[4681]: I1123 07:22:33.257190 4681 scope.go:117] "RemoveContainer" containerID="6395679e8b90303362ef082b92adb8b7a5b62d563d9f789557862ea185bce935" Nov 23 07:22:33 crc kubenswrapper[4681]: E1123 07:22:33.258339 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:22:47 crc kubenswrapper[4681]: I1123 07:22:47.256842 4681 scope.go:117] "RemoveContainer" containerID="6395679e8b90303362ef082b92adb8b7a5b62d563d9f789557862ea185bce935" Nov 23 07:22:47 crc kubenswrapper[4681]: E1123 07:22:47.257612 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:23:00 crc kubenswrapper[4681]: I1123 07:23:00.252217 4681 scope.go:117] "RemoveContainer" containerID="6395679e8b90303362ef082b92adb8b7a5b62d563d9f789557862ea185bce935" Nov 23 07:23:00 crc kubenswrapper[4681]: E1123 07:23:00.253163 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:23:01 crc kubenswrapper[4681]: E1123 07:23:01.359019 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:8e43c662a6abf8c9a07ada252f8dc6af" Nov 23 07:23:01 crc kubenswrapper[4681]: E1123 07:23:01.359457 4681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:8e43c662a6abf8c9a07ada252f8dc6af" Nov 23 07:23:01 crc kubenswrapper[4681]: E1123 07:23:01.361584 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:8e43c662a6abf8c9a07ada252f8dc6af,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6zppq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFr
omSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest-s00-multi-thread-testing_openstack(5c171cbf-074c-4685-88ae-5e1ad59e5423): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 07:23:01 crc kubenswrapper[4681]: E1123 07:23:01.362796 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podUID="5c171cbf-074c-4685-88ae-5e1ad59e5423" Nov 23 07:23:01 crc kubenswrapper[4681]: E1123 07:23:01.707110 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:8e43c662a6abf8c9a07ada252f8dc6af\\\"\"" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podUID="5c171cbf-074c-4685-88ae-5e1ad59e5423" Nov 23 07:23:14 crc kubenswrapper[4681]: I1123 07:23:14.252628 4681 scope.go:117] "RemoveContainer" containerID="6395679e8b90303362ef082b92adb8b7a5b62d563d9f789557862ea185bce935" Nov 23 07:23:14 crc kubenswrapper[4681]: E1123 07:23:14.253579 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:23:14 crc kubenswrapper[4681]: I1123 07:23:14.904727 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 23 07:23:16 crc kubenswrapper[4681]: I1123 07:23:16.866403 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"5c171cbf-074c-4685-88ae-5e1ad59e5423","Type":"ContainerStarted","Data":"2dd2856f735b095a4436ddbfe83075a242f7ee3280a742622f17add988725659"} Nov 23 07:23:16 crc kubenswrapper[4681]: I1123 07:23:16.886797 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podStartSLOduration=3.917535286 podStartE2EDuration="1m0.88677991s" podCreationTimestamp="2025-11-23 07:22:16 +0000 UTC" firstStartedPulling="2025-11-23 07:22:17.932031123 +0000 UTC m=+2275.001540361" lastFinishedPulling="2025-11-23 07:23:14.901275759 +0000 UTC m=+2331.970784985" observedRunningTime="2025-11-23 07:23:16.885357118 +0000 UTC m=+2333.954866356" watchObservedRunningTime="2025-11-23 07:23:16.88677991 +0000 UTC m=+2333.956289147" Nov 23 07:23:29 crc kubenswrapper[4681]: I1123 07:23:29.255561 4681 scope.go:117] "RemoveContainer" 
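The tempest image pull fails once with "context canceled" (ErrImagePull), decays into ImagePullBackOff, and succeeds on retry about a minute later; the pod_startup_latency_tracker entry then reports podStartE2EDuration=1m0.88s against a podStartSLOduration of about 3.9s, with firstStartedPulling and lastFinishedPulling bracketing the roughly 57 seconds spent pulling. A sketch that extracts those two fields and computes the pull window (field names are copied from the entry above; the timestamp surgery is an assumption about Go's time formatting):

```python
import re
import sys
from datetime import datetime

# firstStartedPulling / lastFinishedPulling as printed above, e.g.
# "2025-11-23 07:22:17.932031123 +0000 UTC m=+2275.001540361".
FIELD = re.compile(r'(firstStartedPulling|lastFinishedPulling)="([^"]+)"')

def parse_kube_time(raw: str) -> datetime:
    # Drop the monotonic clock suffix and zone token, then trim the
    # nanosecond fraction to microseconds for strptime.
    stamp = raw.split(" m=")[0].replace(" +0000 UTC", "")
    date, clock = stamp.split(" ")
    head, _, frac = clock.partition(".")
    return datetime.strptime(f"{date} {head}.{frac[:6] or '0'}",
                             "%Y-%m-%d %H:%M:%S.%f")

if __name__ == "__main__":
    # dict() keeps the last occurrence, i.e. the most recent tracker entry;
    # slice the journal per pod before calling this for anything serious.
    fields = dict(FIELD.findall(sys.stdin.read()))
    if {"firstStartedPulling", "lastFinishedPulling"} <= fields.keys():
        delta = (parse_kube_time(fields["lastFinishedPulling"])
                 - parse_kube_time(fields["firstStartedPulling"]))
        print(f"image pull window: {delta.total_seconds():.1f}s")
```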
containerID="6395679e8b90303362ef082b92adb8b7a5b62d563d9f789557862ea185bce935" Nov 23 07:23:29 crc kubenswrapper[4681]: E1123 07:23:29.256775 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:23:44 crc kubenswrapper[4681]: I1123 07:23:44.252799 4681 scope.go:117] "RemoveContainer" containerID="6395679e8b90303362ef082b92adb8b7a5b62d563d9f789557862ea185bce935" Nov 23 07:23:44 crc kubenswrapper[4681]: E1123 07:23:44.253855 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:23:58 crc kubenswrapper[4681]: I1123 07:23:58.253296 4681 scope.go:117] "RemoveContainer" containerID="6395679e8b90303362ef082b92adb8b7a5b62d563d9f789557862ea185bce935" Nov 23 07:23:58 crc kubenswrapper[4681]: E1123 07:23:58.253971 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:24:09 crc kubenswrapper[4681]: I1123 07:24:09.251957 4681 scope.go:117] "RemoveContainer" containerID="6395679e8b90303362ef082b92adb8b7a5b62d563d9f789557862ea185bce935" Nov 23 07:24:09 crc kubenswrapper[4681]: E1123 07:24:09.252984 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:24:23 crc kubenswrapper[4681]: I1123 07:24:23.262243 4681 scope.go:117] "RemoveContainer" containerID="6395679e8b90303362ef082b92adb8b7a5b62d563d9f789557862ea185bce935" Nov 23 07:24:23 crc kubenswrapper[4681]: E1123 07:24:23.263034 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:24:36 crc kubenswrapper[4681]: I1123 07:24:36.253043 4681 scope.go:117] "RemoveContainer" containerID="6395679e8b90303362ef082b92adb8b7a5b62d563d9f789557862ea185bce935" Nov 23 07:24:36 crc kubenswrapper[4681]: E1123 07:24:36.253901 4681 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:24:48 crc kubenswrapper[4681]: I1123 07:24:48.251826 4681 scope.go:117] "RemoveContainer" containerID="6395679e8b90303362ef082b92adb8b7a5b62d563d9f789557862ea185bce935" Nov 23 07:24:48 crc kubenswrapper[4681]: E1123 07:24:48.252798 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:25:00 crc kubenswrapper[4681]: I1123 07:25:00.252921 4681 scope.go:117] "RemoveContainer" containerID="6395679e8b90303362ef082b92adb8b7a5b62d563d9f789557862ea185bce935" Nov 23 07:25:00 crc kubenswrapper[4681]: E1123 07:25:00.254165 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:25:14 crc kubenswrapper[4681]: I1123 07:25:14.252318 4681 scope.go:117] "RemoveContainer" containerID="6395679e8b90303362ef082b92adb8b7a5b62d563d9f789557862ea185bce935" Nov 23 07:25:14 crc kubenswrapper[4681]: E1123 07:25:14.253363 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:25:28 crc kubenswrapper[4681]: I1123 07:25:28.252817 4681 scope.go:117] "RemoveContainer" containerID="6395679e8b90303362ef082b92adb8b7a5b62d563d9f789557862ea185bce935" Nov 23 07:25:28 crc kubenswrapper[4681]: E1123 07:25:28.253585 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:25:40 crc kubenswrapper[4681]: I1123 07:25:40.252332 4681 scope.go:117] "RemoveContainer" containerID="6395679e8b90303362ef082b92adb8b7a5b62d563d9f789557862ea185bce935" Nov 23 07:25:40 crc kubenswrapper[4681]: E1123 07:25:40.253374 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:25:52 crc kubenswrapper[4681]: I1123 07:25:52.251649 4681 scope.go:117] "RemoveContainer" containerID="6395679e8b90303362ef082b92adb8b7a5b62d563d9f789557862ea185bce935" Nov 23 07:25:52 crc kubenswrapper[4681]: E1123 07:25:52.252341 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:26:05 crc kubenswrapper[4681]: I1123 07:26:05.252583 4681 scope.go:117] "RemoveContainer" containerID="6395679e8b90303362ef082b92adb8b7a5b62d563d9f789557862ea185bce935" Nov 23 07:26:05 crc kubenswrapper[4681]: E1123 07:26:05.253274 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:26:18 crc kubenswrapper[4681]: I1123 07:26:18.252136 4681 scope.go:117] "RemoveContainer" containerID="6395679e8b90303362ef082b92adb8b7a5b62d563d9f789557862ea185bce935" Nov 23 07:26:18 crc kubenswrapper[4681]: I1123 07:26:18.578951 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerStarted","Data":"e305db71846595ffb5ef89ffce233280d9d731be32838c5f52ee935532128d59"} Nov 23 07:28:19 crc kubenswrapper[4681]: I1123 07:28:19.148638 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wjt8m"] Nov 23 07:28:19 crc kubenswrapper[4681]: I1123 07:28:19.155727 4681 util.go:30] "No sandbox for pod can be found. 
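Every "RemoveContainer" followed by "Error syncing pod, skipping ... CrashLoopBackOff" from 07:21:23 through 07:26:05 is a pod-worker sync bouncing off the restart backoff rather than a fresh failure, and the daemon container finally starts again at 07:26:18, a little over five minutes after the 07:21:12 kill. The kubelet's restart backoff doubles from a 10s initial delay up to a 300s cap; those two constants are the usual defaults, asserted here from general kubelet behavior rather than from this log. A worked sketch of the schedule:

```python
# CrashLoopBackOff delay schedule under the assumed defaults: 10s initial
# delay, doubling per crash, capped at 300s (hence the "back-off 5m0s" text).
INITIAL_S, CAP_S = 10, 300

def backoff_schedule(crashes: int) -> list:
    """Seconds waited before each restart attempt, for the given crash count."""
    return [min(INITIAL_S * 2 ** n, CAP_S) for n in range(crashes)]

if __name__ == "__main__":
    # [10, 20, 40, 80, 160, 300, 300, 300]: from the sixth crash on, every
    # further restart waits the full five minutes, matching the gap above.
    print(backoff_schedule(8))
```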
Need to start a new one" pod="openshift-marketplace/certified-operators-wjt8m" Nov 23 07:28:19 crc kubenswrapper[4681]: I1123 07:28:19.168224 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wjt8m"] Nov 23 07:28:19 crc kubenswrapper[4681]: I1123 07:28:19.348084 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v55ts\" (UniqueName: \"kubernetes.io/projected/f3b9007a-6595-43ae-be1a-8ae7f9d46ef5-kube-api-access-v55ts\") pod \"certified-operators-wjt8m\" (UID: \"f3b9007a-6595-43ae-be1a-8ae7f9d46ef5\") " pod="openshift-marketplace/certified-operators-wjt8m" Nov 23 07:28:19 crc kubenswrapper[4681]: I1123 07:28:19.348164 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3b9007a-6595-43ae-be1a-8ae7f9d46ef5-catalog-content\") pod \"certified-operators-wjt8m\" (UID: \"f3b9007a-6595-43ae-be1a-8ae7f9d46ef5\") " pod="openshift-marketplace/certified-operators-wjt8m" Nov 23 07:28:19 crc kubenswrapper[4681]: I1123 07:28:19.348527 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3b9007a-6595-43ae-be1a-8ae7f9d46ef5-utilities\") pod \"certified-operators-wjt8m\" (UID: \"f3b9007a-6595-43ae-be1a-8ae7f9d46ef5\") " pod="openshift-marketplace/certified-operators-wjt8m" Nov 23 07:28:19 crc kubenswrapper[4681]: I1123 07:28:19.450660 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v55ts\" (UniqueName: \"kubernetes.io/projected/f3b9007a-6595-43ae-be1a-8ae7f9d46ef5-kube-api-access-v55ts\") pod \"certified-operators-wjt8m\" (UID: \"f3b9007a-6595-43ae-be1a-8ae7f9d46ef5\") " pod="openshift-marketplace/certified-operators-wjt8m" Nov 23 07:28:19 crc kubenswrapper[4681]: I1123 07:28:19.450708 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3b9007a-6595-43ae-be1a-8ae7f9d46ef5-catalog-content\") pod \"certified-operators-wjt8m\" (UID: \"f3b9007a-6595-43ae-be1a-8ae7f9d46ef5\") " pod="openshift-marketplace/certified-operators-wjt8m" Nov 23 07:28:19 crc kubenswrapper[4681]: I1123 07:28:19.450809 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3b9007a-6595-43ae-be1a-8ae7f9d46ef5-utilities\") pod \"certified-operators-wjt8m\" (UID: \"f3b9007a-6595-43ae-be1a-8ae7f9d46ef5\") " pod="openshift-marketplace/certified-operators-wjt8m" Nov 23 07:28:19 crc kubenswrapper[4681]: I1123 07:28:19.453691 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3b9007a-6595-43ae-be1a-8ae7f9d46ef5-utilities\") pod \"certified-operators-wjt8m\" (UID: \"f3b9007a-6595-43ae-be1a-8ae7f9d46ef5\") " pod="openshift-marketplace/certified-operators-wjt8m" Nov 23 07:28:19 crc kubenswrapper[4681]: I1123 07:28:19.453999 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3b9007a-6595-43ae-be1a-8ae7f9d46ef5-catalog-content\") pod \"certified-operators-wjt8m\" (UID: \"f3b9007a-6595-43ae-be1a-8ae7f9d46ef5\") " pod="openshift-marketplace/certified-operators-wjt8m" Nov 23 07:28:19 crc kubenswrapper[4681]: I1123 07:28:19.474119 4681 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-v55ts\" (UniqueName: \"kubernetes.io/projected/f3b9007a-6595-43ae-be1a-8ae7f9d46ef5-kube-api-access-v55ts\") pod \"certified-operators-wjt8m\" (UID: \"f3b9007a-6595-43ae-be1a-8ae7f9d46ef5\") " pod="openshift-marketplace/certified-operators-wjt8m" Nov 23 07:28:19 crc kubenswrapper[4681]: I1123 07:28:19.476263 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wjt8m" Nov 23 07:28:20 crc kubenswrapper[4681]: I1123 07:28:20.237891 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wjt8m"] Nov 23 07:28:20 crc kubenswrapper[4681]: W1123 07:28:20.254000 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3b9007a_6595_43ae_be1a_8ae7f9d46ef5.slice/crio-0ed995c2306152e455de1b0c27dda7446647de86a3ae0b9fe1d18154468b8736 WatchSource:0}: Error finding container 0ed995c2306152e455de1b0c27dda7446647de86a3ae0b9fe1d18154468b8736: Status 404 returned error can't find the container with id 0ed995c2306152e455de1b0c27dda7446647de86a3ae0b9fe1d18154468b8736 Nov 23 07:28:20 crc kubenswrapper[4681]: I1123 07:28:20.596057 4681 generic.go:334] "Generic (PLEG): container finished" podID="f3b9007a-6595-43ae-be1a-8ae7f9d46ef5" containerID="cdbf775adea2f7e06b6a3ee5a04ccacde3ff759ee5b1b4aae536ea2a3410bcf2" exitCode=0 Nov 23 07:28:20 crc kubenswrapper[4681]: I1123 07:28:20.596193 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wjt8m" event={"ID":"f3b9007a-6595-43ae-be1a-8ae7f9d46ef5","Type":"ContainerDied","Data":"cdbf775adea2f7e06b6a3ee5a04ccacde3ff759ee5b1b4aae536ea2a3410bcf2"} Nov 23 07:28:20 crc kubenswrapper[4681]: I1123 07:28:20.596283 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wjt8m" event={"ID":"f3b9007a-6595-43ae-be1a-8ae7f9d46ef5","Type":"ContainerStarted","Data":"0ed995c2306152e455de1b0c27dda7446647de86a3ae0b9fe1d18154468b8736"} Nov 23 07:28:20 crc kubenswrapper[4681]: I1123 07:28:20.600135 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 07:28:21 crc kubenswrapper[4681]: I1123 07:28:21.606677 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wjt8m" event={"ID":"f3b9007a-6595-43ae-be1a-8ae7f9d46ef5","Type":"ContainerStarted","Data":"c49c4013f215e25e6e6f5fcaf5369e91b9bd7f032cc2ca2691649db530539b7a"} Nov 23 07:28:22 crc kubenswrapper[4681]: I1123 07:28:22.617877 4681 generic.go:334] "Generic (PLEG): container finished" podID="f3b9007a-6595-43ae-be1a-8ae7f9d46ef5" containerID="c49c4013f215e25e6e6f5fcaf5369e91b9bd7f032cc2ca2691649db530539b7a" exitCode=0 Nov 23 07:28:22 crc kubenswrapper[4681]: I1123 07:28:22.617979 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wjt8m" event={"ID":"f3b9007a-6595-43ae-be1a-8ae7f9d46ef5","Type":"ContainerDied","Data":"c49c4013f215e25e6e6f5fcaf5369e91b9bd7f032cc2ca2691649db530539b7a"} Nov 23 07:28:23 crc kubenswrapper[4681]: I1123 07:28:23.630335 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wjt8m" event={"ID":"f3b9007a-6595-43ae-be1a-8ae7f9d46ef5","Type":"ContainerStarted","Data":"6c562e7a4c32b8495d768c09465f8e93a7850fe54ed2d7c98050cf3922205787"} Nov 23 07:28:23 crc kubenswrapper[4681]: I1123 
07:28:23.655401 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wjt8m" podStartSLOduration=2.1653041379999998 podStartE2EDuration="4.654534566s" podCreationTimestamp="2025-11-23 07:28:19 +0000 UTC" firstStartedPulling="2025-11-23 07:28:20.597769828 +0000 UTC m=+2637.667279066" lastFinishedPulling="2025-11-23 07:28:23.087000257 +0000 UTC m=+2640.156509494" observedRunningTime="2025-11-23 07:28:23.65037291 +0000 UTC m=+2640.719882147" watchObservedRunningTime="2025-11-23 07:28:23.654534566 +0000 UTC m=+2640.724043803" Nov 23 07:28:29 crc kubenswrapper[4681]: I1123 07:28:29.476978 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wjt8m" Nov 23 07:28:29 crc kubenswrapper[4681]: I1123 07:28:29.477604 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wjt8m" Nov 23 07:28:29 crc kubenswrapper[4681]: I1123 07:28:29.531414 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wjt8m" Nov 23 07:28:29 crc kubenswrapper[4681]: I1123 07:28:29.722149 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wjt8m" Nov 23 07:28:29 crc kubenswrapper[4681]: I1123 07:28:29.770798 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wjt8m"] Nov 23 07:28:31 crc kubenswrapper[4681]: I1123 07:28:31.697951 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wjt8m" podUID="f3b9007a-6595-43ae-be1a-8ae7f9d46ef5" containerName="registry-server" containerID="cri-o://6c562e7a4c32b8495d768c09465f8e93a7850fe54ed2d7c98050cf3922205787" gracePeriod=2 Nov 23 07:28:32 crc kubenswrapper[4681]: I1123 07:28:32.304030 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wjt8m" Nov 23 07:28:32 crc kubenswrapper[4681]: I1123 07:28:32.396739 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v55ts\" (UniqueName: \"kubernetes.io/projected/f3b9007a-6595-43ae-be1a-8ae7f9d46ef5-kube-api-access-v55ts\") pod \"f3b9007a-6595-43ae-be1a-8ae7f9d46ef5\" (UID: \"f3b9007a-6595-43ae-be1a-8ae7f9d46ef5\") " Nov 23 07:28:32 crc kubenswrapper[4681]: I1123 07:28:32.397106 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3b9007a-6595-43ae-be1a-8ae7f9d46ef5-utilities\") pod \"f3b9007a-6595-43ae-be1a-8ae7f9d46ef5\" (UID: \"f3b9007a-6595-43ae-be1a-8ae7f9d46ef5\") " Nov 23 07:28:32 crc kubenswrapper[4681]: I1123 07:28:32.397301 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3b9007a-6595-43ae-be1a-8ae7f9d46ef5-catalog-content\") pod \"f3b9007a-6595-43ae-be1a-8ae7f9d46ef5\" (UID: \"f3b9007a-6595-43ae-be1a-8ae7f9d46ef5\") " Nov 23 07:28:32 crc kubenswrapper[4681]: I1123 07:28:32.400766 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3b9007a-6595-43ae-be1a-8ae7f9d46ef5-utilities" (OuterVolumeSpecName: "utilities") pod "f3b9007a-6595-43ae-be1a-8ae7f9d46ef5" (UID: "f3b9007a-6595-43ae-be1a-8ae7f9d46ef5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:28:32 crc kubenswrapper[4681]: I1123 07:28:32.417656 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3b9007a-6595-43ae-be1a-8ae7f9d46ef5-kube-api-access-v55ts" (OuterVolumeSpecName: "kube-api-access-v55ts") pod "f3b9007a-6595-43ae-be1a-8ae7f9d46ef5" (UID: "f3b9007a-6595-43ae-be1a-8ae7f9d46ef5"). InnerVolumeSpecName "kube-api-access-v55ts". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:28:32 crc kubenswrapper[4681]: I1123 07:28:32.444743 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3b9007a-6595-43ae-be1a-8ae7f9d46ef5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f3b9007a-6595-43ae-be1a-8ae7f9d46ef5" (UID: "f3b9007a-6595-43ae-be1a-8ae7f9d46ef5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:28:32 crc kubenswrapper[4681]: I1123 07:28:32.503859 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v55ts\" (UniqueName: \"kubernetes.io/projected/f3b9007a-6595-43ae-be1a-8ae7f9d46ef5-kube-api-access-v55ts\") on node \"crc\" DevicePath \"\"" Nov 23 07:28:32 crc kubenswrapper[4681]: I1123 07:28:32.503914 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3b9007a-6595-43ae-be1a-8ae7f9d46ef5-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:28:32 crc kubenswrapper[4681]: I1123 07:28:32.503928 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3b9007a-6595-43ae-be1a-8ae7f9d46ef5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:28:32 crc kubenswrapper[4681]: I1123 07:28:32.708629 4681 generic.go:334] "Generic (PLEG): container finished" podID="f3b9007a-6595-43ae-be1a-8ae7f9d46ef5" containerID="6c562e7a4c32b8495d768c09465f8e93a7850fe54ed2d7c98050cf3922205787" exitCode=0 Nov 23 07:28:32 crc kubenswrapper[4681]: I1123 07:28:32.708692 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wjt8m" event={"ID":"f3b9007a-6595-43ae-be1a-8ae7f9d46ef5","Type":"ContainerDied","Data":"6c562e7a4c32b8495d768c09465f8e93a7850fe54ed2d7c98050cf3922205787"} Nov 23 07:28:32 crc kubenswrapper[4681]: I1123 07:28:32.708722 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wjt8m" event={"ID":"f3b9007a-6595-43ae-be1a-8ae7f9d46ef5","Type":"ContainerDied","Data":"0ed995c2306152e455de1b0c27dda7446647de86a3ae0b9fe1d18154468b8736"} Nov 23 07:28:32 crc kubenswrapper[4681]: I1123 07:28:32.708880 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wjt8m" Nov 23 07:28:32 crc kubenswrapper[4681]: I1123 07:28:32.714686 4681 scope.go:117] "RemoveContainer" containerID="6c562e7a4c32b8495d768c09465f8e93a7850fe54ed2d7c98050cf3922205787" Nov 23 07:28:32 crc kubenswrapper[4681]: I1123 07:28:32.740339 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wjt8m"] Nov 23 07:28:32 crc kubenswrapper[4681]: I1123 07:28:32.755295 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wjt8m"] Nov 23 07:28:32 crc kubenswrapper[4681]: I1123 07:28:32.756570 4681 scope.go:117] "RemoveContainer" containerID="c49c4013f215e25e6e6f5fcaf5369e91b9bd7f032cc2ca2691649db530539b7a" Nov 23 07:28:32 crc kubenswrapper[4681]: I1123 07:28:32.793016 4681 scope.go:117] "RemoveContainer" containerID="cdbf775adea2f7e06b6a3ee5a04ccacde3ff759ee5b1b4aae536ea2a3410bcf2" Nov 23 07:28:32 crc kubenswrapper[4681]: I1123 07:28:32.837692 4681 scope.go:117] "RemoveContainer" containerID="6c562e7a4c32b8495d768c09465f8e93a7850fe54ed2d7c98050cf3922205787" Nov 23 07:28:32 crc kubenswrapper[4681]: E1123 07:28:32.841053 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c562e7a4c32b8495d768c09465f8e93a7850fe54ed2d7c98050cf3922205787\": container with ID starting with 6c562e7a4c32b8495d768c09465f8e93a7850fe54ed2d7c98050cf3922205787 not found: ID does not exist" containerID="6c562e7a4c32b8495d768c09465f8e93a7850fe54ed2d7c98050cf3922205787" Nov 23 07:28:32 crc kubenswrapper[4681]: I1123 07:28:32.842053 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c562e7a4c32b8495d768c09465f8e93a7850fe54ed2d7c98050cf3922205787"} err="failed to get container status \"6c562e7a4c32b8495d768c09465f8e93a7850fe54ed2d7c98050cf3922205787\": rpc error: code = NotFound desc = could not find container \"6c562e7a4c32b8495d768c09465f8e93a7850fe54ed2d7c98050cf3922205787\": container with ID starting with 6c562e7a4c32b8495d768c09465f8e93a7850fe54ed2d7c98050cf3922205787 not found: ID does not exist" Nov 23 07:28:32 crc kubenswrapper[4681]: I1123 07:28:32.842103 4681 scope.go:117] "RemoveContainer" containerID="c49c4013f215e25e6e6f5fcaf5369e91b9bd7f032cc2ca2691649db530539b7a" Nov 23 07:28:32 crc kubenswrapper[4681]: E1123 07:28:32.842390 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c49c4013f215e25e6e6f5fcaf5369e91b9bd7f032cc2ca2691649db530539b7a\": container with ID starting with c49c4013f215e25e6e6f5fcaf5369e91b9bd7f032cc2ca2691649db530539b7a not found: ID does not exist" containerID="c49c4013f215e25e6e6f5fcaf5369e91b9bd7f032cc2ca2691649db530539b7a" Nov 23 07:28:32 crc kubenswrapper[4681]: I1123 07:28:32.842430 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c49c4013f215e25e6e6f5fcaf5369e91b9bd7f032cc2ca2691649db530539b7a"} err="failed to get container status \"c49c4013f215e25e6e6f5fcaf5369e91b9bd7f032cc2ca2691649db530539b7a\": rpc error: code = NotFound desc = could not find container \"c49c4013f215e25e6e6f5fcaf5369e91b9bd7f032cc2ca2691649db530539b7a\": container with ID starting with c49c4013f215e25e6e6f5fcaf5369e91b9bd7f032cc2ca2691649db530539b7a not found: ID does not exist" Nov 23 07:28:32 crc kubenswrapper[4681]: I1123 07:28:32.842444 4681 scope.go:117] "RemoveContainer" 
containerID="cdbf775adea2f7e06b6a3ee5a04ccacde3ff759ee5b1b4aae536ea2a3410bcf2" Nov 23 07:28:32 crc kubenswrapper[4681]: E1123 07:28:32.842751 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdbf775adea2f7e06b6a3ee5a04ccacde3ff759ee5b1b4aae536ea2a3410bcf2\": container with ID starting with cdbf775adea2f7e06b6a3ee5a04ccacde3ff759ee5b1b4aae536ea2a3410bcf2 not found: ID does not exist" containerID="cdbf775adea2f7e06b6a3ee5a04ccacde3ff759ee5b1b4aae536ea2a3410bcf2" Nov 23 07:28:32 crc kubenswrapper[4681]: I1123 07:28:32.842771 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdbf775adea2f7e06b6a3ee5a04ccacde3ff759ee5b1b4aae536ea2a3410bcf2"} err="failed to get container status \"cdbf775adea2f7e06b6a3ee5a04ccacde3ff759ee5b1b4aae536ea2a3410bcf2\": rpc error: code = NotFound desc = could not find container \"cdbf775adea2f7e06b6a3ee5a04ccacde3ff759ee5b1b4aae536ea2a3410bcf2\": container with ID starting with cdbf775adea2f7e06b6a3ee5a04ccacde3ff759ee5b1b4aae536ea2a3410bcf2 not found: ID does not exist" Nov 23 07:28:33 crc kubenswrapper[4681]: I1123 07:28:33.261766 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3b9007a-6595-43ae-be1a-8ae7f9d46ef5" path="/var/lib/kubelet/pods/f3b9007a-6595-43ae-be1a-8ae7f9d46ef5/volumes" Nov 23 07:28:42 crc kubenswrapper[4681]: I1123 07:28:42.296169 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:28:42 crc kubenswrapper[4681]: I1123 07:28:42.296529 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:29:12 crc kubenswrapper[4681]: I1123 07:29:12.295727 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:29:12 crc kubenswrapper[4681]: I1123 07:29:12.296130 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:29:42 crc kubenswrapper[4681]: I1123 07:29:42.295851 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:29:42 crc kubenswrapper[4681]: I1123 07:29:42.296235 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:29:42 crc kubenswrapper[4681]: I1123 07:29:42.296281 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" Nov 23 07:29:42 crc kubenswrapper[4681]: I1123 07:29:42.297087 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e305db71846595ffb5ef89ffce233280d9d731be32838c5f52ee935532128d59"} pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 07:29:42 crc kubenswrapper[4681]: I1123 07:29:42.297147 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" containerID="cri-o://e305db71846595ffb5ef89ffce233280d9d731be32838c5f52ee935532128d59" gracePeriod=600 Nov 23 07:29:43 crc kubenswrapper[4681]: I1123 07:29:43.246356 4681 generic.go:334] "Generic (PLEG): container finished" podID="539dc58c-e752-43c8-bdef-af87528b76f3" containerID="e305db71846595ffb5ef89ffce233280d9d731be32838c5f52ee935532128d59" exitCode=0 Nov 23 07:29:43 crc kubenswrapper[4681]: I1123 07:29:43.246446 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerDied","Data":"e305db71846595ffb5ef89ffce233280d9d731be32838c5f52ee935532128d59"} Nov 23 07:29:43 crc kubenswrapper[4681]: I1123 07:29:43.246748 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerStarted","Data":"dfc51fd3ce1905d3f8d12b183e8bc77b3da93474df38f7da71bcae24ac9c701b"} Nov 23 07:29:43 crc kubenswrapper[4681]: I1123 07:29:43.246810 4681 scope.go:117] "RemoveContainer" containerID="6395679e8b90303362ef082b92adb8b7a5b62d563d9f789557862ea185bce935" Nov 23 07:29:54 crc kubenswrapper[4681]: I1123 07:29:54.981034 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4jpm2"] Nov 23 07:29:54 crc kubenswrapper[4681]: E1123 07:29:54.984748 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3b9007a-6595-43ae-be1a-8ae7f9d46ef5" containerName="registry-server" Nov 23 07:29:54 crc kubenswrapper[4681]: I1123 07:29:54.984789 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3b9007a-6595-43ae-be1a-8ae7f9d46ef5" containerName="registry-server" Nov 23 07:29:54 crc kubenswrapper[4681]: E1123 07:29:54.984865 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3b9007a-6595-43ae-be1a-8ae7f9d46ef5" containerName="extract-content" Nov 23 07:29:54 crc kubenswrapper[4681]: I1123 07:29:54.984872 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3b9007a-6595-43ae-be1a-8ae7f9d46ef5" containerName="extract-content" Nov 23 07:29:54 crc kubenswrapper[4681]: E1123 07:29:54.984903 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3b9007a-6595-43ae-be1a-8ae7f9d46ef5" containerName="extract-utilities" Nov 23 07:29:54 crc kubenswrapper[4681]: I1123 07:29:54.984915 4681 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f3b9007a-6595-43ae-be1a-8ae7f9d46ef5" containerName="extract-utilities" Nov 23 07:29:54 crc kubenswrapper[4681]: I1123 07:29:54.986128 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3b9007a-6595-43ae-be1a-8ae7f9d46ef5" containerName="registry-server" Nov 23 07:29:54 crc kubenswrapper[4681]: I1123 07:29:54.991667 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4jpm2" Nov 23 07:29:54 crc kubenswrapper[4681]: I1123 07:29:54.998700 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4jpm2"] Nov 23 07:29:55 crc kubenswrapper[4681]: I1123 07:29:55.025679 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6wtz\" (UniqueName: \"kubernetes.io/projected/92e58a56-6aed-4a34-acb9-7b9e9790018b-kube-api-access-w6wtz\") pod \"community-operators-4jpm2\" (UID: \"92e58a56-6aed-4a34-acb9-7b9e9790018b\") " pod="openshift-marketplace/community-operators-4jpm2" Nov 23 07:29:55 crc kubenswrapper[4681]: I1123 07:29:55.025768 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92e58a56-6aed-4a34-acb9-7b9e9790018b-catalog-content\") pod \"community-operators-4jpm2\" (UID: \"92e58a56-6aed-4a34-acb9-7b9e9790018b\") " pod="openshift-marketplace/community-operators-4jpm2" Nov 23 07:29:55 crc kubenswrapper[4681]: I1123 07:29:55.025807 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92e58a56-6aed-4a34-acb9-7b9e9790018b-utilities\") pod \"community-operators-4jpm2\" (UID: \"92e58a56-6aed-4a34-acb9-7b9e9790018b\") " pod="openshift-marketplace/community-operators-4jpm2" Nov 23 07:29:55 crc kubenswrapper[4681]: I1123 07:29:55.127086 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6wtz\" (UniqueName: \"kubernetes.io/projected/92e58a56-6aed-4a34-acb9-7b9e9790018b-kube-api-access-w6wtz\") pod \"community-operators-4jpm2\" (UID: \"92e58a56-6aed-4a34-acb9-7b9e9790018b\") " pod="openshift-marketplace/community-operators-4jpm2" Nov 23 07:29:55 crc kubenswrapper[4681]: I1123 07:29:55.127174 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92e58a56-6aed-4a34-acb9-7b9e9790018b-catalog-content\") pod \"community-operators-4jpm2\" (UID: \"92e58a56-6aed-4a34-acb9-7b9e9790018b\") " pod="openshift-marketplace/community-operators-4jpm2" Nov 23 07:29:55 crc kubenswrapper[4681]: I1123 07:29:55.127202 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92e58a56-6aed-4a34-acb9-7b9e9790018b-utilities\") pod \"community-operators-4jpm2\" (UID: \"92e58a56-6aed-4a34-acb9-7b9e9790018b\") " pod="openshift-marketplace/community-operators-4jpm2" Nov 23 07:29:55 crc kubenswrapper[4681]: I1123 07:29:55.128594 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92e58a56-6aed-4a34-acb9-7b9e9790018b-catalog-content\") pod \"community-operators-4jpm2\" (UID: \"92e58a56-6aed-4a34-acb9-7b9e9790018b\") " pod="openshift-marketplace/community-operators-4jpm2" Nov 23 07:29:55 crc kubenswrapper[4681]: I1123 07:29:55.128807 4681 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92e58a56-6aed-4a34-acb9-7b9e9790018b-utilities\") pod \"community-operators-4jpm2\" (UID: \"92e58a56-6aed-4a34-acb9-7b9e9790018b\") " pod="openshift-marketplace/community-operators-4jpm2" Nov 23 07:29:55 crc kubenswrapper[4681]: I1123 07:29:55.153436 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6wtz\" (UniqueName: \"kubernetes.io/projected/92e58a56-6aed-4a34-acb9-7b9e9790018b-kube-api-access-w6wtz\") pod \"community-operators-4jpm2\" (UID: \"92e58a56-6aed-4a34-acb9-7b9e9790018b\") " pod="openshift-marketplace/community-operators-4jpm2" Nov 23 07:29:55 crc kubenswrapper[4681]: I1123 07:29:55.311965 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4jpm2" Nov 23 07:29:55 crc kubenswrapper[4681]: E1123 07:29:55.467639 4681 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.26.82:53912->192.168.26.82:41655: write tcp 192.168.26.82:53912->192.168.26.82:41655: write: connection reset by peer Nov 23 07:29:56 crc kubenswrapper[4681]: I1123 07:29:56.066615 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4jpm2"] Nov 23 07:29:56 crc kubenswrapper[4681]: I1123 07:29:56.357848 4681 generic.go:334] "Generic (PLEG): container finished" podID="92e58a56-6aed-4a34-acb9-7b9e9790018b" containerID="aa4d2af4bf9a56bff7be815a78a23aae38cfd06c2e0cc35164cd313304f70dcb" exitCode=0 Nov 23 07:29:56 crc kubenswrapper[4681]: I1123 07:29:56.357950 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4jpm2" event={"ID":"92e58a56-6aed-4a34-acb9-7b9e9790018b","Type":"ContainerDied","Data":"aa4d2af4bf9a56bff7be815a78a23aae38cfd06c2e0cc35164cd313304f70dcb"} Nov 23 07:29:56 crc kubenswrapper[4681]: I1123 07:29:56.358080 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4jpm2" event={"ID":"92e58a56-6aed-4a34-acb9-7b9e9790018b","Type":"ContainerStarted","Data":"fb567d0aa69caceecb07d3ae8e0aed4a7f5e50b06b27c5708614f10ea41e2611"} Nov 23 07:30:00 crc kubenswrapper[4681]: I1123 07:30:00.343921 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398050-pqx4s"] Nov 23 07:30:00 crc kubenswrapper[4681]: I1123 07:30:00.355892 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-pqx4s" Nov 23 07:30:00 crc kubenswrapper[4681]: I1123 07:30:00.432611 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4gm5\" (UniqueName: \"kubernetes.io/projected/712f3249-3396-4198-85ad-a74af10b9c24-kube-api-access-d4gm5\") pod \"collect-profiles-29398050-pqx4s\" (UID: \"712f3249-3396-4198-85ad-a74af10b9c24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-pqx4s" Nov 23 07:30:00 crc kubenswrapper[4681]: I1123 07:30:00.432981 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/712f3249-3396-4198-85ad-a74af10b9c24-config-volume\") pod \"collect-profiles-29398050-pqx4s\" (UID: \"712f3249-3396-4198-85ad-a74af10b9c24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-pqx4s" Nov 23 07:30:00 crc kubenswrapper[4681]: I1123 07:30:00.433079 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/712f3249-3396-4198-85ad-a74af10b9c24-secret-volume\") pod \"collect-profiles-29398050-pqx4s\" (UID: \"712f3249-3396-4198-85ad-a74af10b9c24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-pqx4s" Nov 23 07:30:00 crc kubenswrapper[4681]: I1123 07:30:00.433252 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 07:30:00 crc kubenswrapper[4681]: I1123 07:30:00.433689 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 07:30:00 crc kubenswrapper[4681]: I1123 07:30:00.461284 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398050-pqx4s"] Nov 23 07:30:00 crc kubenswrapper[4681]: I1123 07:30:00.536176 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/712f3249-3396-4198-85ad-a74af10b9c24-config-volume\") pod \"collect-profiles-29398050-pqx4s\" (UID: \"712f3249-3396-4198-85ad-a74af10b9c24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-pqx4s" Nov 23 07:30:00 crc kubenswrapper[4681]: I1123 07:30:00.536259 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/712f3249-3396-4198-85ad-a74af10b9c24-secret-volume\") pod \"collect-profiles-29398050-pqx4s\" (UID: \"712f3249-3396-4198-85ad-a74af10b9c24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-pqx4s" Nov 23 07:30:00 crc kubenswrapper[4681]: I1123 07:30:00.536396 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4gm5\" (UniqueName: \"kubernetes.io/projected/712f3249-3396-4198-85ad-a74af10b9c24-kube-api-access-d4gm5\") pod \"collect-profiles-29398050-pqx4s\" (UID: \"712f3249-3396-4198-85ad-a74af10b9c24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-pqx4s" Nov 23 07:30:00 crc kubenswrapper[4681]: I1123 07:30:00.537422 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/712f3249-3396-4198-85ad-a74af10b9c24-config-volume\") pod 
\"collect-profiles-29398050-pqx4s\" (UID: \"712f3249-3396-4198-85ad-a74af10b9c24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-pqx4s" Nov 23 07:30:00 crc kubenswrapper[4681]: I1123 07:30:00.550945 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4gm5\" (UniqueName: \"kubernetes.io/projected/712f3249-3396-4198-85ad-a74af10b9c24-kube-api-access-d4gm5\") pod \"collect-profiles-29398050-pqx4s\" (UID: \"712f3249-3396-4198-85ad-a74af10b9c24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-pqx4s" Nov 23 07:30:00 crc kubenswrapper[4681]: I1123 07:30:00.560238 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/712f3249-3396-4198-85ad-a74af10b9c24-secret-volume\") pod \"collect-profiles-29398050-pqx4s\" (UID: \"712f3249-3396-4198-85ad-a74af10b9c24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-pqx4s" Nov 23 07:30:00 crc kubenswrapper[4681]: I1123 07:30:00.713926 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-pqx4s" Nov 23 07:30:03 crc kubenswrapper[4681]: I1123 07:30:03.450261 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4jpm2" event={"ID":"92e58a56-6aed-4a34-acb9-7b9e9790018b","Type":"ContainerStarted","Data":"7c92a89148fbaa68242623696d234137d8bff8c6b52c71c6fb49e4d208a5461f"} Nov 23 07:30:03 crc kubenswrapper[4681]: I1123 07:30:03.762940 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398050-pqx4s"] Nov 23 07:30:04 crc kubenswrapper[4681]: I1123 07:30:04.459960 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-pqx4s" event={"ID":"712f3249-3396-4198-85ad-a74af10b9c24","Type":"ContainerStarted","Data":"1400425a50a57d3e4717335fe26b4dff258a4d9dd7a31eef5ba7e90660b4ab89"} Nov 23 07:30:04 crc kubenswrapper[4681]: I1123 07:30:04.460367 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-pqx4s" event={"ID":"712f3249-3396-4198-85ad-a74af10b9c24","Type":"ContainerStarted","Data":"ea11c450b4e94923f8015b04f66690261cebbf3a6368e79152a56d2d5af792da"} Nov 23 07:30:04 crc kubenswrapper[4681]: I1123 07:30:04.477740 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-pqx4s" podStartSLOduration=4.476602601 podStartE2EDuration="4.476602601s" podCreationTimestamp="2025-11-23 07:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:30:04.474388889 +0000 UTC m=+2741.543898126" watchObservedRunningTime="2025-11-23 07:30:04.476602601 +0000 UTC m=+2741.546111838" Nov 23 07:30:05 crc kubenswrapper[4681]: I1123 07:30:05.472972 4681 generic.go:334] "Generic (PLEG): container finished" podID="712f3249-3396-4198-85ad-a74af10b9c24" containerID="1400425a50a57d3e4717335fe26b4dff258a4d9dd7a31eef5ba7e90660b4ab89" exitCode=0 Nov 23 07:30:05 crc kubenswrapper[4681]: I1123 07:30:05.473083 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-pqx4s" 
event={"ID":"712f3249-3396-4198-85ad-a74af10b9c24","Type":"ContainerDied","Data":"1400425a50a57d3e4717335fe26b4dff258a4d9dd7a31eef5ba7e90660b4ab89"} Nov 23 07:30:05 crc kubenswrapper[4681]: I1123 07:30:05.474869 4681 generic.go:334] "Generic (PLEG): container finished" podID="92e58a56-6aed-4a34-acb9-7b9e9790018b" containerID="7c92a89148fbaa68242623696d234137d8bff8c6b52c71c6fb49e4d208a5461f" exitCode=0 Nov 23 07:30:05 crc kubenswrapper[4681]: I1123 07:30:05.474904 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4jpm2" event={"ID":"92e58a56-6aed-4a34-acb9-7b9e9790018b","Type":"ContainerDied","Data":"7c92a89148fbaa68242623696d234137d8bff8c6b52c71c6fb49e4d208a5461f"} Nov 23 07:30:06 crc kubenswrapper[4681]: I1123 07:30:06.485490 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4jpm2" event={"ID":"92e58a56-6aed-4a34-acb9-7b9e9790018b","Type":"ContainerStarted","Data":"07bff2799b279b0ac8dc5d14bc991329182d9b5fc8d791d064dd3541ace52ec6"} Nov 23 07:30:06 crc kubenswrapper[4681]: I1123 07:30:06.513044 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4jpm2" podStartSLOduration=2.796233247 podStartE2EDuration="12.513025404s" podCreationTimestamp="2025-11-23 07:29:54 +0000 UTC" firstStartedPulling="2025-11-23 07:29:56.359306371 +0000 UTC m=+2733.428815609" lastFinishedPulling="2025-11-23 07:30:06.07609853 +0000 UTC m=+2743.145607766" observedRunningTime="2025-11-23 07:30:06.505903971 +0000 UTC m=+2743.575413207" watchObservedRunningTime="2025-11-23 07:30:06.513025404 +0000 UTC m=+2743.582534641" Nov 23 07:30:07 crc kubenswrapper[4681]: I1123 07:30:07.088934 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-pqx4s" Nov 23 07:30:07 crc kubenswrapper[4681]: I1123 07:30:07.214131 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/712f3249-3396-4198-85ad-a74af10b9c24-config-volume\") pod \"712f3249-3396-4198-85ad-a74af10b9c24\" (UID: \"712f3249-3396-4198-85ad-a74af10b9c24\") " Nov 23 07:30:07 crc kubenswrapper[4681]: I1123 07:30:07.214191 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4gm5\" (UniqueName: \"kubernetes.io/projected/712f3249-3396-4198-85ad-a74af10b9c24-kube-api-access-d4gm5\") pod \"712f3249-3396-4198-85ad-a74af10b9c24\" (UID: \"712f3249-3396-4198-85ad-a74af10b9c24\") " Nov 23 07:30:07 crc kubenswrapper[4681]: I1123 07:30:07.214365 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/712f3249-3396-4198-85ad-a74af10b9c24-secret-volume\") pod \"712f3249-3396-4198-85ad-a74af10b9c24\" (UID: \"712f3249-3396-4198-85ad-a74af10b9c24\") " Nov 23 07:30:07 crc kubenswrapper[4681]: I1123 07:30:07.217510 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/712f3249-3396-4198-85ad-a74af10b9c24-config-volume" (OuterVolumeSpecName: "config-volume") pod "712f3249-3396-4198-85ad-a74af10b9c24" (UID: "712f3249-3396-4198-85ad-a74af10b9c24"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:30:07 crc kubenswrapper[4681]: I1123 07:30:07.268603 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/712f3249-3396-4198-85ad-a74af10b9c24-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "712f3249-3396-4198-85ad-a74af10b9c24" (UID: "712f3249-3396-4198-85ad-a74af10b9c24"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:30:07 crc kubenswrapper[4681]: I1123 07:30:07.275704 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/712f3249-3396-4198-85ad-a74af10b9c24-kube-api-access-d4gm5" (OuterVolumeSpecName: "kube-api-access-d4gm5") pod "712f3249-3396-4198-85ad-a74af10b9c24" (UID: "712f3249-3396-4198-85ad-a74af10b9c24"). InnerVolumeSpecName "kube-api-access-d4gm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:30:07 crc kubenswrapper[4681]: I1123 07:30:07.316889 4681 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/712f3249-3396-4198-85ad-a74af10b9c24-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 07:30:07 crc kubenswrapper[4681]: I1123 07:30:07.316918 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4gm5\" (UniqueName: \"kubernetes.io/projected/712f3249-3396-4198-85ad-a74af10b9c24-kube-api-access-d4gm5\") on node \"crc\" DevicePath \"\"" Nov 23 07:30:07 crc kubenswrapper[4681]: I1123 07:30:07.316931 4681 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/712f3249-3396-4198-85ad-a74af10b9c24-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 07:30:07 crc kubenswrapper[4681]: I1123 07:30:07.503110 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-pqx4s" event={"ID":"712f3249-3396-4198-85ad-a74af10b9c24","Type":"ContainerDied","Data":"ea11c450b4e94923f8015b04f66690261cebbf3a6368e79152a56d2d5af792da"} Nov 23 07:30:07 crc kubenswrapper[4681]: I1123 07:30:07.503703 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea11c450b4e94923f8015b04f66690261cebbf3a6368e79152a56d2d5af792da" Nov 23 07:30:07 crc kubenswrapper[4681]: I1123 07:30:07.503171 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-pqx4s" Nov 23 07:30:08 crc kubenswrapper[4681]: I1123 07:30:08.286593 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398005-5x47l"] Nov 23 07:30:08 crc kubenswrapper[4681]: I1123 07:30:08.307524 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398005-5x47l"] Nov 23 07:30:09 crc kubenswrapper[4681]: I1123 07:30:09.263207 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="882fc762-16ff-41a8-917d-e6b327a4adb5" path="/var/lib/kubelet/pods/882fc762-16ff-41a8-917d-e6b327a4adb5/volumes" Nov 23 07:30:11 crc kubenswrapper[4681]: I1123 07:30:11.655473 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w4p6m"] Nov 23 07:30:11 crc kubenswrapper[4681]: E1123 07:30:11.659418 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="712f3249-3396-4198-85ad-a74af10b9c24" containerName="collect-profiles" Nov 23 07:30:11 crc kubenswrapper[4681]: I1123 07:30:11.659450 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="712f3249-3396-4198-85ad-a74af10b9c24" containerName="collect-profiles" Nov 23 07:30:11 crc kubenswrapper[4681]: I1123 07:30:11.660429 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="712f3249-3396-4198-85ad-a74af10b9c24" containerName="collect-profiles" Nov 23 07:30:11 crc kubenswrapper[4681]: I1123 07:30:11.665185 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w4p6m" Nov 23 07:30:11 crc kubenswrapper[4681]: I1123 07:30:11.713922 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/022e2817-2e83-4dfa-9869-62602fcca3e1-utilities\") pod \"redhat-operators-w4p6m\" (UID: \"022e2817-2e83-4dfa-9869-62602fcca3e1\") " pod="openshift-marketplace/redhat-operators-w4p6m" Nov 23 07:30:11 crc kubenswrapper[4681]: I1123 07:30:11.714444 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khdgf\" (UniqueName: \"kubernetes.io/projected/022e2817-2e83-4dfa-9869-62602fcca3e1-kube-api-access-khdgf\") pod \"redhat-operators-w4p6m\" (UID: \"022e2817-2e83-4dfa-9869-62602fcca3e1\") " pod="openshift-marketplace/redhat-operators-w4p6m" Nov 23 07:30:11 crc kubenswrapper[4681]: I1123 07:30:11.714574 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/022e2817-2e83-4dfa-9869-62602fcca3e1-catalog-content\") pod \"redhat-operators-w4p6m\" (UID: \"022e2817-2e83-4dfa-9869-62602fcca3e1\") " pod="openshift-marketplace/redhat-operators-w4p6m" Nov 23 07:30:11 crc kubenswrapper[4681]: I1123 07:30:11.817729 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/022e2817-2e83-4dfa-9869-62602fcca3e1-catalog-content\") pod \"redhat-operators-w4p6m\" (UID: \"022e2817-2e83-4dfa-9869-62602fcca3e1\") " pod="openshift-marketplace/redhat-operators-w4p6m" Nov 23 07:30:11 crc kubenswrapper[4681]: I1123 07:30:11.817966 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/022e2817-2e83-4dfa-9869-62602fcca3e1-utilities\") pod \"redhat-operators-w4p6m\" (UID: \"022e2817-2e83-4dfa-9869-62602fcca3e1\") " pod="openshift-marketplace/redhat-operators-w4p6m" Nov 23 07:30:11 crc kubenswrapper[4681]: I1123 07:30:11.818312 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khdgf\" (UniqueName: \"kubernetes.io/projected/022e2817-2e83-4dfa-9869-62602fcca3e1-kube-api-access-khdgf\") pod \"redhat-operators-w4p6m\" (UID: \"022e2817-2e83-4dfa-9869-62602fcca3e1\") " pod="openshift-marketplace/redhat-operators-w4p6m" Nov 23 07:30:11 crc kubenswrapper[4681]: I1123 07:30:11.824523 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/022e2817-2e83-4dfa-9869-62602fcca3e1-catalog-content\") pod \"redhat-operators-w4p6m\" (UID: \"022e2817-2e83-4dfa-9869-62602fcca3e1\") " pod="openshift-marketplace/redhat-operators-w4p6m" Nov 23 07:30:11 crc kubenswrapper[4681]: I1123 07:30:11.824600 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/022e2817-2e83-4dfa-9869-62602fcca3e1-utilities\") pod \"redhat-operators-w4p6m\" (UID: \"022e2817-2e83-4dfa-9869-62602fcca3e1\") " pod="openshift-marketplace/redhat-operators-w4p6m" Nov 23 07:30:11 crc kubenswrapper[4681]: I1123 07:30:11.880308 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khdgf\" (UniqueName: \"kubernetes.io/projected/022e2817-2e83-4dfa-9869-62602fcca3e1-kube-api-access-khdgf\") pod \"redhat-operators-w4p6m\" (UID: \"022e2817-2e83-4dfa-9869-62602fcca3e1\") " pod="openshift-marketplace/redhat-operators-w4p6m" Nov 23 07:30:11 crc kubenswrapper[4681]: I1123 07:30:11.931767 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w4p6m"] Nov 23 07:30:11 crc kubenswrapper[4681]: I1123 07:30:11.990796 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w4p6m" Nov 23 07:30:13 crc kubenswrapper[4681]: I1123 07:30:13.198767 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w4p6m"] Nov 23 07:30:13 crc kubenswrapper[4681]: I1123 07:30:13.559141 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w4p6m" event={"ID":"022e2817-2e83-4dfa-9869-62602fcca3e1","Type":"ContainerDied","Data":"8f770ea70e1e952efdae73dc29fedd841edb7c5cad5e6d3b06ace6086a4aa7c2"} Nov 23 07:30:13 crc kubenswrapper[4681]: I1123 07:30:13.559311 4681 generic.go:334] "Generic (PLEG): container finished" podID="022e2817-2e83-4dfa-9869-62602fcca3e1" containerID="8f770ea70e1e952efdae73dc29fedd841edb7c5cad5e6d3b06ace6086a4aa7c2" exitCode=0 Nov 23 07:30:13 crc kubenswrapper[4681]: I1123 07:30:13.559570 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w4p6m" event={"ID":"022e2817-2e83-4dfa-9869-62602fcca3e1","Type":"ContainerStarted","Data":"c9b3936856b0920e83d3b926c7656bc221138edf24ff0ba2f36f5abcd3a0a3ac"} Nov 23 07:30:13 crc kubenswrapper[4681]: E1123 07:30:13.702152 4681 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod022e2817_2e83_4dfa_9869_62602fcca3e1.slice/crio-conmon-8f770ea70e1e952efdae73dc29fedd841edb7c5cad5e6d3b06ace6086a4aa7c2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod022e2817_2e83_4dfa_9869_62602fcca3e1.slice/crio-8f770ea70e1e952efdae73dc29fedd841edb7c5cad5e6d3b06ace6086a4aa7c2.scope\": RecentStats: unable to find data in memory cache]" Nov 23 07:30:14 crc kubenswrapper[4681]: I1123 07:30:14.572776 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w4p6m" event={"ID":"022e2817-2e83-4dfa-9869-62602fcca3e1","Type":"ContainerStarted","Data":"21d99b4a7736406f447913c7609b281c5e2badfffe35c6aa4e5b4c51af7f18c5"} Nov 23 07:30:15 crc kubenswrapper[4681]: I1123 07:30:15.312323 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4jpm2" Nov 23 07:30:15 crc kubenswrapper[4681]: I1123 07:30:15.312656 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4jpm2" Nov 23 07:30:16 crc kubenswrapper[4681]: I1123 07:30:16.548765 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-4jpm2" podUID="92e58a56-6aed-4a34-acb9-7b9e9790018b" containerName="registry-server" probeResult="failure" output=< Nov 23 07:30:16 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Nov 23 07:30:16 crc kubenswrapper[4681]: > Nov 23 07:30:19 crc kubenswrapper[4681]: I1123 07:30:19.617786 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w4p6m" event={"ID":"022e2817-2e83-4dfa-9869-62602fcca3e1","Type":"ContainerDied","Data":"21d99b4a7736406f447913c7609b281c5e2badfffe35c6aa4e5b4c51af7f18c5"} Nov 23 07:30:19 crc kubenswrapper[4681]: I1123 07:30:19.617799 4681 generic.go:334] "Generic (PLEG): container finished" podID="022e2817-2e83-4dfa-9869-62602fcca3e1" containerID="21d99b4a7736406f447913c7609b281c5e2badfffe35c6aa4e5b4c51af7f18c5" exitCode=0 Nov 23 07:30:20 crc kubenswrapper[4681]: I1123 
07:30:20.630023 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w4p6m" event={"ID":"022e2817-2e83-4dfa-9869-62602fcca3e1","Type":"ContainerStarted","Data":"c303bef3e5140049606154b1e86f6bf0d10b89fc19aacef5ef5413a96ccdf32e"} Nov 23 07:30:20 crc kubenswrapper[4681]: I1123 07:30:20.651898 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w4p6m" podStartSLOduration=2.97775518 podStartE2EDuration="9.650691149s" podCreationTimestamp="2025-11-23 07:30:11 +0000 UTC" firstStartedPulling="2025-11-23 07:30:13.560574575 +0000 UTC m=+2750.630083812" lastFinishedPulling="2025-11-23 07:30:20.233510544 +0000 UTC m=+2757.303019781" observedRunningTime="2025-11-23 07:30:20.648412073 +0000 UTC m=+2757.717921310" watchObservedRunningTime="2025-11-23 07:30:20.650691149 +0000 UTC m=+2757.720200387" Nov 23 07:30:21 crc kubenswrapper[4681]: I1123 07:30:21.992784 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w4p6m" Nov 23 07:30:21 crc kubenswrapper[4681]: I1123 07:30:21.992896 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w4p6m" Nov 23 07:30:23 crc kubenswrapper[4681]: I1123 07:30:23.076588 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w4p6m" podUID="022e2817-2e83-4dfa-9869-62602fcca3e1" containerName="registry-server" probeResult="failure" output=< Nov 23 07:30:23 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Nov 23 07:30:23 crc kubenswrapper[4681]: > Nov 23 07:30:25 crc kubenswrapper[4681]: I1123 07:30:25.361593 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4jpm2" Nov 23 07:30:25 crc kubenswrapper[4681]: I1123 07:30:25.413322 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4jpm2" Nov 23 07:30:26 crc kubenswrapper[4681]: I1123 07:30:26.037873 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4jpm2"] Nov 23 07:30:26 crc kubenswrapper[4681]: I1123 07:30:26.237832 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xs7np"] Nov 23 07:30:26 crc kubenswrapper[4681]: I1123 07:30:26.240443 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xs7np" podUID="a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39" containerName="registry-server" containerID="cri-o://faa3be77291c7a00e9a80e713312de5e94f13e6ae8396f0dec5ce80b7c857576" gracePeriod=2 Nov 23 07:30:26 crc kubenswrapper[4681]: I1123 07:30:26.690424 4681 generic.go:334] "Generic (PLEG): container finished" podID="a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39" containerID="faa3be77291c7a00e9a80e713312de5e94f13e6ae8396f0dec5ce80b7c857576" exitCode=0 Nov 23 07:30:26 crc kubenswrapper[4681]: I1123 07:30:26.690506 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xs7np" event={"ID":"a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39","Type":"ContainerDied","Data":"faa3be77291c7a00e9a80e713312de5e94f13e6ae8396f0dec5ce80b7c857576"} Nov 23 07:30:27 crc kubenswrapper[4681]: I1123 07:30:27.296080 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xs7np" Nov 23 07:30:27 crc kubenswrapper[4681]: I1123 07:30:27.409887 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-468zz\" (UniqueName: \"kubernetes.io/projected/a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39-kube-api-access-468zz\") pod \"a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39\" (UID: \"a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39\") " Nov 23 07:30:27 crc kubenswrapper[4681]: I1123 07:30:27.410144 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39-catalog-content\") pod \"a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39\" (UID: \"a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39\") " Nov 23 07:30:27 crc kubenswrapper[4681]: I1123 07:30:27.410288 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39-utilities\") pod \"a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39\" (UID: \"a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39\") " Nov 23 07:30:27 crc kubenswrapper[4681]: I1123 07:30:27.418130 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39-utilities" (OuterVolumeSpecName: "utilities") pod "a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39" (UID: "a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:30:27 crc kubenswrapper[4681]: I1123 07:30:27.448025 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39-kube-api-access-468zz" (OuterVolumeSpecName: "kube-api-access-468zz") pod "a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39" (UID: "a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39"). InnerVolumeSpecName "kube-api-access-468zz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:30:27 crc kubenswrapper[4681]: I1123 07:30:27.516129 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:30:27 crc kubenswrapper[4681]: I1123 07:30:27.516428 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-468zz\" (UniqueName: \"kubernetes.io/projected/a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39-kube-api-access-468zz\") on node \"crc\" DevicePath \"\"" Nov 23 07:30:27 crc kubenswrapper[4681]: I1123 07:30:27.551590 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39" (UID: "a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:30:27 crc kubenswrapper[4681]: I1123 07:30:27.619240 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:30:27 crc kubenswrapper[4681]: I1123 07:30:27.702027 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xs7np" event={"ID":"a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39","Type":"ContainerDied","Data":"0e27211a33208f6762f02e43d89b784638f15f31ef8a7b869a007065e8c1c578"} Nov 23 07:30:27 crc kubenswrapper[4681]: I1123 07:30:27.702145 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xs7np" Nov 23 07:30:27 crc kubenswrapper[4681]: I1123 07:30:27.702385 4681 scope.go:117] "RemoveContainer" containerID="faa3be77291c7a00e9a80e713312de5e94f13e6ae8396f0dec5ce80b7c857576" Nov 23 07:30:27 crc kubenswrapper[4681]: I1123 07:30:27.740326 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xs7np"] Nov 23 07:30:27 crc kubenswrapper[4681]: I1123 07:30:27.747409 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xs7np"] Nov 23 07:30:27 crc kubenswrapper[4681]: I1123 07:30:27.748029 4681 scope.go:117] "RemoveContainer" containerID="39ca7fb3d78fc592819e641165de5f5670907ca18e3ecd9cd75a7faae7eedc80" Nov 23 07:30:27 crc kubenswrapper[4681]: I1123 07:30:27.781286 4681 scope.go:117] "RemoveContainer" containerID="c8682d07698e6e870831970c8b69b68b675cdeaed5eb69f5e6afccee86a991c7" Nov 23 07:30:29 crc kubenswrapper[4681]: I1123 07:30:29.264177 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39" path="/var/lib/kubelet/pods/a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39/volumes" Nov 23 07:30:33 crc kubenswrapper[4681]: I1123 07:30:33.094940 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w4p6m" podUID="022e2817-2e83-4dfa-9869-62602fcca3e1" containerName="registry-server" probeResult="failure" output=< Nov 23 07:30:33 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Nov 23 07:30:33 crc kubenswrapper[4681]: > Nov 23 07:30:36 crc kubenswrapper[4681]: I1123 07:30:36.933549 4681 patch_prober.go:28] interesting pod/oauth-openshift-77bfbb8d5b-pqrss container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.54:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 23 07:30:36 crc kubenswrapper[4681]: I1123 07:30:36.934943 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" podUID="416e7577-b33c-4406-aae2-68effb4e54be" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.54:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 23 07:30:36 crc kubenswrapper[4681]: I1123 07:30:36.950670 4681 patch_prober.go:28] interesting pod/oauth-openshift-77bfbb8d5b-pqrss container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.54:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 
23 07:30:36 crc kubenswrapper[4681]: I1123 07:30:36.950742 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-77bfbb8d5b-pqrss" podUID="416e7577-b33c-4406-aae2-68effb4e54be" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.54:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 23 07:30:43 crc kubenswrapper[4681]: I1123 07:30:43.036630 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w4p6m" podUID="022e2817-2e83-4dfa-9869-62602fcca3e1" containerName="registry-server" probeResult="failure" output=< Nov 23 07:30:43 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Nov 23 07:30:43 crc kubenswrapper[4681]: > Nov 23 07:30:46 crc kubenswrapper[4681]: I1123 07:30:46.519920 4681 scope.go:117] "RemoveContainer" containerID="2c7079e9d2755aa8d092108a943e2f2d6759a6862746e953824159a3f4a15531" Nov 23 07:30:48 crc kubenswrapper[4681]: I1123 07:30:48.459569 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zttlm"] Nov 23 07:30:48 crc kubenswrapper[4681]: E1123 07:30:48.462306 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39" containerName="registry-server" Nov 23 07:30:48 crc kubenswrapper[4681]: I1123 07:30:48.462331 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39" containerName="registry-server" Nov 23 07:30:48 crc kubenswrapper[4681]: E1123 07:30:48.462798 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39" containerName="extract-content" Nov 23 07:30:48 crc kubenswrapper[4681]: I1123 07:30:48.462806 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39" containerName="extract-content" Nov 23 07:30:48 crc kubenswrapper[4681]: E1123 07:30:48.462822 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39" containerName="extract-utilities" Nov 23 07:30:48 crc kubenswrapper[4681]: I1123 07:30:48.462828 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39" containerName="extract-utilities" Nov 23 07:30:48 crc kubenswrapper[4681]: I1123 07:30:48.464186 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2cd2dfb-b72c-4fad-9a4d-13dd73dcbb39" containerName="registry-server" Nov 23 07:30:48 crc kubenswrapper[4681]: I1123 07:30:48.469632 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zttlm" Nov 23 07:30:48 crc kubenswrapper[4681]: I1123 07:30:48.499198 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f2c49df-a492-4902-9027-399f1933f5b2-utilities\") pod \"redhat-marketplace-zttlm\" (UID: \"2f2c49df-a492-4902-9027-399f1933f5b2\") " pod="openshift-marketplace/redhat-marketplace-zttlm" Nov 23 07:30:48 crc kubenswrapper[4681]: I1123 07:30:48.499366 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2chn\" (UniqueName: \"kubernetes.io/projected/2f2c49df-a492-4902-9027-399f1933f5b2-kube-api-access-q2chn\") pod \"redhat-marketplace-zttlm\" (UID: \"2f2c49df-a492-4902-9027-399f1933f5b2\") " pod="openshift-marketplace/redhat-marketplace-zttlm" Nov 23 07:30:48 crc kubenswrapper[4681]: I1123 07:30:48.499447 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f2c49df-a492-4902-9027-399f1933f5b2-catalog-content\") pod \"redhat-marketplace-zttlm\" (UID: \"2f2c49df-a492-4902-9027-399f1933f5b2\") " pod="openshift-marketplace/redhat-marketplace-zttlm" Nov 23 07:30:48 crc kubenswrapper[4681]: I1123 07:30:48.601478 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2chn\" (UniqueName: \"kubernetes.io/projected/2f2c49df-a492-4902-9027-399f1933f5b2-kube-api-access-q2chn\") pod \"redhat-marketplace-zttlm\" (UID: \"2f2c49df-a492-4902-9027-399f1933f5b2\") " pod="openshift-marketplace/redhat-marketplace-zttlm" Nov 23 07:30:48 crc kubenswrapper[4681]: I1123 07:30:48.601823 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f2c49df-a492-4902-9027-399f1933f5b2-catalog-content\") pod \"redhat-marketplace-zttlm\" (UID: \"2f2c49df-a492-4902-9027-399f1933f5b2\") " pod="openshift-marketplace/redhat-marketplace-zttlm" Nov 23 07:30:48 crc kubenswrapper[4681]: I1123 07:30:48.602117 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f2c49df-a492-4902-9027-399f1933f5b2-utilities\") pod \"redhat-marketplace-zttlm\" (UID: \"2f2c49df-a492-4902-9027-399f1933f5b2\") " pod="openshift-marketplace/redhat-marketplace-zttlm" Nov 23 07:30:48 crc kubenswrapper[4681]: I1123 07:30:48.604158 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f2c49df-a492-4902-9027-399f1933f5b2-catalog-content\") pod \"redhat-marketplace-zttlm\" (UID: \"2f2c49df-a492-4902-9027-399f1933f5b2\") " pod="openshift-marketplace/redhat-marketplace-zttlm" Nov 23 07:30:48 crc kubenswrapper[4681]: I1123 07:30:48.604949 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f2c49df-a492-4902-9027-399f1933f5b2-utilities\") pod \"redhat-marketplace-zttlm\" (UID: \"2f2c49df-a492-4902-9027-399f1933f5b2\") " pod="openshift-marketplace/redhat-marketplace-zttlm" Nov 23 07:30:48 crc kubenswrapper[4681]: I1123 07:30:48.654824 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2chn\" (UniqueName: \"kubernetes.io/projected/2f2c49df-a492-4902-9027-399f1933f5b2-kube-api-access-q2chn\") pod 
\"redhat-marketplace-zttlm\" (UID: \"2f2c49df-a492-4902-9027-399f1933f5b2\") " pod="openshift-marketplace/redhat-marketplace-zttlm" Nov 23 07:30:48 crc kubenswrapper[4681]: I1123 07:30:48.669637 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zttlm"] Nov 23 07:30:48 crc kubenswrapper[4681]: I1123 07:30:48.800212 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zttlm" Nov 23 07:30:49 crc kubenswrapper[4681]: I1123 07:30:49.845885 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zttlm"] Nov 23 07:30:49 crc kubenswrapper[4681]: I1123 07:30:49.929167 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zttlm" event={"ID":"2f2c49df-a492-4902-9027-399f1933f5b2","Type":"ContainerStarted","Data":"8f2f3c7261dd6dc7f9974c9fb8354f40f4a86f9c12954b2553f76d5ab85e61f0"} Nov 23 07:30:50 crc kubenswrapper[4681]: I1123 07:30:50.955340 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zttlm" event={"ID":"2f2c49df-a492-4902-9027-399f1933f5b2","Type":"ContainerDied","Data":"c40c086ec9d8cee540b2d9d0c69bfd76d59e7ef13b63d153ac44645104efd9d2"} Nov 23 07:30:50 crc kubenswrapper[4681]: I1123 07:30:50.955574 4681 generic.go:334] "Generic (PLEG): container finished" podID="2f2c49df-a492-4902-9027-399f1933f5b2" containerID="c40c086ec9d8cee540b2d9d0c69bfd76d59e7ef13b63d153ac44645104efd9d2" exitCode=0 Nov 23 07:30:51 crc kubenswrapper[4681]: I1123 07:30:51.965019 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zttlm" event={"ID":"2f2c49df-a492-4902-9027-399f1933f5b2","Type":"ContainerStarted","Data":"7e32ed172b310849a9e774ab2b1a9b4b1d40181ae091dea74fa5e52c0718047c"} Nov 23 07:30:52 crc kubenswrapper[4681]: I1123 07:30:52.055849 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w4p6m" Nov 23 07:30:52 crc kubenswrapper[4681]: I1123 07:30:52.094963 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w4p6m" Nov 23 07:30:52 crc kubenswrapper[4681]: I1123 07:30:52.975764 4681 generic.go:334] "Generic (PLEG): container finished" podID="2f2c49df-a492-4902-9027-399f1933f5b2" containerID="7e32ed172b310849a9e774ab2b1a9b4b1d40181ae091dea74fa5e52c0718047c" exitCode=0 Nov 23 07:30:52 crc kubenswrapper[4681]: I1123 07:30:52.975811 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zttlm" event={"ID":"2f2c49df-a492-4902-9027-399f1933f5b2","Type":"ContainerDied","Data":"7e32ed172b310849a9e774ab2b1a9b4b1d40181ae091dea74fa5e52c0718047c"} Nov 23 07:30:53 crc kubenswrapper[4681]: I1123 07:30:53.987760 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zttlm" event={"ID":"2f2c49df-a492-4902-9027-399f1933f5b2","Type":"ContainerStarted","Data":"77a15d945af3a5b830dc02d827c214f3f1fbc854db80da1e3c4b751b78d9404c"} Nov 23 07:30:54 crc kubenswrapper[4681]: I1123 07:30:54.013321 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zttlm" podStartSLOduration=4.413479387 podStartE2EDuration="7.012652522s" podCreationTimestamp="2025-11-23 07:30:47 +0000 UTC" firstStartedPulling="2025-11-23 07:30:50.961171211 +0000 UTC m=+2788.030680448" 
lastFinishedPulling="2025-11-23 07:30:53.560344347 +0000 UTC m=+2790.629853583" observedRunningTime="2025-11-23 07:30:54.007599258 +0000 UTC m=+2791.077108495" watchObservedRunningTime="2025-11-23 07:30:54.012652522 +0000 UTC m=+2791.082161759" Nov 23 07:30:54 crc kubenswrapper[4681]: I1123 07:30:54.471533 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w4p6m"] Nov 23 07:30:54 crc kubenswrapper[4681]: I1123 07:30:54.472665 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-w4p6m" podUID="022e2817-2e83-4dfa-9869-62602fcca3e1" containerName="registry-server" containerID="cri-o://c303bef3e5140049606154b1e86f6bf0d10b89fc19aacef5ef5413a96ccdf32e" gracePeriod=2 Nov 23 07:30:55 crc kubenswrapper[4681]: I1123 07:30:55.004554 4681 generic.go:334] "Generic (PLEG): container finished" podID="022e2817-2e83-4dfa-9869-62602fcca3e1" containerID="c303bef3e5140049606154b1e86f6bf0d10b89fc19aacef5ef5413a96ccdf32e" exitCode=0 Nov 23 07:30:55 crc kubenswrapper[4681]: I1123 07:30:55.004628 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w4p6m" event={"ID":"022e2817-2e83-4dfa-9869-62602fcca3e1","Type":"ContainerDied","Data":"c303bef3e5140049606154b1e86f6bf0d10b89fc19aacef5ef5413a96ccdf32e"} Nov 23 07:30:55 crc kubenswrapper[4681]: I1123 07:30:55.436379 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w4p6m" Nov 23 07:30:55 crc kubenswrapper[4681]: I1123 07:30:55.576634 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/022e2817-2e83-4dfa-9869-62602fcca3e1-utilities\") pod \"022e2817-2e83-4dfa-9869-62602fcca3e1\" (UID: \"022e2817-2e83-4dfa-9869-62602fcca3e1\") " Nov 23 07:30:55 crc kubenswrapper[4681]: I1123 07:30:55.577097 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khdgf\" (UniqueName: \"kubernetes.io/projected/022e2817-2e83-4dfa-9869-62602fcca3e1-kube-api-access-khdgf\") pod \"022e2817-2e83-4dfa-9869-62602fcca3e1\" (UID: \"022e2817-2e83-4dfa-9869-62602fcca3e1\") " Nov 23 07:30:55 crc kubenswrapper[4681]: I1123 07:30:55.577287 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/022e2817-2e83-4dfa-9869-62602fcca3e1-catalog-content\") pod \"022e2817-2e83-4dfa-9869-62602fcca3e1\" (UID: \"022e2817-2e83-4dfa-9869-62602fcca3e1\") " Nov 23 07:30:55 crc kubenswrapper[4681]: I1123 07:30:55.579138 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/022e2817-2e83-4dfa-9869-62602fcca3e1-utilities" (OuterVolumeSpecName: "utilities") pod "022e2817-2e83-4dfa-9869-62602fcca3e1" (UID: "022e2817-2e83-4dfa-9869-62602fcca3e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:30:55 crc kubenswrapper[4681]: I1123 07:30:55.599677 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/022e2817-2e83-4dfa-9869-62602fcca3e1-kube-api-access-khdgf" (OuterVolumeSpecName: "kube-api-access-khdgf") pod "022e2817-2e83-4dfa-9869-62602fcca3e1" (UID: "022e2817-2e83-4dfa-9869-62602fcca3e1"). InnerVolumeSpecName "kube-api-access-khdgf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:30:55 crc kubenswrapper[4681]: I1123 07:30:55.634889 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/022e2817-2e83-4dfa-9869-62602fcca3e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "022e2817-2e83-4dfa-9869-62602fcca3e1" (UID: "022e2817-2e83-4dfa-9869-62602fcca3e1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:30:55 crc kubenswrapper[4681]: I1123 07:30:55.681197 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khdgf\" (UniqueName: \"kubernetes.io/projected/022e2817-2e83-4dfa-9869-62602fcca3e1-kube-api-access-khdgf\") on node \"crc\" DevicePath \"\"" Nov 23 07:30:55 crc kubenswrapper[4681]: I1123 07:30:55.681232 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/022e2817-2e83-4dfa-9869-62602fcca3e1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:30:55 crc kubenswrapper[4681]: I1123 07:30:55.681245 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/022e2817-2e83-4dfa-9869-62602fcca3e1-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:30:56 crc kubenswrapper[4681]: I1123 07:30:56.017532 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w4p6m" event={"ID":"022e2817-2e83-4dfa-9869-62602fcca3e1","Type":"ContainerDied","Data":"c9b3936856b0920e83d3b926c7656bc221138edf24ff0ba2f36f5abcd3a0a3ac"} Nov 23 07:30:56 crc kubenswrapper[4681]: I1123 07:30:56.017624 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w4p6m" Nov 23 07:30:56 crc kubenswrapper[4681]: I1123 07:30:56.018833 4681 scope.go:117] "RemoveContainer" containerID="c303bef3e5140049606154b1e86f6bf0d10b89fc19aacef5ef5413a96ccdf32e" Nov 23 07:30:56 crc kubenswrapper[4681]: I1123 07:30:56.061899 4681 scope.go:117] "RemoveContainer" containerID="21d99b4a7736406f447913c7609b281c5e2badfffe35c6aa4e5b4c51af7f18c5" Nov 23 07:30:56 crc kubenswrapper[4681]: I1123 07:30:56.084577 4681 scope.go:117] "RemoveContainer" containerID="8f770ea70e1e952efdae73dc29fedd841edb7c5cad5e6d3b06ace6086a4aa7c2" Nov 23 07:30:56 crc kubenswrapper[4681]: I1123 07:30:56.094125 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w4p6m"] Nov 23 07:30:56 crc kubenswrapper[4681]: I1123 07:30:56.109247 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-w4p6m"] Nov 23 07:30:57 crc kubenswrapper[4681]: I1123 07:30:57.262101 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="022e2817-2e83-4dfa-9869-62602fcca3e1" path="/var/lib/kubelet/pods/022e2817-2e83-4dfa-9869-62602fcca3e1/volumes" Nov 23 07:30:58 crc kubenswrapper[4681]: I1123 07:30:58.801236 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zttlm" Nov 23 07:30:58 crc kubenswrapper[4681]: I1123 07:30:58.801280 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zttlm" Nov 23 07:30:58 crc kubenswrapper[4681]: I1123 07:30:58.883504 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zttlm" Nov 23 07:30:59 crc 
kubenswrapper[4681]: I1123 07:30:59.087284 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zttlm" Nov 23 07:31:00 crc kubenswrapper[4681]: I1123 07:31:00.281562 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zttlm"] Nov 23 07:31:01 crc kubenswrapper[4681]: I1123 07:31:01.063058 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zttlm" podUID="2f2c49df-a492-4902-9027-399f1933f5b2" containerName="registry-server" containerID="cri-o://77a15d945af3a5b830dc02d827c214f3f1fbc854db80da1e3c4b751b78d9404c" gracePeriod=2 Nov 23 07:31:01 crc kubenswrapper[4681]: I1123 07:31:01.601056 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zttlm" Nov 23 07:31:01 crc kubenswrapper[4681]: I1123 07:31:01.727012 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2chn\" (UniqueName: \"kubernetes.io/projected/2f2c49df-a492-4902-9027-399f1933f5b2-kube-api-access-q2chn\") pod \"2f2c49df-a492-4902-9027-399f1933f5b2\" (UID: \"2f2c49df-a492-4902-9027-399f1933f5b2\") " Nov 23 07:31:01 crc kubenswrapper[4681]: I1123 07:31:01.727184 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f2c49df-a492-4902-9027-399f1933f5b2-utilities\") pod \"2f2c49df-a492-4902-9027-399f1933f5b2\" (UID: \"2f2c49df-a492-4902-9027-399f1933f5b2\") " Nov 23 07:31:01 crc kubenswrapper[4681]: I1123 07:31:01.727309 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f2c49df-a492-4902-9027-399f1933f5b2-catalog-content\") pod \"2f2c49df-a492-4902-9027-399f1933f5b2\" (UID: \"2f2c49df-a492-4902-9027-399f1933f5b2\") " Nov 23 07:31:01 crc kubenswrapper[4681]: I1123 07:31:01.728756 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f2c49df-a492-4902-9027-399f1933f5b2-utilities" (OuterVolumeSpecName: "utilities") pod "2f2c49df-a492-4902-9027-399f1933f5b2" (UID: "2f2c49df-a492-4902-9027-399f1933f5b2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:31:01 crc kubenswrapper[4681]: I1123 07:31:01.741610 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f2c49df-a492-4902-9027-399f1933f5b2-kube-api-access-q2chn" (OuterVolumeSpecName: "kube-api-access-q2chn") pod "2f2c49df-a492-4902-9027-399f1933f5b2" (UID: "2f2c49df-a492-4902-9027-399f1933f5b2"). InnerVolumeSpecName "kube-api-access-q2chn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:31:01 crc kubenswrapper[4681]: I1123 07:31:01.762625 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f2c49df-a492-4902-9027-399f1933f5b2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f2c49df-a492-4902-9027-399f1933f5b2" (UID: "2f2c49df-a492-4902-9027-399f1933f5b2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:31:01 crc kubenswrapper[4681]: I1123 07:31:01.829812 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f2c49df-a492-4902-9027-399f1933f5b2-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:31:01 crc kubenswrapper[4681]: I1123 07:31:01.829949 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f2c49df-a492-4902-9027-399f1933f5b2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:31:01 crc kubenswrapper[4681]: I1123 07:31:01.830008 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2chn\" (UniqueName: \"kubernetes.io/projected/2f2c49df-a492-4902-9027-399f1933f5b2-kube-api-access-q2chn\") on node \"crc\" DevicePath \"\"" Nov 23 07:31:02 crc kubenswrapper[4681]: I1123 07:31:02.075697 4681 generic.go:334] "Generic (PLEG): container finished" podID="2f2c49df-a492-4902-9027-399f1933f5b2" containerID="77a15d945af3a5b830dc02d827c214f3f1fbc854db80da1e3c4b751b78d9404c" exitCode=0 Nov 23 07:31:02 crc kubenswrapper[4681]: I1123 07:31:02.075772 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zttlm" event={"ID":"2f2c49df-a492-4902-9027-399f1933f5b2","Type":"ContainerDied","Data":"77a15d945af3a5b830dc02d827c214f3f1fbc854db80da1e3c4b751b78d9404c"} Nov 23 07:31:02 crc kubenswrapper[4681]: I1123 07:31:02.075821 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zttlm" event={"ID":"2f2c49df-a492-4902-9027-399f1933f5b2","Type":"ContainerDied","Data":"8f2f3c7261dd6dc7f9974c9fb8354f40f4a86f9c12954b2553f76d5ab85e61f0"} Nov 23 07:31:02 crc kubenswrapper[4681]: I1123 07:31:02.075817 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zttlm" Nov 23 07:31:02 crc kubenswrapper[4681]: I1123 07:31:02.075854 4681 scope.go:117] "RemoveContainer" containerID="77a15d945af3a5b830dc02d827c214f3f1fbc854db80da1e3c4b751b78d9404c" Nov 23 07:31:02 crc kubenswrapper[4681]: I1123 07:31:02.107649 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zttlm"] Nov 23 07:31:02 crc kubenswrapper[4681]: I1123 07:31:02.110557 4681 scope.go:117] "RemoveContainer" containerID="7e32ed172b310849a9e774ab2b1a9b4b1d40181ae091dea74fa5e52c0718047c" Nov 23 07:31:02 crc kubenswrapper[4681]: I1123 07:31:02.114742 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zttlm"] Nov 23 07:31:02 crc kubenswrapper[4681]: I1123 07:31:02.131685 4681 scope.go:117] "RemoveContainer" containerID="c40c086ec9d8cee540b2d9d0c69bfd76d59e7ef13b63d153ac44645104efd9d2" Nov 23 07:31:02 crc kubenswrapper[4681]: I1123 07:31:02.166674 4681 scope.go:117] "RemoveContainer" containerID="77a15d945af3a5b830dc02d827c214f3f1fbc854db80da1e3c4b751b78d9404c" Nov 23 07:31:02 crc kubenswrapper[4681]: E1123 07:31:02.171058 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77a15d945af3a5b830dc02d827c214f3f1fbc854db80da1e3c4b751b78d9404c\": container with ID starting with 77a15d945af3a5b830dc02d827c214f3f1fbc854db80da1e3c4b751b78d9404c not found: ID does not exist" containerID="77a15d945af3a5b830dc02d827c214f3f1fbc854db80da1e3c4b751b78d9404c" Nov 23 07:31:02 crc kubenswrapper[4681]: I1123 07:31:02.171113 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77a15d945af3a5b830dc02d827c214f3f1fbc854db80da1e3c4b751b78d9404c"} err="failed to get container status \"77a15d945af3a5b830dc02d827c214f3f1fbc854db80da1e3c4b751b78d9404c\": rpc error: code = NotFound desc = could not find container \"77a15d945af3a5b830dc02d827c214f3f1fbc854db80da1e3c4b751b78d9404c\": container with ID starting with 77a15d945af3a5b830dc02d827c214f3f1fbc854db80da1e3c4b751b78d9404c not found: ID does not exist" Nov 23 07:31:02 crc kubenswrapper[4681]: I1123 07:31:02.171138 4681 scope.go:117] "RemoveContainer" containerID="7e32ed172b310849a9e774ab2b1a9b4b1d40181ae091dea74fa5e52c0718047c" Nov 23 07:31:02 crc kubenswrapper[4681]: E1123 07:31:02.172118 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e32ed172b310849a9e774ab2b1a9b4b1d40181ae091dea74fa5e52c0718047c\": container with ID starting with 7e32ed172b310849a9e774ab2b1a9b4b1d40181ae091dea74fa5e52c0718047c not found: ID does not exist" containerID="7e32ed172b310849a9e774ab2b1a9b4b1d40181ae091dea74fa5e52c0718047c" Nov 23 07:31:02 crc kubenswrapper[4681]: I1123 07:31:02.172226 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e32ed172b310849a9e774ab2b1a9b4b1d40181ae091dea74fa5e52c0718047c"} err="failed to get container status \"7e32ed172b310849a9e774ab2b1a9b4b1d40181ae091dea74fa5e52c0718047c\": rpc error: code = NotFound desc = could not find container \"7e32ed172b310849a9e774ab2b1a9b4b1d40181ae091dea74fa5e52c0718047c\": container with ID starting with 7e32ed172b310849a9e774ab2b1a9b4b1d40181ae091dea74fa5e52c0718047c not found: ID does not exist" Nov 23 07:31:02 crc kubenswrapper[4681]: I1123 07:31:02.172301 4681 scope.go:117] "RemoveContainer" 
containerID="c40c086ec9d8cee540b2d9d0c69bfd76d59e7ef13b63d153ac44645104efd9d2" Nov 23 07:31:02 crc kubenswrapper[4681]: E1123 07:31:02.172630 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c40c086ec9d8cee540b2d9d0c69bfd76d59e7ef13b63d153ac44645104efd9d2\": container with ID starting with c40c086ec9d8cee540b2d9d0c69bfd76d59e7ef13b63d153ac44645104efd9d2 not found: ID does not exist" containerID="c40c086ec9d8cee540b2d9d0c69bfd76d59e7ef13b63d153ac44645104efd9d2" Nov 23 07:31:02 crc kubenswrapper[4681]: I1123 07:31:02.172712 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c40c086ec9d8cee540b2d9d0c69bfd76d59e7ef13b63d153ac44645104efd9d2"} err="failed to get container status \"c40c086ec9d8cee540b2d9d0c69bfd76d59e7ef13b63d153ac44645104efd9d2\": rpc error: code = NotFound desc = could not find container \"c40c086ec9d8cee540b2d9d0c69bfd76d59e7ef13b63d153ac44645104efd9d2\": container with ID starting with c40c086ec9d8cee540b2d9d0c69bfd76d59e7ef13b63d153ac44645104efd9d2 not found: ID does not exist" Nov 23 07:31:03 crc kubenswrapper[4681]: I1123 07:31:03.262023 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f2c49df-a492-4902-9027-399f1933f5b2" path="/var/lib/kubelet/pods/2f2c49df-a492-4902-9027-399f1933f5b2/volumes" Nov 23 07:31:42 crc kubenswrapper[4681]: I1123 07:31:42.296812 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:31:42 crc kubenswrapper[4681]: I1123 07:31:42.297470 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:32:12 crc kubenswrapper[4681]: I1123 07:32:12.295763 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:32:12 crc kubenswrapper[4681]: I1123 07:32:12.296285 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:32:42 crc kubenswrapper[4681]: I1123 07:32:42.296002 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:32:42 crc kubenswrapper[4681]: I1123 07:32:42.296509 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:32:42 crc kubenswrapper[4681]: I1123 07:32:42.297593 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" Nov 23 07:32:42 crc kubenswrapper[4681]: I1123 07:32:42.298912 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dfc51fd3ce1905d3f8d12b183e8bc77b3da93474df38f7da71bcae24ac9c701b"} pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 07:32:42 crc kubenswrapper[4681]: I1123 07:32:42.300029 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" containerID="cri-o://dfc51fd3ce1905d3f8d12b183e8bc77b3da93474df38f7da71bcae24ac9c701b" gracePeriod=600 Nov 23 07:32:42 crc kubenswrapper[4681]: E1123 07:32:42.425282 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:32:42 crc kubenswrapper[4681]: I1123 07:32:42.885320 4681 generic.go:334] "Generic (PLEG): container finished" podID="539dc58c-e752-43c8-bdef-af87528b76f3" containerID="dfc51fd3ce1905d3f8d12b183e8bc77b3da93474df38f7da71bcae24ac9c701b" exitCode=0 Nov 23 07:32:42 crc kubenswrapper[4681]: I1123 07:32:42.885370 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerDied","Data":"dfc51fd3ce1905d3f8d12b183e8bc77b3da93474df38f7da71bcae24ac9c701b"} Nov 23 07:32:42 crc kubenswrapper[4681]: I1123 07:32:42.887134 4681 scope.go:117] "RemoveContainer" containerID="e305db71846595ffb5ef89ffce233280d9d731be32838c5f52ee935532128d59" Nov 23 07:32:42 crc kubenswrapper[4681]: I1123 07:32:42.887958 4681 scope.go:117] "RemoveContainer" containerID="dfc51fd3ce1905d3f8d12b183e8bc77b3da93474df38f7da71bcae24ac9c701b" Nov 23 07:32:42 crc kubenswrapper[4681]: E1123 07:32:42.888280 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:32:57 crc kubenswrapper[4681]: I1123 07:32:57.251973 4681 scope.go:117] "RemoveContainer" containerID="dfc51fd3ce1905d3f8d12b183e8bc77b3da93474df38f7da71bcae24ac9c701b" Nov 23 07:32:57 crc kubenswrapper[4681]: E1123 07:32:57.252552 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:33:12 crc kubenswrapper[4681]: I1123 07:33:12.251393 4681 scope.go:117] "RemoveContainer" containerID="dfc51fd3ce1905d3f8d12b183e8bc77b3da93474df38f7da71bcae24ac9c701b" Nov 23 07:33:12 crc kubenswrapper[4681]: E1123 07:33:12.252023 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:33:27 crc kubenswrapper[4681]: I1123 07:33:27.252187 4681 scope.go:117] "RemoveContainer" containerID="dfc51fd3ce1905d3f8d12b183e8bc77b3da93474df38f7da71bcae24ac9c701b" Nov 23 07:33:27 crc kubenswrapper[4681]: E1123 07:33:27.252930 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:33:42 crc kubenswrapper[4681]: I1123 07:33:42.251735 4681 scope.go:117] "RemoveContainer" containerID="dfc51fd3ce1905d3f8d12b183e8bc77b3da93474df38f7da71bcae24ac9c701b" Nov 23 07:33:42 crc kubenswrapper[4681]: E1123 07:33:42.252356 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:33:54 crc kubenswrapper[4681]: I1123 07:33:54.252132 4681 scope.go:117] "RemoveContainer" containerID="dfc51fd3ce1905d3f8d12b183e8bc77b3da93474df38f7da71bcae24ac9c701b" Nov 23 07:33:54 crc kubenswrapper[4681]: E1123 07:33:54.252752 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:34:08 crc kubenswrapper[4681]: I1123 07:34:08.252827 4681 scope.go:117] "RemoveContainer" containerID="dfc51fd3ce1905d3f8d12b183e8bc77b3da93474df38f7da71bcae24ac9c701b" Nov 23 07:34:08 crc kubenswrapper[4681]: E1123 07:34:08.253552 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" 
podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:34:20 crc kubenswrapper[4681]: I1123 07:34:20.252392 4681 scope.go:117] "RemoveContainer" containerID="dfc51fd3ce1905d3f8d12b183e8bc77b3da93474df38f7da71bcae24ac9c701b" Nov 23 07:34:20 crc kubenswrapper[4681]: E1123 07:34:20.253022 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:34:33 crc kubenswrapper[4681]: I1123 07:34:33.263098 4681 scope.go:117] "RemoveContainer" containerID="dfc51fd3ce1905d3f8d12b183e8bc77b3da93474df38f7da71bcae24ac9c701b" Nov 23 07:34:33 crc kubenswrapper[4681]: E1123 07:34:33.264024 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:34:45 crc kubenswrapper[4681]: I1123 07:34:45.254437 4681 scope.go:117] "RemoveContainer" containerID="dfc51fd3ce1905d3f8d12b183e8bc77b3da93474df38f7da71bcae24ac9c701b" Nov 23 07:34:45 crc kubenswrapper[4681]: E1123 07:34:45.255016 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:34:57 crc kubenswrapper[4681]: I1123 07:34:57.252557 4681 scope.go:117] "RemoveContainer" containerID="dfc51fd3ce1905d3f8d12b183e8bc77b3da93474df38f7da71bcae24ac9c701b" Nov 23 07:34:57 crc kubenswrapper[4681]: E1123 07:34:57.253792 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:35:10 crc kubenswrapper[4681]: I1123 07:35:10.253955 4681 scope.go:117] "RemoveContainer" containerID="dfc51fd3ce1905d3f8d12b183e8bc77b3da93474df38f7da71bcae24ac9c701b" Nov 23 07:35:10 crc kubenswrapper[4681]: E1123 07:35:10.255417 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:35:21 crc kubenswrapper[4681]: I1123 07:35:21.251674 4681 scope.go:117] "RemoveContainer" 
containerID="dfc51fd3ce1905d3f8d12b183e8bc77b3da93474df38f7da71bcae24ac9c701b" Nov 23 07:35:21 crc kubenswrapper[4681]: E1123 07:35:21.252789 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:35:33 crc kubenswrapper[4681]: I1123 07:35:33.256915 4681 scope.go:117] "RemoveContainer" containerID="dfc51fd3ce1905d3f8d12b183e8bc77b3da93474df38f7da71bcae24ac9c701b" Nov 23 07:35:33 crc kubenswrapper[4681]: E1123 07:35:33.257441 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:35:46 crc kubenswrapper[4681]: I1123 07:35:46.252715 4681 scope.go:117] "RemoveContainer" containerID="dfc51fd3ce1905d3f8d12b183e8bc77b3da93474df38f7da71bcae24ac9c701b" Nov 23 07:35:46 crc kubenswrapper[4681]: E1123 07:35:46.253708 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:36:00 crc kubenswrapper[4681]: I1123 07:36:00.252070 4681 scope.go:117] "RemoveContainer" containerID="dfc51fd3ce1905d3f8d12b183e8bc77b3da93474df38f7da71bcae24ac9c701b" Nov 23 07:36:00 crc kubenswrapper[4681]: E1123 07:36:00.252791 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:36:13 crc kubenswrapper[4681]: I1123 07:36:13.256287 4681 scope.go:117] "RemoveContainer" containerID="dfc51fd3ce1905d3f8d12b183e8bc77b3da93474df38f7da71bcae24ac9c701b" Nov 23 07:36:13 crc kubenswrapper[4681]: E1123 07:36:13.256980 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:36:25 crc kubenswrapper[4681]: I1123 07:36:25.252010 4681 scope.go:117] "RemoveContainer" containerID="dfc51fd3ce1905d3f8d12b183e8bc77b3da93474df38f7da71bcae24ac9c701b" Nov 23 07:36:25 crc kubenswrapper[4681]: E1123 07:36:25.252896 4681 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:36:39 crc kubenswrapper[4681]: I1123 07:36:39.252407 4681 scope.go:117] "RemoveContainer" containerID="dfc51fd3ce1905d3f8d12b183e8bc77b3da93474df38f7da71bcae24ac9c701b" Nov 23 07:36:39 crc kubenswrapper[4681]: E1123 07:36:39.253550 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:36:50 crc kubenswrapper[4681]: I1123 07:36:50.251383 4681 scope.go:117] "RemoveContainer" containerID="dfc51fd3ce1905d3f8d12b183e8bc77b3da93474df38f7da71bcae24ac9c701b" Nov 23 07:36:50 crc kubenswrapper[4681]: E1123 07:36:50.252067 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:37:05 crc kubenswrapper[4681]: I1123 07:37:05.251944 4681 scope.go:117] "RemoveContainer" containerID="dfc51fd3ce1905d3f8d12b183e8bc77b3da93474df38f7da71bcae24ac9c701b" Nov 23 07:37:05 crc kubenswrapper[4681]: E1123 07:37:05.252667 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:37:16 crc kubenswrapper[4681]: I1123 07:37:16.252239 4681 scope.go:117] "RemoveContainer" containerID="dfc51fd3ce1905d3f8d12b183e8bc77b3da93474df38f7da71bcae24ac9c701b" Nov 23 07:37:16 crc kubenswrapper[4681]: E1123 07:37:16.252876 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:37:31 crc kubenswrapper[4681]: I1123 07:37:31.252055 4681 scope.go:117] "RemoveContainer" containerID="dfc51fd3ce1905d3f8d12b183e8bc77b3da93474df38f7da71bcae24ac9c701b" Nov 23 07:37:31 crc kubenswrapper[4681]: E1123 07:37:31.253000 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:37:46 crc kubenswrapper[4681]: I1123 07:37:46.251955 4681 scope.go:117] "RemoveContainer" containerID="dfc51fd3ce1905d3f8d12b183e8bc77b3da93474df38f7da71bcae24ac9c701b" Nov 23 07:37:47 crc kubenswrapper[4681]: I1123 07:37:47.119025 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerStarted","Data":"5d2ecf020ba8193f9764eb0866b58fb5b3e63dcb8a74657aae414db1f91128c4"} Nov 23 07:40:10 crc kubenswrapper[4681]: I1123 07:40:10.000184 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5m7lp"] Nov 23 07:40:10 crc kubenswrapper[4681]: E1123 07:40:10.003906 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="022e2817-2e83-4dfa-9869-62602fcca3e1" containerName="extract-content" Nov 23 07:40:10 crc kubenswrapper[4681]: I1123 07:40:10.003934 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="022e2817-2e83-4dfa-9869-62602fcca3e1" containerName="extract-content" Nov 23 07:40:10 crc kubenswrapper[4681]: E1123 07:40:10.004647 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="022e2817-2e83-4dfa-9869-62602fcca3e1" containerName="registry-server" Nov 23 07:40:10 crc kubenswrapper[4681]: I1123 07:40:10.004665 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="022e2817-2e83-4dfa-9869-62602fcca3e1" containerName="registry-server" Nov 23 07:40:10 crc kubenswrapper[4681]: E1123 07:40:10.004677 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f2c49df-a492-4902-9027-399f1933f5b2" containerName="registry-server" Nov 23 07:40:10 crc kubenswrapper[4681]: I1123 07:40:10.004683 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f2c49df-a492-4902-9027-399f1933f5b2" containerName="registry-server" Nov 23 07:40:10 crc kubenswrapper[4681]: E1123 07:40:10.004715 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f2c49df-a492-4902-9027-399f1933f5b2" containerName="extract-content" Nov 23 07:40:10 crc kubenswrapper[4681]: I1123 07:40:10.004721 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f2c49df-a492-4902-9027-399f1933f5b2" containerName="extract-content" Nov 23 07:40:10 crc kubenswrapper[4681]: E1123 07:40:10.004738 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f2c49df-a492-4902-9027-399f1933f5b2" containerName="extract-utilities" Nov 23 07:40:10 crc kubenswrapper[4681]: I1123 07:40:10.004744 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f2c49df-a492-4902-9027-399f1933f5b2" containerName="extract-utilities" Nov 23 07:40:10 crc kubenswrapper[4681]: E1123 07:40:10.004755 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="022e2817-2e83-4dfa-9869-62602fcca3e1" containerName="extract-utilities" Nov 23 07:40:10 crc kubenswrapper[4681]: I1123 07:40:10.004760 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="022e2817-2e83-4dfa-9869-62602fcca3e1" containerName="extract-utilities" Nov 23 07:40:10 crc kubenswrapper[4681]: I1123 07:40:10.004972 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f2c49df-a492-4902-9027-399f1933f5b2" containerName="registry-server" Nov 23 07:40:10 crc kubenswrapper[4681]: I1123 
07:40:10.004994 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="022e2817-2e83-4dfa-9869-62602fcca3e1" containerName="registry-server" Nov 23 07:40:10 crc kubenswrapper[4681]: I1123 07:40:10.011632 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5m7lp" Nov 23 07:40:10 crc kubenswrapper[4681]: I1123 07:40:10.047998 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqqd7\" (UniqueName: \"kubernetes.io/projected/d9b0ca3e-e4a5-410c-8a83-2ea92a890b45-kube-api-access-bqqd7\") pod \"community-operators-5m7lp\" (UID: \"d9b0ca3e-e4a5-410c-8a83-2ea92a890b45\") " pod="openshift-marketplace/community-operators-5m7lp" Nov 23 07:40:10 crc kubenswrapper[4681]: I1123 07:40:10.048143 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9b0ca3e-e4a5-410c-8a83-2ea92a890b45-utilities\") pod \"community-operators-5m7lp\" (UID: \"d9b0ca3e-e4a5-410c-8a83-2ea92a890b45\") " pod="openshift-marketplace/community-operators-5m7lp" Nov 23 07:40:10 crc kubenswrapper[4681]: I1123 07:40:10.048406 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9b0ca3e-e4a5-410c-8a83-2ea92a890b45-catalog-content\") pod \"community-operators-5m7lp\" (UID: \"d9b0ca3e-e4a5-410c-8a83-2ea92a890b45\") " pod="openshift-marketplace/community-operators-5m7lp" Nov 23 07:40:10 crc kubenswrapper[4681]: I1123 07:40:10.093407 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5m7lp"] Nov 23 07:40:10 crc kubenswrapper[4681]: I1123 07:40:10.150963 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9b0ca3e-e4a5-410c-8a83-2ea92a890b45-catalog-content\") pod \"community-operators-5m7lp\" (UID: \"d9b0ca3e-e4a5-410c-8a83-2ea92a890b45\") " pod="openshift-marketplace/community-operators-5m7lp" Nov 23 07:40:10 crc kubenswrapper[4681]: I1123 07:40:10.151129 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqqd7\" (UniqueName: \"kubernetes.io/projected/d9b0ca3e-e4a5-410c-8a83-2ea92a890b45-kube-api-access-bqqd7\") pod \"community-operators-5m7lp\" (UID: \"d9b0ca3e-e4a5-410c-8a83-2ea92a890b45\") " pod="openshift-marketplace/community-operators-5m7lp" Nov 23 07:40:10 crc kubenswrapper[4681]: I1123 07:40:10.151214 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9b0ca3e-e4a5-410c-8a83-2ea92a890b45-utilities\") pod \"community-operators-5m7lp\" (UID: \"d9b0ca3e-e4a5-410c-8a83-2ea92a890b45\") " pod="openshift-marketplace/community-operators-5m7lp" Nov 23 07:40:10 crc kubenswrapper[4681]: I1123 07:40:10.153384 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9b0ca3e-e4a5-410c-8a83-2ea92a890b45-utilities\") pod \"community-operators-5m7lp\" (UID: \"d9b0ca3e-e4a5-410c-8a83-2ea92a890b45\") " pod="openshift-marketplace/community-operators-5m7lp" Nov 23 07:40:10 crc kubenswrapper[4681]: I1123 07:40:10.154034 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d9b0ca3e-e4a5-410c-8a83-2ea92a890b45-catalog-content\") pod \"community-operators-5m7lp\" (UID: \"d9b0ca3e-e4a5-410c-8a83-2ea92a890b45\") " pod="openshift-marketplace/community-operators-5m7lp" Nov 23 07:40:10 crc kubenswrapper[4681]: I1123 07:40:10.189306 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqqd7\" (UniqueName: \"kubernetes.io/projected/d9b0ca3e-e4a5-410c-8a83-2ea92a890b45-kube-api-access-bqqd7\") pod \"community-operators-5m7lp\" (UID: \"d9b0ca3e-e4a5-410c-8a83-2ea92a890b45\") " pod="openshift-marketplace/community-operators-5m7lp" Nov 23 07:40:10 crc kubenswrapper[4681]: I1123 07:40:10.330921 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5m7lp" Nov 23 07:40:11 crc kubenswrapper[4681]: I1123 07:40:11.050577 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5m7lp"] Nov 23 07:40:11 crc kubenswrapper[4681]: I1123 07:40:11.279797 4681 generic.go:334] "Generic (PLEG): container finished" podID="d9b0ca3e-e4a5-410c-8a83-2ea92a890b45" containerID="28a6afa1d610f1e8e38c678bd02e8115db70e890eaa465c70803c8cd51420b9b" exitCode=0 Nov 23 07:40:11 crc kubenswrapper[4681]: I1123 07:40:11.279894 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5m7lp" event={"ID":"d9b0ca3e-e4a5-410c-8a83-2ea92a890b45","Type":"ContainerDied","Data":"28a6afa1d610f1e8e38c678bd02e8115db70e890eaa465c70803c8cd51420b9b"} Nov 23 07:40:11 crc kubenswrapper[4681]: I1123 07:40:11.280190 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5m7lp" event={"ID":"d9b0ca3e-e4a5-410c-8a83-2ea92a890b45","Type":"ContainerStarted","Data":"84b1e843b9313f569ae06716ed95c3cd51f9348c829dbb62b0990568eea2aaac"} Nov 23 07:40:11 crc kubenswrapper[4681]: I1123 07:40:11.283226 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 07:40:12 crc kubenswrapper[4681]: I1123 07:40:12.295904 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:40:12 crc kubenswrapper[4681]: I1123 07:40:12.296987 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:40:13 crc kubenswrapper[4681]: I1123 07:40:13.299837 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5m7lp" event={"ID":"d9b0ca3e-e4a5-410c-8a83-2ea92a890b45","Type":"ContainerStarted","Data":"d1bb1ad2fa18a2ea21a25806b1cf1b0f4b72e1341d30143c46b8fec025cacdcc"} Nov 23 07:40:14 crc kubenswrapper[4681]: I1123 07:40:14.308253 4681 generic.go:334] "Generic (PLEG): container finished" podID="d9b0ca3e-e4a5-410c-8a83-2ea92a890b45" containerID="d1bb1ad2fa18a2ea21a25806b1cf1b0f4b72e1341d30143c46b8fec025cacdcc" exitCode=0 Nov 23 07:40:14 crc kubenswrapper[4681]: I1123 07:40:14.308350 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5m7lp" 
event={"ID":"d9b0ca3e-e4a5-410c-8a83-2ea92a890b45","Type":"ContainerDied","Data":"d1bb1ad2fa18a2ea21a25806b1cf1b0f4b72e1341d30143c46b8fec025cacdcc"} Nov 23 07:40:15 crc kubenswrapper[4681]: I1123 07:40:15.320688 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5m7lp" event={"ID":"d9b0ca3e-e4a5-410c-8a83-2ea92a890b45","Type":"ContainerStarted","Data":"82d6c114e54fd88ed064afc955badab87a63ac41769759d7c6c2e13aba0b42db"} Nov 23 07:40:15 crc kubenswrapper[4681]: I1123 07:40:15.338725 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5m7lp" podStartSLOduration=2.842160717 podStartE2EDuration="6.338143446s" podCreationTimestamp="2025-11-23 07:40:09 +0000 UTC" firstStartedPulling="2025-11-23 07:40:11.281268659 +0000 UTC m=+3348.350777887" lastFinishedPulling="2025-11-23 07:40:14.777251379 +0000 UTC m=+3351.846760616" observedRunningTime="2025-11-23 07:40:15.335968675 +0000 UTC m=+3352.405477912" watchObservedRunningTime="2025-11-23 07:40:15.338143446 +0000 UTC m=+3352.407652683" Nov 23 07:40:20 crc kubenswrapper[4681]: I1123 07:40:20.331779 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5m7lp" Nov 23 07:40:20 crc kubenswrapper[4681]: I1123 07:40:20.332297 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5m7lp" Nov 23 07:40:20 crc kubenswrapper[4681]: I1123 07:40:20.372933 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5m7lp" Nov 23 07:40:20 crc kubenswrapper[4681]: I1123 07:40:20.411229 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5m7lp" Nov 23 07:40:20 crc kubenswrapper[4681]: I1123 07:40:20.604997 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5m7lp"] Nov 23 07:40:22 crc kubenswrapper[4681]: I1123 07:40:22.379073 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5m7lp" podUID="d9b0ca3e-e4a5-410c-8a83-2ea92a890b45" containerName="registry-server" containerID="cri-o://82d6c114e54fd88ed064afc955badab87a63ac41769759d7c6c2e13aba0b42db" gracePeriod=2 Nov 23 07:40:22 crc kubenswrapper[4681]: I1123 07:40:22.859301 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5m7lp" Nov 23 07:40:22 crc kubenswrapper[4681]: I1123 07:40:22.905754 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqqd7\" (UniqueName: \"kubernetes.io/projected/d9b0ca3e-e4a5-410c-8a83-2ea92a890b45-kube-api-access-bqqd7\") pod \"d9b0ca3e-e4a5-410c-8a83-2ea92a890b45\" (UID: \"d9b0ca3e-e4a5-410c-8a83-2ea92a890b45\") " Nov 23 07:40:22 crc kubenswrapper[4681]: I1123 07:40:22.905811 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9b0ca3e-e4a5-410c-8a83-2ea92a890b45-utilities\") pod \"d9b0ca3e-e4a5-410c-8a83-2ea92a890b45\" (UID: \"d9b0ca3e-e4a5-410c-8a83-2ea92a890b45\") " Nov 23 07:40:22 crc kubenswrapper[4681]: I1123 07:40:22.905913 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9b0ca3e-e4a5-410c-8a83-2ea92a890b45-catalog-content\") pod \"d9b0ca3e-e4a5-410c-8a83-2ea92a890b45\" (UID: \"d9b0ca3e-e4a5-410c-8a83-2ea92a890b45\") " Nov 23 07:40:22 crc kubenswrapper[4681]: I1123 07:40:22.910413 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9b0ca3e-e4a5-410c-8a83-2ea92a890b45-utilities" (OuterVolumeSpecName: "utilities") pod "d9b0ca3e-e4a5-410c-8a83-2ea92a890b45" (UID: "d9b0ca3e-e4a5-410c-8a83-2ea92a890b45"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:40:22 crc kubenswrapper[4681]: I1123 07:40:22.915734 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9b0ca3e-e4a5-410c-8a83-2ea92a890b45-kube-api-access-bqqd7" (OuterVolumeSpecName: "kube-api-access-bqqd7") pod "d9b0ca3e-e4a5-410c-8a83-2ea92a890b45" (UID: "d9b0ca3e-e4a5-410c-8a83-2ea92a890b45"). InnerVolumeSpecName "kube-api-access-bqqd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:40:22 crc kubenswrapper[4681]: I1123 07:40:22.946644 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9b0ca3e-e4a5-410c-8a83-2ea92a890b45-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d9b0ca3e-e4a5-410c-8a83-2ea92a890b45" (UID: "d9b0ca3e-e4a5-410c-8a83-2ea92a890b45"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:40:23 crc kubenswrapper[4681]: I1123 07:40:23.008961 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqqd7\" (UniqueName: \"kubernetes.io/projected/d9b0ca3e-e4a5-410c-8a83-2ea92a890b45-kube-api-access-bqqd7\") on node \"crc\" DevicePath \"\"" Nov 23 07:40:23 crc kubenswrapper[4681]: I1123 07:40:23.009001 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9b0ca3e-e4a5-410c-8a83-2ea92a890b45-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:40:23 crc kubenswrapper[4681]: I1123 07:40:23.009012 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9b0ca3e-e4a5-410c-8a83-2ea92a890b45-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:40:23 crc kubenswrapper[4681]: I1123 07:40:23.387491 4681 generic.go:334] "Generic (PLEG): container finished" podID="d9b0ca3e-e4a5-410c-8a83-2ea92a890b45" containerID="82d6c114e54fd88ed064afc955badab87a63ac41769759d7c6c2e13aba0b42db" exitCode=0 Nov 23 07:40:23 crc kubenswrapper[4681]: I1123 07:40:23.387532 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5m7lp" event={"ID":"d9b0ca3e-e4a5-410c-8a83-2ea92a890b45","Type":"ContainerDied","Data":"82d6c114e54fd88ed064afc955badab87a63ac41769759d7c6c2e13aba0b42db"} Nov 23 07:40:23 crc kubenswrapper[4681]: I1123 07:40:23.387552 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5m7lp" Nov 23 07:40:23 crc kubenswrapper[4681]: I1123 07:40:23.387586 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5m7lp" event={"ID":"d9b0ca3e-e4a5-410c-8a83-2ea92a890b45","Type":"ContainerDied","Data":"84b1e843b9313f569ae06716ed95c3cd51f9348c829dbb62b0990568eea2aaac"} Nov 23 07:40:23 crc kubenswrapper[4681]: I1123 07:40:23.387618 4681 scope.go:117] "RemoveContainer" containerID="82d6c114e54fd88ed064afc955badab87a63ac41769759d7c6c2e13aba0b42db" Nov 23 07:40:23 crc kubenswrapper[4681]: I1123 07:40:23.410760 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5m7lp"] Nov 23 07:40:23 crc kubenswrapper[4681]: I1123 07:40:23.411068 4681 scope.go:117] "RemoveContainer" containerID="d1bb1ad2fa18a2ea21a25806b1cf1b0f4b72e1341d30143c46b8fec025cacdcc" Nov 23 07:40:23 crc kubenswrapper[4681]: I1123 07:40:23.422328 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5m7lp"] Nov 23 07:40:23 crc kubenswrapper[4681]: I1123 07:40:23.426063 4681 scope.go:117] "RemoveContainer" containerID="28a6afa1d610f1e8e38c678bd02e8115db70e890eaa465c70803c8cd51420b9b" Nov 23 07:40:23 crc kubenswrapper[4681]: I1123 07:40:23.458808 4681 scope.go:117] "RemoveContainer" containerID="82d6c114e54fd88ed064afc955badab87a63ac41769759d7c6c2e13aba0b42db" Nov 23 07:40:23 crc kubenswrapper[4681]: E1123 07:40:23.460785 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82d6c114e54fd88ed064afc955badab87a63ac41769759d7c6c2e13aba0b42db\": container with ID starting with 82d6c114e54fd88ed064afc955badab87a63ac41769759d7c6c2e13aba0b42db not found: ID does not exist" containerID="82d6c114e54fd88ed064afc955badab87a63ac41769759d7c6c2e13aba0b42db" Nov 23 07:40:23 crc kubenswrapper[4681]: I1123 07:40:23.461154 
4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82d6c114e54fd88ed064afc955badab87a63ac41769759d7c6c2e13aba0b42db"} err="failed to get container status \"82d6c114e54fd88ed064afc955badab87a63ac41769759d7c6c2e13aba0b42db\": rpc error: code = NotFound desc = could not find container \"82d6c114e54fd88ed064afc955badab87a63ac41769759d7c6c2e13aba0b42db\": container with ID starting with 82d6c114e54fd88ed064afc955badab87a63ac41769759d7c6c2e13aba0b42db not found: ID does not exist"
Nov 23 07:40:23 crc kubenswrapper[4681]: I1123 07:40:23.461191 4681 scope.go:117] "RemoveContainer" containerID="d1bb1ad2fa18a2ea21a25806b1cf1b0f4b72e1341d30143c46b8fec025cacdcc"
Nov 23 07:40:23 crc kubenswrapper[4681]: E1123 07:40:23.461574 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1bb1ad2fa18a2ea21a25806b1cf1b0f4b72e1341d30143c46b8fec025cacdcc\": container with ID starting with d1bb1ad2fa18a2ea21a25806b1cf1b0f4b72e1341d30143c46b8fec025cacdcc not found: ID does not exist" containerID="d1bb1ad2fa18a2ea21a25806b1cf1b0f4b72e1341d30143c46b8fec025cacdcc"
Nov 23 07:40:23 crc kubenswrapper[4681]: I1123 07:40:23.461598 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1bb1ad2fa18a2ea21a25806b1cf1b0f4b72e1341d30143c46b8fec025cacdcc"} err="failed to get container status \"d1bb1ad2fa18a2ea21a25806b1cf1b0f4b72e1341d30143c46b8fec025cacdcc\": rpc error: code = NotFound desc = could not find container \"d1bb1ad2fa18a2ea21a25806b1cf1b0f4b72e1341d30143c46b8fec025cacdcc\": container with ID starting with d1bb1ad2fa18a2ea21a25806b1cf1b0f4b72e1341d30143c46b8fec025cacdcc not found: ID does not exist"
Nov 23 07:40:23 crc kubenswrapper[4681]: I1123 07:40:23.461616 4681 scope.go:117] "RemoveContainer" containerID="28a6afa1d610f1e8e38c678bd02e8115db70e890eaa465c70803c8cd51420b9b"
Nov 23 07:40:23 crc kubenswrapper[4681]: E1123 07:40:23.462006 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28a6afa1d610f1e8e38c678bd02e8115db70e890eaa465c70803c8cd51420b9b\": container with ID starting with 28a6afa1d610f1e8e38c678bd02e8115db70e890eaa465c70803c8cd51420b9b not found: ID does not exist" containerID="28a6afa1d610f1e8e38c678bd02e8115db70e890eaa465c70803c8cd51420b9b"
Nov 23 07:40:23 crc kubenswrapper[4681]: I1123 07:40:23.462026 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28a6afa1d610f1e8e38c678bd02e8115db70e890eaa465c70803c8cd51420b9b"} err="failed to get container status \"28a6afa1d610f1e8e38c678bd02e8115db70e890eaa465c70803c8cd51420b9b\": rpc error: code = NotFound desc = could not find container \"28a6afa1d610f1e8e38c678bd02e8115db70e890eaa465c70803c8cd51420b9b\": container with ID starting with 28a6afa1d610f1e8e38c678bd02e8115db70e890eaa465c70803c8cd51420b9b not found: ID does not exist"
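The "DeleteContainer returned error" / NotFound pairs above are a benign race, not a real failure: the registry-server, extract-content, and extract-utilities containers were already removed along with their sandbox by the time the RemoveContainer calls re-checked their status, so the runtime reports NotFound for IDs it has just deleted. The usual way to handle such a race is to treat NotFound as success. A minimal sketch of that idempotent-delete pattern, with a sentinel error standing in for the runtime's NotFound status (the removeContainer helper is hypothetical, not the kubelet's actual code path):

package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the runtime's "code = NotFound" status above.
var errNotFound = errors.New("NotFound: container does not exist")

// removeContainer shows the idempotent-delete pattern: a NotFound from a
// removal that raced with another cleanup path is logged, then treated as
// success, since the desired end state (container gone) already holds.
func removeContainer(remove func(id string) error, id string) error {
	if err := remove(id); err != nil {
		if errors.Is(err, errNotFound) {
			fmt.Printf("DeleteContainer returned error for %s: %v (already gone, ignoring)\n", id, err)
			return nil
		}
		return err
	}
	return nil
}

func main() {
	alreadyGone := func(id string) error { return errNotFound }
	fmt.Println(removeContainer(alreadyGone, "82d6c114e54f")) // <nil>
}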
Nov 23 07:40:25 crc kubenswrapper[4681]: I1123 07:40:25.259703 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9b0ca3e-e4a5-410c-8a83-2ea92a890b45" path="/var/lib/kubelet/pods/d9b0ca3e-e4a5-410c-8a83-2ea92a890b45/volumes"
Nov 23 07:40:28 crc kubenswrapper[4681]: E1123 07:40:28.058760 4681 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9b0ca3e_e4a5_410c_8a83_2ea92a890b45.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9b0ca3e_e4a5_410c_8a83_2ea92a890b45.slice/crio-84b1e843b9313f569ae06716ed95c3cd51f9348c829dbb62b0990568eea2aaac\": RecentStats: unable to find data in memory cache]"
Nov 23 07:40:38 crc kubenswrapper[4681]: E1123 07:40:38.259600 4681 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9b0ca3e_e4a5_410c_8a83_2ea92a890b45.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9b0ca3e_e4a5_410c_8a83_2ea92a890b45.slice/crio-84b1e843b9313f569ae06716ed95c3cd51f9348c829dbb62b0990568eea2aaac\": RecentStats: unable to find data in memory cache]"
Nov 23 07:40:42 crc kubenswrapper[4681]: I1123 07:40:42.296104 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 07:40:42 crc kubenswrapper[4681]: I1123 07:40:42.296681 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 07:40:48 crc kubenswrapper[4681]: E1123 07:40:48.468529 4681 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9b0ca3e_e4a5_410c_8a83_2ea92a890b45.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9b0ca3e_e4a5_410c_8a83_2ea92a890b45.slice/crio-84b1e843b9313f569ae06716ed95c3cd51f9348c829dbb62b0990568eea2aaac\": RecentStats: unable to find data in memory cache]"
Nov 23 07:40:58 crc kubenswrapper[4681]: E1123 07:40:58.727978 4681 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9b0ca3e_e4a5_410c_8a83_2ea92a890b45.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9b0ca3e_e4a5_410c_8a83_2ea92a890b45.slice/crio-84b1e843b9313f569ae06716ed95c3cd51f9348c829dbb62b0990568eea2aaac\": RecentStats: unable to find data in memory cache]"
Nov 23 07:41:08 crc kubenswrapper[4681]: E1123 07:41:08.935344 4681 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9b0ca3e_e4a5_410c_8a83_2ea92a890b45.slice/crio-84b1e843b9313f569ae06716ed95c3cd51f9348c829dbb62b0990568eea2aaac\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9b0ca3e_e4a5_410c_8a83_2ea92a890b45.slice\": RecentStats: unable to find data in memory cache]"
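The cgroup paths in the cadvisor errors above are derived from the pod UID and its QoS class under the systemd cgroup driver: dashes in the UID become underscores, and the result is embedded in a kubepods-burstable-pod<uid>.slice name. The stats provider keeps requesting the slice of the just-deleted community-operators pod until its cache catches up, hence the repeated "unable to find data in memory cache" entries, which stop once housekeeping drops the stale cgroup. A small sketch of the naming convention (simplified, not the kubelet's implementation):

package main

import (
	"fmt"
	"strings"
)

// podSliceName reproduces the systemd cgroup naming visible in the errors
// above: dashes in the pod UID become underscores and the result is wrapped
// in the QoS-class slice name.
func podSliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println("/kubepods.slice/kubepods-burstable.slice/" +
		podSliceName("burstable", "d9b0ca3e-e4a5-410c-8a83-2ea92a890b45"))
	// Prints the same slice path the stats provider is complaining about.
}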
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:41:12 crc kubenswrapper[4681]: I1123 07:41:12.296211 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:41:12 crc kubenswrapper[4681]: I1123 07:41:12.296260 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" Nov 23 07:41:12 crc kubenswrapper[4681]: I1123 07:41:12.296942 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5d2ecf020ba8193f9764eb0866b58fb5b3e63dcb8a74657aae414db1f91128c4"} pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 07:41:12 crc kubenswrapper[4681]: I1123 07:41:12.296983 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" containerID="cri-o://5d2ecf020ba8193f9764eb0866b58fb5b3e63dcb8a74657aae414db1f91128c4" gracePeriod=600 Nov 23 07:41:12 crc kubenswrapper[4681]: I1123 07:41:12.738112 4681 generic.go:334] "Generic (PLEG): container finished" podID="539dc58c-e752-43c8-bdef-af87528b76f3" containerID="5d2ecf020ba8193f9764eb0866b58fb5b3e63dcb8a74657aae414db1f91128c4" exitCode=0 Nov 23 07:41:12 crc kubenswrapper[4681]: I1123 07:41:12.738159 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerDied","Data":"5d2ecf020ba8193f9764eb0866b58fb5b3e63dcb8a74657aae414db1f91128c4"} Nov 23 07:41:12 crc kubenswrapper[4681]: I1123 07:41:12.738193 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerStarted","Data":"a4b8f5958195bfaec2b7ee95ad577a1e88aeae4a6622a1aef72875213c225899"} Nov 23 07:41:12 crc kubenswrapper[4681]: I1123 07:41:12.738211 4681 scope.go:117] "RemoveContainer" containerID="dfc51fd3ce1905d3f8d12b183e8bc77b3da93474df38f7da71bcae24ac9c701b" Nov 23 07:41:19 crc kubenswrapper[4681]: E1123 07:41:19.152889 4681 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9b0ca3e_e4a5_410c_8a83_2ea92a890b45.slice/crio-84b1e843b9313f569ae06716ed95c3cd51f9348c829dbb62b0990568eea2aaac\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9b0ca3e_e4a5_410c_8a83_2ea92a890b45.slice\": RecentStats: unable to find data in memory cache]" Nov 23 07:41:23 crc kubenswrapper[4681]: I1123 07:41:23.393955 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rnk75"] Nov 23 07:41:23 crc kubenswrapper[4681]: E1123 07:41:23.394659 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9b0ca3e-e4a5-410c-8a83-2ea92a890b45" 
containerName="extract-utilities" Nov 23 07:41:23 crc kubenswrapper[4681]: I1123 07:41:23.394671 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9b0ca3e-e4a5-410c-8a83-2ea92a890b45" containerName="extract-utilities" Nov 23 07:41:23 crc kubenswrapper[4681]: E1123 07:41:23.394686 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9b0ca3e-e4a5-410c-8a83-2ea92a890b45" containerName="extract-content" Nov 23 07:41:23 crc kubenswrapper[4681]: I1123 07:41:23.394692 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9b0ca3e-e4a5-410c-8a83-2ea92a890b45" containerName="extract-content" Nov 23 07:41:23 crc kubenswrapper[4681]: E1123 07:41:23.394719 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9b0ca3e-e4a5-410c-8a83-2ea92a890b45" containerName="registry-server" Nov 23 07:41:23 crc kubenswrapper[4681]: I1123 07:41:23.394724 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9b0ca3e-e4a5-410c-8a83-2ea92a890b45" containerName="registry-server" Nov 23 07:41:23 crc kubenswrapper[4681]: I1123 07:41:23.394894 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9b0ca3e-e4a5-410c-8a83-2ea92a890b45" containerName="registry-server" Nov 23 07:41:23 crc kubenswrapper[4681]: I1123 07:41:23.396070 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rnk75" Nov 23 07:41:23 crc kubenswrapper[4681]: I1123 07:41:23.412216 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnk75"] Nov 23 07:41:23 crc kubenswrapper[4681]: I1123 07:41:23.542948 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82395db6-c07a-48eb-83a2-4505b7f6c448-catalog-content\") pod \"redhat-marketplace-rnk75\" (UID: \"82395db6-c07a-48eb-83a2-4505b7f6c448\") " pod="openshift-marketplace/redhat-marketplace-rnk75" Nov 23 07:41:23 crc kubenswrapper[4681]: I1123 07:41:23.543002 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82395db6-c07a-48eb-83a2-4505b7f6c448-utilities\") pod \"redhat-marketplace-rnk75\" (UID: \"82395db6-c07a-48eb-83a2-4505b7f6c448\") " pod="openshift-marketplace/redhat-marketplace-rnk75" Nov 23 07:41:23 crc kubenswrapper[4681]: I1123 07:41:23.543027 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5drmz\" (UniqueName: \"kubernetes.io/projected/82395db6-c07a-48eb-83a2-4505b7f6c448-kube-api-access-5drmz\") pod \"redhat-marketplace-rnk75\" (UID: \"82395db6-c07a-48eb-83a2-4505b7f6c448\") " pod="openshift-marketplace/redhat-marketplace-rnk75" Nov 23 07:41:23 crc kubenswrapper[4681]: I1123 07:41:23.644783 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5drmz\" (UniqueName: \"kubernetes.io/projected/82395db6-c07a-48eb-83a2-4505b7f6c448-kube-api-access-5drmz\") pod \"redhat-marketplace-rnk75\" (UID: \"82395db6-c07a-48eb-83a2-4505b7f6c448\") " pod="openshift-marketplace/redhat-marketplace-rnk75" Nov 23 07:41:23 crc kubenswrapper[4681]: I1123 07:41:23.645074 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82395db6-c07a-48eb-83a2-4505b7f6c448-catalog-content\") pod \"redhat-marketplace-rnk75\" (UID: 
\"82395db6-c07a-48eb-83a2-4505b7f6c448\") " pod="openshift-marketplace/redhat-marketplace-rnk75" Nov 23 07:41:23 crc kubenswrapper[4681]: I1123 07:41:23.645131 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82395db6-c07a-48eb-83a2-4505b7f6c448-utilities\") pod \"redhat-marketplace-rnk75\" (UID: \"82395db6-c07a-48eb-83a2-4505b7f6c448\") " pod="openshift-marketplace/redhat-marketplace-rnk75" Nov 23 07:41:23 crc kubenswrapper[4681]: I1123 07:41:23.645582 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82395db6-c07a-48eb-83a2-4505b7f6c448-catalog-content\") pod \"redhat-marketplace-rnk75\" (UID: \"82395db6-c07a-48eb-83a2-4505b7f6c448\") " pod="openshift-marketplace/redhat-marketplace-rnk75" Nov 23 07:41:23 crc kubenswrapper[4681]: I1123 07:41:23.645614 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82395db6-c07a-48eb-83a2-4505b7f6c448-utilities\") pod \"redhat-marketplace-rnk75\" (UID: \"82395db6-c07a-48eb-83a2-4505b7f6c448\") " pod="openshift-marketplace/redhat-marketplace-rnk75" Nov 23 07:41:23 crc kubenswrapper[4681]: I1123 07:41:23.660894 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5drmz\" (UniqueName: \"kubernetes.io/projected/82395db6-c07a-48eb-83a2-4505b7f6c448-kube-api-access-5drmz\") pod \"redhat-marketplace-rnk75\" (UID: \"82395db6-c07a-48eb-83a2-4505b7f6c448\") " pod="openshift-marketplace/redhat-marketplace-rnk75" Nov 23 07:41:23 crc kubenswrapper[4681]: I1123 07:41:23.710805 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rnk75" Nov 23 07:41:24 crc kubenswrapper[4681]: I1123 07:41:24.107104 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnk75"] Nov 23 07:41:24 crc kubenswrapper[4681]: W1123 07:41:24.110809 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82395db6_c07a_48eb_83a2_4505b7f6c448.slice/crio-17eb9f48d662d57dd05a25b351338c709fd0de8909e8423d0d7e519b3d8bb774 WatchSource:0}: Error finding container 17eb9f48d662d57dd05a25b351338c709fd0de8909e8423d0d7e519b3d8bb774: Status 404 returned error can't find the container with id 17eb9f48d662d57dd05a25b351338c709fd0de8909e8423d0d7e519b3d8bb774 Nov 23 07:41:24 crc kubenswrapper[4681]: I1123 07:41:24.843429 4681 generic.go:334] "Generic (PLEG): container finished" podID="82395db6-c07a-48eb-83a2-4505b7f6c448" containerID="42b7f82d26fd9b20b0e2acfe1f1bac81656fa448c727c988726f067764e9d79a" exitCode=0 Nov 23 07:41:24 crc kubenswrapper[4681]: I1123 07:41:24.843613 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnk75" event={"ID":"82395db6-c07a-48eb-83a2-4505b7f6c448","Type":"ContainerDied","Data":"42b7f82d26fd9b20b0e2acfe1f1bac81656fa448c727c988726f067764e9d79a"} Nov 23 07:41:24 crc kubenswrapper[4681]: I1123 07:41:24.843884 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnk75" event={"ID":"82395db6-c07a-48eb-83a2-4505b7f6c448","Type":"ContainerStarted","Data":"17eb9f48d662d57dd05a25b351338c709fd0de8909e8423d0d7e519b3d8bb774"} Nov 23 07:41:25 crc kubenswrapper[4681]: I1123 07:41:25.852354 4681 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-rnk75" event={"ID":"82395db6-c07a-48eb-83a2-4505b7f6c448","Type":"ContainerStarted","Data":"4568c82dea5f13b58beb9b7d95882131b5c1ffe05e36bf3eecdd54e633db7966"} Nov 23 07:41:26 crc kubenswrapper[4681]: I1123 07:41:26.860754 4681 generic.go:334] "Generic (PLEG): container finished" podID="82395db6-c07a-48eb-83a2-4505b7f6c448" containerID="4568c82dea5f13b58beb9b7d95882131b5c1ffe05e36bf3eecdd54e633db7966" exitCode=0 Nov 23 07:41:26 crc kubenswrapper[4681]: I1123 07:41:26.860801 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnk75" event={"ID":"82395db6-c07a-48eb-83a2-4505b7f6c448","Type":"ContainerDied","Data":"4568c82dea5f13b58beb9b7d95882131b5c1ffe05e36bf3eecdd54e633db7966"} Nov 23 07:41:27 crc kubenswrapper[4681]: I1123 07:41:27.869441 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnk75" event={"ID":"82395db6-c07a-48eb-83a2-4505b7f6c448","Type":"ContainerStarted","Data":"1cb2e1053dca67b17d9635264c2dd32ff93dcd2726a194c3982dadee2362cafa"} Nov 23 07:41:27 crc kubenswrapper[4681]: I1123 07:41:27.887178 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rnk75" podStartSLOduration=2.410258901 podStartE2EDuration="4.887163859s" podCreationTimestamp="2025-11-23 07:41:23 +0000 UTC" firstStartedPulling="2025-11-23 07:41:24.844914659 +0000 UTC m=+3421.914423896" lastFinishedPulling="2025-11-23 07:41:27.321819616 +0000 UTC m=+3424.391328854" observedRunningTime="2025-11-23 07:41:27.882968738 +0000 UTC m=+3424.952477975" watchObservedRunningTime="2025-11-23 07:41:27.887163859 +0000 UTC m=+3424.956673096" Nov 23 07:41:27 crc kubenswrapper[4681]: I1123 07:41:27.980633 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bqtr9"] Nov 23 07:41:27 crc kubenswrapper[4681]: I1123 07:41:27.982443 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bqtr9" Nov 23 07:41:27 crc kubenswrapper[4681]: I1123 07:41:27.991163 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bqtr9"] Nov 23 07:41:28 crc kubenswrapper[4681]: I1123 07:41:28.135005 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h5nv\" (UniqueName: \"kubernetes.io/projected/6ab140a8-224b-401e-9f2c-b1d141b32b7e-kube-api-access-8h5nv\") pod \"redhat-operators-bqtr9\" (UID: \"6ab140a8-224b-401e-9f2c-b1d141b32b7e\") " pod="openshift-marketplace/redhat-operators-bqtr9" Nov 23 07:41:28 crc kubenswrapper[4681]: I1123 07:41:28.135188 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ab140a8-224b-401e-9f2c-b1d141b32b7e-utilities\") pod \"redhat-operators-bqtr9\" (UID: \"6ab140a8-224b-401e-9f2c-b1d141b32b7e\") " pod="openshift-marketplace/redhat-operators-bqtr9" Nov 23 07:41:28 crc kubenswrapper[4681]: I1123 07:41:28.135398 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ab140a8-224b-401e-9f2c-b1d141b32b7e-catalog-content\") pod \"redhat-operators-bqtr9\" (UID: \"6ab140a8-224b-401e-9f2c-b1d141b32b7e\") " pod="openshift-marketplace/redhat-operators-bqtr9" Nov 23 07:41:28 crc kubenswrapper[4681]: I1123 07:41:28.236909 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ab140a8-224b-401e-9f2c-b1d141b32b7e-utilities\") pod \"redhat-operators-bqtr9\" (UID: \"6ab140a8-224b-401e-9f2c-b1d141b32b7e\") " pod="openshift-marketplace/redhat-operators-bqtr9" Nov 23 07:41:28 crc kubenswrapper[4681]: I1123 07:41:28.237200 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ab140a8-224b-401e-9f2c-b1d141b32b7e-catalog-content\") pod \"redhat-operators-bqtr9\" (UID: \"6ab140a8-224b-401e-9f2c-b1d141b32b7e\") " pod="openshift-marketplace/redhat-operators-bqtr9" Nov 23 07:41:28 crc kubenswrapper[4681]: I1123 07:41:28.237306 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h5nv\" (UniqueName: \"kubernetes.io/projected/6ab140a8-224b-401e-9f2c-b1d141b32b7e-kube-api-access-8h5nv\") pod \"redhat-operators-bqtr9\" (UID: \"6ab140a8-224b-401e-9f2c-b1d141b32b7e\") " pod="openshift-marketplace/redhat-operators-bqtr9" Nov 23 07:41:28 crc kubenswrapper[4681]: I1123 07:41:28.237347 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ab140a8-224b-401e-9f2c-b1d141b32b7e-utilities\") pod \"redhat-operators-bqtr9\" (UID: \"6ab140a8-224b-401e-9f2c-b1d141b32b7e\") " pod="openshift-marketplace/redhat-operators-bqtr9" Nov 23 07:41:28 crc kubenswrapper[4681]: I1123 07:41:28.237552 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ab140a8-224b-401e-9f2c-b1d141b32b7e-catalog-content\") pod \"redhat-operators-bqtr9\" (UID: \"6ab140a8-224b-401e-9f2c-b1d141b32b7e\") " pod="openshift-marketplace/redhat-operators-bqtr9" Nov 23 07:41:28 crc kubenswrapper[4681]: I1123 07:41:28.253794 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8h5nv\" (UniqueName: \"kubernetes.io/projected/6ab140a8-224b-401e-9f2c-b1d141b32b7e-kube-api-access-8h5nv\") pod \"redhat-operators-bqtr9\" (UID: \"6ab140a8-224b-401e-9f2c-b1d141b32b7e\") " pod="openshift-marketplace/redhat-operators-bqtr9" Nov 23 07:41:28 crc kubenswrapper[4681]: I1123 07:41:28.295784 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bqtr9" Nov 23 07:41:28 crc kubenswrapper[4681]: I1123 07:41:28.716958 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bqtr9"] Nov 23 07:41:28 crc kubenswrapper[4681]: W1123 07:41:28.719783 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ab140a8_224b_401e_9f2c_b1d141b32b7e.slice/crio-d2e5f258c6c62267edca27e3879639775c09fc584a6e9203364b68baf7ec422a WatchSource:0}: Error finding container d2e5f258c6c62267edca27e3879639775c09fc584a6e9203364b68baf7ec422a: Status 404 returned error can't find the container with id d2e5f258c6c62267edca27e3879639775c09fc584a6e9203364b68baf7ec422a Nov 23 07:41:28 crc kubenswrapper[4681]: I1123 07:41:28.885388 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bqtr9" event={"ID":"6ab140a8-224b-401e-9f2c-b1d141b32b7e","Type":"ContainerStarted","Data":"771912066c5ee5d10e72888afec3fb419762ed877c6710e729c53c53db14a73b"} Nov 23 07:41:28 crc kubenswrapper[4681]: I1123 07:41:28.885518 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bqtr9" event={"ID":"6ab140a8-224b-401e-9f2c-b1d141b32b7e","Type":"ContainerStarted","Data":"d2e5f258c6c62267edca27e3879639775c09fc584a6e9203364b68baf7ec422a"} Nov 23 07:41:29 crc kubenswrapper[4681]: I1123 07:41:29.895917 4681 generic.go:334] "Generic (PLEG): container finished" podID="6ab140a8-224b-401e-9f2c-b1d141b32b7e" containerID="771912066c5ee5d10e72888afec3fb419762ed877c6710e729c53c53db14a73b" exitCode=0 Nov 23 07:41:29 crc kubenswrapper[4681]: I1123 07:41:29.895977 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bqtr9" event={"ID":"6ab140a8-224b-401e-9f2c-b1d141b32b7e","Type":"ContainerDied","Data":"771912066c5ee5d10e72888afec3fb419762ed877c6710e729c53c53db14a73b"} Nov 23 07:41:30 crc kubenswrapper[4681]: I1123 07:41:30.907056 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bqtr9" event={"ID":"6ab140a8-224b-401e-9f2c-b1d141b32b7e","Type":"ContainerStarted","Data":"a20dcbc90e25f1dba821d4111f75df0def6e221d29ceeda22087276fc336700f"} Nov 23 07:41:32 crc kubenswrapper[4681]: I1123 07:41:32.933584 4681 generic.go:334] "Generic (PLEG): container finished" podID="6ab140a8-224b-401e-9f2c-b1d141b32b7e" containerID="a20dcbc90e25f1dba821d4111f75df0def6e221d29ceeda22087276fc336700f" exitCode=0 Nov 23 07:41:32 crc kubenswrapper[4681]: I1123 07:41:32.933676 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bqtr9" event={"ID":"6ab140a8-224b-401e-9f2c-b1d141b32b7e","Type":"ContainerDied","Data":"a20dcbc90e25f1dba821d4111f75df0def6e221d29ceeda22087276fc336700f"} Nov 23 07:41:33 crc kubenswrapper[4681]: I1123 07:41:33.711064 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rnk75" Nov 23 07:41:33 crc kubenswrapper[4681]: I1123 07:41:33.711549 4681 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rnk75" Nov 23 07:41:33 crc kubenswrapper[4681]: I1123 07:41:33.770907 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rnk75" Nov 23 07:41:33 crc kubenswrapper[4681]: I1123 07:41:33.945651 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bqtr9" event={"ID":"6ab140a8-224b-401e-9f2c-b1d141b32b7e","Type":"ContainerStarted","Data":"b7a3b0c2faffbc2de973f65e02d0c060966d46071e2373237050d57e3d24ef38"} Nov 23 07:41:33 crc kubenswrapper[4681]: I1123 07:41:33.978303 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bqtr9" podStartSLOduration=3.4746044830000002 podStartE2EDuration="6.978268085s" podCreationTimestamp="2025-11-23 07:41:27 +0000 UTC" firstStartedPulling="2025-11-23 07:41:29.90094523 +0000 UTC m=+3426.970454468" lastFinishedPulling="2025-11-23 07:41:33.404608844 +0000 UTC m=+3430.474118070" observedRunningTime="2025-11-23 07:41:33.960989353 +0000 UTC m=+3431.030498589" watchObservedRunningTime="2025-11-23 07:41:33.978268085 +0000 UTC m=+3431.047777323" Nov 23 07:41:34 crc kubenswrapper[4681]: I1123 07:41:34.001590 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rnk75" Nov 23 07:41:35 crc kubenswrapper[4681]: I1123 07:41:35.977563 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnk75"] Nov 23 07:41:36 crc kubenswrapper[4681]: I1123 07:41:36.976972 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rnk75" podUID="82395db6-c07a-48eb-83a2-4505b7f6c448" containerName="registry-server" containerID="cri-o://1cb2e1053dca67b17d9635264c2dd32ff93dcd2726a194c3982dadee2362cafa" gracePeriod=2 Nov 23 07:41:37 crc kubenswrapper[4681]: I1123 07:41:37.469539 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rnk75" Nov 23 07:41:37 crc kubenswrapper[4681]: I1123 07:41:37.578877 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82395db6-c07a-48eb-83a2-4505b7f6c448-utilities\") pod \"82395db6-c07a-48eb-83a2-4505b7f6c448\" (UID: \"82395db6-c07a-48eb-83a2-4505b7f6c448\") " Nov 23 07:41:37 crc kubenswrapper[4681]: I1123 07:41:37.579169 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82395db6-c07a-48eb-83a2-4505b7f6c448-catalog-content\") pod \"82395db6-c07a-48eb-83a2-4505b7f6c448\" (UID: \"82395db6-c07a-48eb-83a2-4505b7f6c448\") " Nov 23 07:41:37 crc kubenswrapper[4681]: I1123 07:41:37.579229 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5drmz\" (UniqueName: \"kubernetes.io/projected/82395db6-c07a-48eb-83a2-4505b7f6c448-kube-api-access-5drmz\") pod \"82395db6-c07a-48eb-83a2-4505b7f6c448\" (UID: \"82395db6-c07a-48eb-83a2-4505b7f6c448\") " Nov 23 07:41:37 crc kubenswrapper[4681]: I1123 07:41:37.579637 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82395db6-c07a-48eb-83a2-4505b7f6c448-utilities" (OuterVolumeSpecName: "utilities") pod "82395db6-c07a-48eb-83a2-4505b7f6c448" (UID: "82395db6-c07a-48eb-83a2-4505b7f6c448"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:41:37 crc kubenswrapper[4681]: I1123 07:41:37.580119 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82395db6-c07a-48eb-83a2-4505b7f6c448-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:41:37 crc kubenswrapper[4681]: I1123 07:41:37.585487 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82395db6-c07a-48eb-83a2-4505b7f6c448-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82395db6-c07a-48eb-83a2-4505b7f6c448" (UID: "82395db6-c07a-48eb-83a2-4505b7f6c448"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:41:37 crc kubenswrapper[4681]: I1123 07:41:37.586233 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82395db6-c07a-48eb-83a2-4505b7f6c448-kube-api-access-5drmz" (OuterVolumeSpecName: "kube-api-access-5drmz") pod "82395db6-c07a-48eb-83a2-4505b7f6c448" (UID: "82395db6-c07a-48eb-83a2-4505b7f6c448"). InnerVolumeSpecName "kube-api-access-5drmz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:41:37 crc kubenswrapper[4681]: I1123 07:41:37.681453 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82395db6-c07a-48eb-83a2-4505b7f6c448-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:41:37 crc kubenswrapper[4681]: I1123 07:41:37.681530 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5drmz\" (UniqueName: \"kubernetes.io/projected/82395db6-c07a-48eb-83a2-4505b7f6c448-kube-api-access-5drmz\") on node \"crc\" DevicePath \"\"" Nov 23 07:41:37 crc kubenswrapper[4681]: I1123 07:41:37.987386 4681 generic.go:334] "Generic (PLEG): container finished" podID="82395db6-c07a-48eb-83a2-4505b7f6c448" containerID="1cb2e1053dca67b17d9635264c2dd32ff93dcd2726a194c3982dadee2362cafa" exitCode=0 Nov 23 07:41:37 crc kubenswrapper[4681]: I1123 07:41:37.987434 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnk75" event={"ID":"82395db6-c07a-48eb-83a2-4505b7f6c448","Type":"ContainerDied","Data":"1cb2e1053dca67b17d9635264c2dd32ff93dcd2726a194c3982dadee2362cafa"} Nov 23 07:41:37 crc kubenswrapper[4681]: I1123 07:41:37.987500 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnk75" event={"ID":"82395db6-c07a-48eb-83a2-4505b7f6c448","Type":"ContainerDied","Data":"17eb9f48d662d57dd05a25b351338c709fd0de8909e8423d0d7e519b3d8bb774"} Nov 23 07:41:37 crc kubenswrapper[4681]: I1123 07:41:37.987524 4681 scope.go:117] "RemoveContainer" containerID="1cb2e1053dca67b17d9635264c2dd32ff93dcd2726a194c3982dadee2362cafa" Nov 23 07:41:37 crc kubenswrapper[4681]: I1123 07:41:37.987672 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rnk75" Nov 23 07:41:38 crc kubenswrapper[4681]: I1123 07:41:38.013731 4681 scope.go:117] "RemoveContainer" containerID="4568c82dea5f13b58beb9b7d95882131b5c1ffe05e36bf3eecdd54e633db7966" Nov 23 07:41:38 crc kubenswrapper[4681]: I1123 07:41:38.029857 4681 scope.go:117] "RemoveContainer" containerID="42b7f82d26fd9b20b0e2acfe1f1bac81656fa448c727c988726f067764e9d79a" Nov 23 07:41:38 crc kubenswrapper[4681]: I1123 07:41:38.029998 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnk75"] Nov 23 07:41:38 crc kubenswrapper[4681]: I1123 07:41:38.037935 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnk75"] Nov 23 07:41:38 crc kubenswrapper[4681]: I1123 07:41:38.065047 4681 scope.go:117] "RemoveContainer" containerID="1cb2e1053dca67b17d9635264c2dd32ff93dcd2726a194c3982dadee2362cafa" Nov 23 07:41:38 crc kubenswrapper[4681]: E1123 07:41:38.065418 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cb2e1053dca67b17d9635264c2dd32ff93dcd2726a194c3982dadee2362cafa\": container with ID starting with 1cb2e1053dca67b17d9635264c2dd32ff93dcd2726a194c3982dadee2362cafa not found: ID does not exist" containerID="1cb2e1053dca67b17d9635264c2dd32ff93dcd2726a194c3982dadee2362cafa" Nov 23 07:41:38 crc kubenswrapper[4681]: I1123 07:41:38.065492 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cb2e1053dca67b17d9635264c2dd32ff93dcd2726a194c3982dadee2362cafa"} err="failed to get container status \"1cb2e1053dca67b17d9635264c2dd32ff93dcd2726a194c3982dadee2362cafa\": rpc error: code = NotFound desc = could not find container \"1cb2e1053dca67b17d9635264c2dd32ff93dcd2726a194c3982dadee2362cafa\": container with ID starting with 1cb2e1053dca67b17d9635264c2dd32ff93dcd2726a194c3982dadee2362cafa not found: ID does not exist" Nov 23 07:41:38 crc kubenswrapper[4681]: I1123 07:41:38.065530 4681 scope.go:117] "RemoveContainer" containerID="4568c82dea5f13b58beb9b7d95882131b5c1ffe05e36bf3eecdd54e633db7966" Nov 23 07:41:38 crc kubenswrapper[4681]: E1123 07:41:38.065804 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4568c82dea5f13b58beb9b7d95882131b5c1ffe05e36bf3eecdd54e633db7966\": container with ID starting with 4568c82dea5f13b58beb9b7d95882131b5c1ffe05e36bf3eecdd54e633db7966 not found: ID does not exist" containerID="4568c82dea5f13b58beb9b7d95882131b5c1ffe05e36bf3eecdd54e633db7966" Nov 23 07:41:38 crc kubenswrapper[4681]: I1123 07:41:38.065839 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4568c82dea5f13b58beb9b7d95882131b5c1ffe05e36bf3eecdd54e633db7966"} err="failed to get container status \"4568c82dea5f13b58beb9b7d95882131b5c1ffe05e36bf3eecdd54e633db7966\": rpc error: code = NotFound desc = could not find container \"4568c82dea5f13b58beb9b7d95882131b5c1ffe05e36bf3eecdd54e633db7966\": container with ID starting with 4568c82dea5f13b58beb9b7d95882131b5c1ffe05e36bf3eecdd54e633db7966 not found: ID does not exist" Nov 23 07:41:38 crc kubenswrapper[4681]: I1123 07:41:38.065857 4681 scope.go:117] "RemoveContainer" containerID="42b7f82d26fd9b20b0e2acfe1f1bac81656fa448c727c988726f067764e9d79a" Nov 23 07:41:38 crc kubenswrapper[4681]: E1123 07:41:38.066249 4681 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"42b7f82d26fd9b20b0e2acfe1f1bac81656fa448c727c988726f067764e9d79a\": container with ID starting with 42b7f82d26fd9b20b0e2acfe1f1bac81656fa448c727c988726f067764e9d79a not found: ID does not exist" containerID="42b7f82d26fd9b20b0e2acfe1f1bac81656fa448c727c988726f067764e9d79a" Nov 23 07:41:38 crc kubenswrapper[4681]: I1123 07:41:38.066324 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42b7f82d26fd9b20b0e2acfe1f1bac81656fa448c727c988726f067764e9d79a"} err="failed to get container status \"42b7f82d26fd9b20b0e2acfe1f1bac81656fa448c727c988726f067764e9d79a\": rpc error: code = NotFound desc = could not find container \"42b7f82d26fd9b20b0e2acfe1f1bac81656fa448c727c988726f067764e9d79a\": container with ID starting with 42b7f82d26fd9b20b0e2acfe1f1bac81656fa448c727c988726f067764e9d79a not found: ID does not exist" Nov 23 07:41:38 crc kubenswrapper[4681]: I1123 07:41:38.297565 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bqtr9" Nov 23 07:41:38 crc kubenswrapper[4681]: I1123 07:41:38.297624 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bqtr9" Nov 23 07:41:39 crc kubenswrapper[4681]: I1123 07:41:39.261897 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82395db6-c07a-48eb-83a2-4505b7f6c448" path="/var/lib/kubelet/pods/82395db6-c07a-48eb-83a2-4505b7f6c448/volumes" Nov 23 07:41:39 crc kubenswrapper[4681]: I1123 07:41:39.353552 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bqtr9" podUID="6ab140a8-224b-401e-9f2c-b1d141b32b7e" containerName="registry-server" probeResult="failure" output=< Nov 23 07:41:39 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Nov 23 07:41:39 crc kubenswrapper[4681]: > Nov 23 07:41:48 crc kubenswrapper[4681]: I1123 07:41:48.337741 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bqtr9" Nov 23 07:41:48 crc kubenswrapper[4681]: I1123 07:41:48.376223 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bqtr9" Nov 23 07:41:48 crc kubenswrapper[4681]: I1123 07:41:48.569823 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bqtr9"] Nov 23 07:41:50 crc kubenswrapper[4681]: I1123 07:41:50.085767 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bqtr9" podUID="6ab140a8-224b-401e-9f2c-b1d141b32b7e" containerName="registry-server" containerID="cri-o://b7a3b0c2faffbc2de973f65e02d0c060966d46071e2373237050d57e3d24ef38" gracePeriod=2 Nov 23 07:41:50 crc kubenswrapper[4681]: I1123 07:41:50.552382 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bqtr9" Nov 23 07:41:50 crc kubenswrapper[4681]: I1123 07:41:50.644827 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8h5nv\" (UniqueName: \"kubernetes.io/projected/6ab140a8-224b-401e-9f2c-b1d141b32b7e-kube-api-access-8h5nv\") pod \"6ab140a8-224b-401e-9f2c-b1d141b32b7e\" (UID: \"6ab140a8-224b-401e-9f2c-b1d141b32b7e\") " Nov 23 07:41:50 crc kubenswrapper[4681]: I1123 07:41:50.645169 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ab140a8-224b-401e-9f2c-b1d141b32b7e-utilities\") pod \"6ab140a8-224b-401e-9f2c-b1d141b32b7e\" (UID: \"6ab140a8-224b-401e-9f2c-b1d141b32b7e\") " Nov 23 07:41:50 crc kubenswrapper[4681]: I1123 07:41:50.645253 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ab140a8-224b-401e-9f2c-b1d141b32b7e-catalog-content\") pod \"6ab140a8-224b-401e-9f2c-b1d141b32b7e\" (UID: \"6ab140a8-224b-401e-9f2c-b1d141b32b7e\") " Nov 23 07:41:50 crc kubenswrapper[4681]: I1123 07:41:50.645638 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ab140a8-224b-401e-9f2c-b1d141b32b7e-utilities" (OuterVolumeSpecName: "utilities") pod "6ab140a8-224b-401e-9f2c-b1d141b32b7e" (UID: "6ab140a8-224b-401e-9f2c-b1d141b32b7e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:41:50 crc kubenswrapper[4681]: I1123 07:41:50.646299 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ab140a8-224b-401e-9f2c-b1d141b32b7e-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:41:50 crc kubenswrapper[4681]: I1123 07:41:50.650948 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ab140a8-224b-401e-9f2c-b1d141b32b7e-kube-api-access-8h5nv" (OuterVolumeSpecName: "kube-api-access-8h5nv") pod "6ab140a8-224b-401e-9f2c-b1d141b32b7e" (UID: "6ab140a8-224b-401e-9f2c-b1d141b32b7e"). InnerVolumeSpecName "kube-api-access-8h5nv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:41:50 crc kubenswrapper[4681]: I1123 07:41:50.709372 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ab140a8-224b-401e-9f2c-b1d141b32b7e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ab140a8-224b-401e-9f2c-b1d141b32b7e" (UID: "6ab140a8-224b-401e-9f2c-b1d141b32b7e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:41:50 crc kubenswrapper[4681]: I1123 07:41:50.748348 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8h5nv\" (UniqueName: \"kubernetes.io/projected/6ab140a8-224b-401e-9f2c-b1d141b32b7e-kube-api-access-8h5nv\") on node \"crc\" DevicePath \"\"" Nov 23 07:41:50 crc kubenswrapper[4681]: I1123 07:41:50.748373 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ab140a8-224b-401e-9f2c-b1d141b32b7e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:41:51 crc kubenswrapper[4681]: I1123 07:41:51.096844 4681 generic.go:334] "Generic (PLEG): container finished" podID="6ab140a8-224b-401e-9f2c-b1d141b32b7e" containerID="b7a3b0c2faffbc2de973f65e02d0c060966d46071e2373237050d57e3d24ef38" exitCode=0 Nov 23 07:41:51 crc kubenswrapper[4681]: I1123 07:41:51.096901 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bqtr9" Nov 23 07:41:51 crc kubenswrapper[4681]: I1123 07:41:51.096923 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bqtr9" event={"ID":"6ab140a8-224b-401e-9f2c-b1d141b32b7e","Type":"ContainerDied","Data":"b7a3b0c2faffbc2de973f65e02d0c060966d46071e2373237050d57e3d24ef38"} Nov 23 07:41:51 crc kubenswrapper[4681]: I1123 07:41:51.098227 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bqtr9" event={"ID":"6ab140a8-224b-401e-9f2c-b1d141b32b7e","Type":"ContainerDied","Data":"d2e5f258c6c62267edca27e3879639775c09fc584a6e9203364b68baf7ec422a"} Nov 23 07:41:51 crc kubenswrapper[4681]: I1123 07:41:51.098298 4681 scope.go:117] "RemoveContainer" containerID="b7a3b0c2faffbc2de973f65e02d0c060966d46071e2373237050d57e3d24ef38" Nov 23 07:41:51 crc kubenswrapper[4681]: I1123 07:41:51.128880 4681 scope.go:117] "RemoveContainer" containerID="a20dcbc90e25f1dba821d4111f75df0def6e221d29ceeda22087276fc336700f" Nov 23 07:41:51 crc kubenswrapper[4681]: I1123 07:41:51.132937 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bqtr9"] Nov 23 07:41:51 crc kubenswrapper[4681]: I1123 07:41:51.139772 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bqtr9"] Nov 23 07:41:51 crc kubenswrapper[4681]: I1123 07:41:51.155906 4681 scope.go:117] "RemoveContainer" containerID="771912066c5ee5d10e72888afec3fb419762ed877c6710e729c53c53db14a73b" Nov 23 07:41:51 crc kubenswrapper[4681]: I1123 07:41:51.204526 4681 scope.go:117] "RemoveContainer" containerID="b7a3b0c2faffbc2de973f65e02d0c060966d46071e2373237050d57e3d24ef38" Nov 23 07:41:51 crc kubenswrapper[4681]: E1123 07:41:51.205228 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7a3b0c2faffbc2de973f65e02d0c060966d46071e2373237050d57e3d24ef38\": container with ID starting with b7a3b0c2faffbc2de973f65e02d0c060966d46071e2373237050d57e3d24ef38 not found: ID does not exist" containerID="b7a3b0c2faffbc2de973f65e02d0c060966d46071e2373237050d57e3d24ef38" Nov 23 07:41:51 crc kubenswrapper[4681]: I1123 07:41:51.205262 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7a3b0c2faffbc2de973f65e02d0c060966d46071e2373237050d57e3d24ef38"} err="failed to get container status \"b7a3b0c2faffbc2de973f65e02d0c060966d46071e2373237050d57e3d24ef38\": 
rpc error: code = NotFound desc = could not find container \"b7a3b0c2faffbc2de973f65e02d0c060966d46071e2373237050d57e3d24ef38\": container with ID starting with b7a3b0c2faffbc2de973f65e02d0c060966d46071e2373237050d57e3d24ef38 not found: ID does not exist"
Nov 23 07:41:51 crc kubenswrapper[4681]: I1123 07:41:51.205287 4681 scope.go:117] "RemoveContainer" containerID="a20dcbc90e25f1dba821d4111f75df0def6e221d29ceeda22087276fc336700f"
Nov 23 07:41:51 crc kubenswrapper[4681]: E1123 07:41:51.205719 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a20dcbc90e25f1dba821d4111f75df0def6e221d29ceeda22087276fc336700f\": container with ID starting with a20dcbc90e25f1dba821d4111f75df0def6e221d29ceeda22087276fc336700f not found: ID does not exist" containerID="a20dcbc90e25f1dba821d4111f75df0def6e221d29ceeda22087276fc336700f"
Nov 23 07:41:51 crc kubenswrapper[4681]: I1123 07:41:51.205743 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a20dcbc90e25f1dba821d4111f75df0def6e221d29ceeda22087276fc336700f"} err="failed to get container status \"a20dcbc90e25f1dba821d4111f75df0def6e221d29ceeda22087276fc336700f\": rpc error: code = NotFound desc = could not find container \"a20dcbc90e25f1dba821d4111f75df0def6e221d29ceeda22087276fc336700f\": container with ID starting with a20dcbc90e25f1dba821d4111f75df0def6e221d29ceeda22087276fc336700f not found: ID does not exist"
Nov 23 07:41:51 crc kubenswrapper[4681]: I1123 07:41:51.205761 4681 scope.go:117] "RemoveContainer" containerID="771912066c5ee5d10e72888afec3fb419762ed877c6710e729c53c53db14a73b"
Nov 23 07:41:51 crc kubenswrapper[4681]: E1123 07:41:51.206125 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"771912066c5ee5d10e72888afec3fb419762ed877c6710e729c53c53db14a73b\": container with ID starting with 771912066c5ee5d10e72888afec3fb419762ed877c6710e729c53c53db14a73b not found: ID does not exist" containerID="771912066c5ee5d10e72888afec3fb419762ed877c6710e729c53c53db14a73b"
Nov 23 07:41:51 crc kubenswrapper[4681]: I1123 07:41:51.206145 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"771912066c5ee5d10e72888afec3fb419762ed877c6710e729c53c53db14a73b"} err="failed to get container status \"771912066c5ee5d10e72888afec3fb419762ed877c6710e729c53c53db14a73b\": rpc error: code = NotFound desc = could not find container \"771912066c5ee5d10e72888afec3fb419762ed877c6710e729c53c53db14a73b\": container with ID starting with 771912066c5ee5d10e72888afec3fb419762ed877c6710e729c53c53db14a73b not found: ID does not exist"
Nov 23 07:41:51 crc kubenswrapper[4681]: I1123 07:41:51.263240 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ab140a8-224b-401e-9f2c-b1d141b32b7e" path="/var/lib/kubelet/pods/6ab140a8-224b-401e-9f2c-b1d141b32b7e/volumes"
Nov 23 07:42:07 crc kubenswrapper[4681]: I1123 07:42:07.475480 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rppjl"]
Nov 23 07:42:07 crc kubenswrapper[4681]: E1123 07:42:07.485046 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82395db6-c07a-48eb-83a2-4505b7f6c448" containerName="extract-utilities"
Nov 23 07:42:07 crc kubenswrapper[4681]: I1123 07:42:07.485065 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="82395db6-c07a-48eb-83a2-4505b7f6c448" containerName="extract-utilities"
Nov 23 07:42:07 crc kubenswrapper[4681]: E1123 07:42:07.485080 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ab140a8-224b-401e-9f2c-b1d141b32b7e" containerName="registry-server"
Nov 23 07:42:07 crc kubenswrapper[4681]: I1123 07:42:07.485085 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ab140a8-224b-401e-9f2c-b1d141b32b7e" containerName="registry-server"
Nov 23 07:42:07 crc kubenswrapper[4681]: E1123 07:42:07.485101 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ab140a8-224b-401e-9f2c-b1d141b32b7e" containerName="extract-utilities"
Nov 23 07:42:07 crc kubenswrapper[4681]: I1123 07:42:07.485107 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ab140a8-224b-401e-9f2c-b1d141b32b7e" containerName="extract-utilities"
Nov 23 07:42:07 crc kubenswrapper[4681]: E1123 07:42:07.485119 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ab140a8-224b-401e-9f2c-b1d141b32b7e" containerName="extract-content"
Nov 23 07:42:07 crc kubenswrapper[4681]: I1123 07:42:07.485124 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ab140a8-224b-401e-9f2c-b1d141b32b7e" containerName="extract-content"
Nov 23 07:42:07 crc kubenswrapper[4681]: E1123 07:42:07.485144 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82395db6-c07a-48eb-83a2-4505b7f6c448" containerName="extract-content"
Nov 23 07:42:07 crc kubenswrapper[4681]: I1123 07:42:07.485151 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="82395db6-c07a-48eb-83a2-4505b7f6c448" containerName="extract-content"
Nov 23 07:42:07 crc kubenswrapper[4681]: E1123 07:42:07.485166 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82395db6-c07a-48eb-83a2-4505b7f6c448" containerName="registry-server"
Nov 23 07:42:07 crc kubenswrapper[4681]: I1123 07:42:07.485171 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="82395db6-c07a-48eb-83a2-4505b7f6c448" containerName="registry-server"
Nov 23 07:42:07 crc kubenswrapper[4681]: I1123 07:42:07.485393 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ab140a8-224b-401e-9f2c-b1d141b32b7e" containerName="registry-server"
Nov 23 07:42:07 crc kubenswrapper[4681]: I1123 07:42:07.485407 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="82395db6-c07a-48eb-83a2-4505b7f6c448" containerName="registry-server"
Nov 23 07:42:07 crc kubenswrapper[4681]: I1123 07:42:07.486674 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rppjl"]
Nov 23 07:42:07 crc kubenswrapper[4681]: I1123 07:42:07.486782 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rppjl"
Nov 23 07:42:07 crc kubenswrapper[4681]: I1123 07:42:07.564638 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqllg\" (UniqueName: \"kubernetes.io/projected/d93cd1d5-84f9-4032-a44a-7a96cb45e488-kube-api-access-zqllg\") pod \"certified-operators-rppjl\" (UID: \"d93cd1d5-84f9-4032-a44a-7a96cb45e488\") " pod="openshift-marketplace/certified-operators-rppjl"
Nov 23 07:42:07 crc kubenswrapper[4681]: I1123 07:42:07.564678 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d93cd1d5-84f9-4032-a44a-7a96cb45e488-utilities\") pod \"certified-operators-rppjl\" (UID: \"d93cd1d5-84f9-4032-a44a-7a96cb45e488\") " pod="openshift-marketplace/certified-operators-rppjl"
Nov 23 07:42:07 crc kubenswrapper[4681]: I1123 07:42:07.564699 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d93cd1d5-84f9-4032-a44a-7a96cb45e488-catalog-content\") pod \"certified-operators-rppjl\" (UID: \"d93cd1d5-84f9-4032-a44a-7a96cb45e488\") " pod="openshift-marketplace/certified-operators-rppjl"
Nov 23 07:42:07 crc kubenswrapper[4681]: I1123 07:42:07.666550 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqllg\" (UniqueName: \"kubernetes.io/projected/d93cd1d5-84f9-4032-a44a-7a96cb45e488-kube-api-access-zqllg\") pod \"certified-operators-rppjl\" (UID: \"d93cd1d5-84f9-4032-a44a-7a96cb45e488\") " pod="openshift-marketplace/certified-operators-rppjl"
Nov 23 07:42:07 crc kubenswrapper[4681]: I1123 07:42:07.666591 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d93cd1d5-84f9-4032-a44a-7a96cb45e488-utilities\") pod \"certified-operators-rppjl\" (UID: \"d93cd1d5-84f9-4032-a44a-7a96cb45e488\") " pod="openshift-marketplace/certified-operators-rppjl"
Nov 23 07:42:07 crc kubenswrapper[4681]: I1123 07:42:07.666609 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d93cd1d5-84f9-4032-a44a-7a96cb45e488-catalog-content\") pod \"certified-operators-rppjl\" (UID: \"d93cd1d5-84f9-4032-a44a-7a96cb45e488\") " pod="openshift-marketplace/certified-operators-rppjl"
Nov 23 07:42:07 crc kubenswrapper[4681]: I1123 07:42:07.667021 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d93cd1d5-84f9-4032-a44a-7a96cb45e488-catalog-content\") pod \"certified-operators-rppjl\" (UID: \"d93cd1d5-84f9-4032-a44a-7a96cb45e488\") " pod="openshift-marketplace/certified-operators-rppjl"
Nov 23 07:42:07 crc kubenswrapper[4681]: I1123 07:42:07.667509 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d93cd1d5-84f9-4032-a44a-7a96cb45e488-utilities\") pod \"certified-operators-rppjl\" (UID: \"d93cd1d5-84f9-4032-a44a-7a96cb45e488\") " pod="openshift-marketplace/certified-operators-rppjl"
Nov 23 07:42:07 crc kubenswrapper[4681]: I1123 07:42:07.683237 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqllg\" (UniqueName: \"kubernetes.io/projected/d93cd1d5-84f9-4032-a44a-7a96cb45e488-kube-api-access-zqllg\") pod \"certified-operators-rppjl\" (UID: \"d93cd1d5-84f9-4032-a44a-7a96cb45e488\") " pod="openshift-marketplace/certified-operators-rppjl"
Nov 23 07:42:07 crc kubenswrapper[4681]: I1123 07:42:07.808052 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rppjl"
Nov 23 07:42:08 crc kubenswrapper[4681]: I1123 07:42:08.279758 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rppjl"]
Nov 23 07:42:09 crc kubenswrapper[4681]: I1123 07:42:09.221938 4681 generic.go:334] "Generic (PLEG): container finished" podID="d93cd1d5-84f9-4032-a44a-7a96cb45e488" containerID="1b3db159ce3c885dc535621847537605bd9d5f97ac053d1d7c09558998d4548f" exitCode=0
Nov 23 07:42:09 crc kubenswrapper[4681]: I1123 07:42:09.221992 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rppjl" event={"ID":"d93cd1d5-84f9-4032-a44a-7a96cb45e488","Type":"ContainerDied","Data":"1b3db159ce3c885dc535621847537605bd9d5f97ac053d1d7c09558998d4548f"}
Nov 23 07:42:09 crc kubenswrapper[4681]: I1123 07:42:09.222191 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rppjl" event={"ID":"d93cd1d5-84f9-4032-a44a-7a96cb45e488","Type":"ContainerStarted","Data":"72215adb4e8c2c172a19970981a6d16512407bb7635e3981f97408d0421fc544"}
Nov 23 07:42:10 crc kubenswrapper[4681]: I1123 07:42:10.231352 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rppjl" event={"ID":"d93cd1d5-84f9-4032-a44a-7a96cb45e488","Type":"ContainerStarted","Data":"971812b67c6fc7a9517643eb36891e3b2098a32433689b61068d2094adcd5050"}
Nov 23 07:42:11 crc kubenswrapper[4681]: I1123 07:42:11.243500 4681 generic.go:334] "Generic (PLEG): container finished" podID="d93cd1d5-84f9-4032-a44a-7a96cb45e488" containerID="971812b67c6fc7a9517643eb36891e3b2098a32433689b61068d2094adcd5050" exitCode=0
Nov 23 07:42:11 crc kubenswrapper[4681]: I1123 07:42:11.243582 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rppjl" event={"ID":"d93cd1d5-84f9-4032-a44a-7a96cb45e488","Type":"ContainerDied","Data":"971812b67c6fc7a9517643eb36891e3b2098a32433689b61068d2094adcd5050"}
Nov 23 07:42:12 crc kubenswrapper[4681]: I1123 07:42:12.252940 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rppjl" event={"ID":"d93cd1d5-84f9-4032-a44a-7a96cb45e488","Type":"ContainerStarted","Data":"7eab6cdd567d95481e3b5c9ab5f3d0691cdd35ac74e71043da7530723dcb0ac3"}
Nov 23 07:42:17 crc kubenswrapper[4681]: I1123 07:42:17.809030 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rppjl"
Nov 23 07:42:17 crc kubenswrapper[4681]: I1123 07:42:17.809431 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rppjl"
Nov 23 07:42:17 crc kubenswrapper[4681]: I1123 07:42:17.843721 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rppjl"
Nov 23 07:42:17 crc kubenswrapper[4681]: I1123 07:42:17.863404 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rppjl" podStartSLOduration=8.395669095 podStartE2EDuration="10.863380737s" podCreationTimestamp="2025-11-23 07:42:07 +0000 UTC" firstStartedPulling="2025-11-23 07:42:09.223440103 +0000 UTC m=+3466.292949339" lastFinishedPulling="2025-11-23 07:42:11.691151743 +0000 UTC m=+3468.760660981" observedRunningTime="2025-11-23 07:42:12.26689228 +0000 UTC m=+3469.336401518" watchObservedRunningTime="2025-11-23 07:42:17.863380737 +0000 UTC m=+3474.932889974"
Nov 23 07:42:18 crc kubenswrapper[4681]: I1123 07:42:18.329747 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rppjl"
Nov 23 07:42:18 crc kubenswrapper[4681]: I1123 07:42:18.374584 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rppjl"]
Nov 23 07:42:20 crc kubenswrapper[4681]: I1123 07:42:20.308926 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rppjl" podUID="d93cd1d5-84f9-4032-a44a-7a96cb45e488" containerName="registry-server" containerID="cri-o://7eab6cdd567d95481e3b5c9ab5f3d0691cdd35ac74e71043da7530723dcb0ac3" gracePeriod=2
Nov 23 07:42:20 crc kubenswrapper[4681]: E1123 07:42:20.434903 4681 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd93cd1d5_84f9_4032_a44a_7a96cb45e488.slice/crio-conmon-7eab6cdd567d95481e3b5c9ab5f3d0691cdd35ac74e71043da7530723dcb0ac3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd93cd1d5_84f9_4032_a44a_7a96cb45e488.slice/crio-7eab6cdd567d95481e3b5c9ab5f3d0691cdd35ac74e71043da7530723dcb0ac3.scope\": RecentStats: unable to find data in memory cache]"
Nov 23 07:42:20 crc kubenswrapper[4681]: I1123 07:42:20.772382 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rppjl"
Nov 23 07:42:20 crc kubenswrapper[4681]: I1123 07:42:20.892420 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqllg\" (UniqueName: \"kubernetes.io/projected/d93cd1d5-84f9-4032-a44a-7a96cb45e488-kube-api-access-zqllg\") pod \"d93cd1d5-84f9-4032-a44a-7a96cb45e488\" (UID: \"d93cd1d5-84f9-4032-a44a-7a96cb45e488\") "
Nov 23 07:42:20 crc kubenswrapper[4681]: I1123 07:42:20.892536 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d93cd1d5-84f9-4032-a44a-7a96cb45e488-utilities\") pod \"d93cd1d5-84f9-4032-a44a-7a96cb45e488\" (UID: \"d93cd1d5-84f9-4032-a44a-7a96cb45e488\") "
Nov 23 07:42:20 crc kubenswrapper[4681]: I1123 07:42:20.892582 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d93cd1d5-84f9-4032-a44a-7a96cb45e488-catalog-content\") pod \"d93cd1d5-84f9-4032-a44a-7a96cb45e488\" (UID: \"d93cd1d5-84f9-4032-a44a-7a96cb45e488\") "
Nov 23 07:42:20 crc kubenswrapper[4681]: I1123 07:42:20.893354 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d93cd1d5-84f9-4032-a44a-7a96cb45e488-utilities" (OuterVolumeSpecName: "utilities") pod "d93cd1d5-84f9-4032-a44a-7a96cb45e488" (UID: "d93cd1d5-84f9-4032-a44a-7a96cb45e488"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:42:20 crc kubenswrapper[4681]: I1123 07:42:20.905610 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d93cd1d5-84f9-4032-a44a-7a96cb45e488-kube-api-access-zqllg" (OuterVolumeSpecName: "kube-api-access-zqllg") pod "d93cd1d5-84f9-4032-a44a-7a96cb45e488" (UID: "d93cd1d5-84f9-4032-a44a-7a96cb45e488"). InnerVolumeSpecName "kube-api-access-zqllg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:42:20 crc kubenswrapper[4681]: I1123 07:42:20.931520 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d93cd1d5-84f9-4032-a44a-7a96cb45e488-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d93cd1d5-84f9-4032-a44a-7a96cb45e488" (UID: "d93cd1d5-84f9-4032-a44a-7a96cb45e488"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:42:20 crc kubenswrapper[4681]: I1123 07:42:20.994388 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqllg\" (UniqueName: \"kubernetes.io/projected/d93cd1d5-84f9-4032-a44a-7a96cb45e488-kube-api-access-zqllg\") on node \"crc\" DevicePath \"\""
Nov 23 07:42:20 crc kubenswrapper[4681]: I1123 07:42:20.994424 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d93cd1d5-84f9-4032-a44a-7a96cb45e488-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 07:42:20 crc kubenswrapper[4681]: I1123 07:42:20.994433 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d93cd1d5-84f9-4032-a44a-7a96cb45e488-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 07:42:21 crc kubenswrapper[4681]: I1123 07:42:21.317811 4681 generic.go:334] "Generic (PLEG): container finished" podID="d93cd1d5-84f9-4032-a44a-7a96cb45e488" containerID="7eab6cdd567d95481e3b5c9ab5f3d0691cdd35ac74e71043da7530723dcb0ac3" exitCode=0
Nov 23 07:42:21 crc kubenswrapper[4681]: I1123 07:42:21.317848 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rppjl" event={"ID":"d93cd1d5-84f9-4032-a44a-7a96cb45e488","Type":"ContainerDied","Data":"7eab6cdd567d95481e3b5c9ab5f3d0691cdd35ac74e71043da7530723dcb0ac3"}
Nov 23 07:42:21 crc kubenswrapper[4681]: I1123 07:42:21.317873 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rppjl" event={"ID":"d93cd1d5-84f9-4032-a44a-7a96cb45e488","Type":"ContainerDied","Data":"72215adb4e8c2c172a19970981a6d16512407bb7635e3981f97408d0421fc544"}
Nov 23 07:42:21 crc kubenswrapper[4681]: I1123 07:42:21.317891 4681 scope.go:117] "RemoveContainer" containerID="7eab6cdd567d95481e3b5c9ab5f3d0691cdd35ac74e71043da7530723dcb0ac3"
Nov 23 07:42:21 crc kubenswrapper[4681]: I1123 07:42:21.318559 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rppjl"
Nov 23 07:42:21 crc kubenswrapper[4681]: I1123 07:42:21.341480 4681 scope.go:117] "RemoveContainer" containerID="971812b67c6fc7a9517643eb36891e3b2098a32433689b61068d2094adcd5050"
Nov 23 07:42:21 crc kubenswrapper[4681]: I1123 07:42:21.343324 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rppjl"]
Nov 23 07:42:21 crc kubenswrapper[4681]: I1123 07:42:21.375821 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rppjl"]
Nov 23 07:42:21 crc kubenswrapper[4681]: I1123 07:42:21.379253 4681 scope.go:117] "RemoveContainer" containerID="1b3db159ce3c885dc535621847537605bd9d5f97ac053d1d7c09558998d4548f"
Nov 23 07:42:21 crc kubenswrapper[4681]: I1123 07:42:21.399279 4681 scope.go:117] "RemoveContainer" containerID="7eab6cdd567d95481e3b5c9ab5f3d0691cdd35ac74e71043da7530723dcb0ac3"
Nov 23 07:42:21 crc kubenswrapper[4681]: E1123 07:42:21.399669 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7eab6cdd567d95481e3b5c9ab5f3d0691cdd35ac74e71043da7530723dcb0ac3\": container with ID starting with 7eab6cdd567d95481e3b5c9ab5f3d0691cdd35ac74e71043da7530723dcb0ac3 not found: ID does not exist" containerID="7eab6cdd567d95481e3b5c9ab5f3d0691cdd35ac74e71043da7530723dcb0ac3"
Nov 23 07:42:21 crc kubenswrapper[4681]: I1123 07:42:21.399711 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eab6cdd567d95481e3b5c9ab5f3d0691cdd35ac74e71043da7530723dcb0ac3"} err="failed to get container status \"7eab6cdd567d95481e3b5c9ab5f3d0691cdd35ac74e71043da7530723dcb0ac3\": rpc error: code = NotFound desc = could not find container \"7eab6cdd567d95481e3b5c9ab5f3d0691cdd35ac74e71043da7530723dcb0ac3\": container with ID starting with 7eab6cdd567d95481e3b5c9ab5f3d0691cdd35ac74e71043da7530723dcb0ac3 not found: ID does not exist"
Nov 23 07:42:21 crc kubenswrapper[4681]: I1123 07:42:21.399738 4681 scope.go:117] "RemoveContainer" containerID="971812b67c6fc7a9517643eb36891e3b2098a32433689b61068d2094adcd5050"
Nov 23 07:42:21 crc kubenswrapper[4681]: E1123 07:42:21.400087 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"971812b67c6fc7a9517643eb36891e3b2098a32433689b61068d2094adcd5050\": container with ID starting with 971812b67c6fc7a9517643eb36891e3b2098a32433689b61068d2094adcd5050 not found: ID does not exist" containerID="971812b67c6fc7a9517643eb36891e3b2098a32433689b61068d2094adcd5050"
Nov 23 07:42:21 crc kubenswrapper[4681]: I1123 07:42:21.400115 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"971812b67c6fc7a9517643eb36891e3b2098a32433689b61068d2094adcd5050"} err="failed to get container status \"971812b67c6fc7a9517643eb36891e3b2098a32433689b61068d2094adcd5050\": rpc error: code = NotFound desc = could not find container \"971812b67c6fc7a9517643eb36891e3b2098a32433689b61068d2094adcd5050\": container with ID starting with 971812b67c6fc7a9517643eb36891e3b2098a32433689b61068d2094adcd5050 not found: ID does not exist"
Nov 23 07:42:21 crc kubenswrapper[4681]: I1123 07:42:21.400129 4681 scope.go:117] "RemoveContainer" containerID="1b3db159ce3c885dc535621847537605bd9d5f97ac053d1d7c09558998d4548f"
Nov 23 07:42:21 crc kubenswrapper[4681]: E1123 07:42:21.400510 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b3db159ce3c885dc535621847537605bd9d5f97ac053d1d7c09558998d4548f\": container with ID starting with 1b3db159ce3c885dc535621847537605bd9d5f97ac053d1d7c09558998d4548f not found: ID does not exist" containerID="1b3db159ce3c885dc535621847537605bd9d5f97ac053d1d7c09558998d4548f"
Nov 23 07:42:21 crc kubenswrapper[4681]: I1123 07:42:21.400546 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b3db159ce3c885dc535621847537605bd9d5f97ac053d1d7c09558998d4548f"} err="failed to get container status \"1b3db159ce3c885dc535621847537605bd9d5f97ac053d1d7c09558998d4548f\": rpc error: code = NotFound desc = could not find container \"1b3db159ce3c885dc535621847537605bd9d5f97ac053d1d7c09558998d4548f\": container with ID starting with 1b3db159ce3c885dc535621847537605bd9d5f97ac053d1d7c09558998d4548f not found: ID does not exist"
Nov 23 07:42:23 crc kubenswrapper[4681]: I1123 07:42:23.260131 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d93cd1d5-84f9-4032-a44a-7a96cb45e488" path="/var/lib/kubelet/pods/d93cd1d5-84f9-4032-a44a-7a96cb45e488/volumes"
Nov 23 07:43:12 crc kubenswrapper[4681]: I1123 07:43:12.296253 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 07:43:12 crc kubenswrapper[4681]: I1123 07:43:12.296544 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 07:43:42 crc kubenswrapper[4681]: I1123 07:43:42.295182 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 07:43:42 crc kubenswrapper[4681]: I1123 07:43:42.295582 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 07:44:12 crc kubenswrapper[4681]: I1123 07:44:12.295335 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 07:44:12 crc kubenswrapper[4681]: I1123 07:44:12.296045 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 07:44:12 crc kubenswrapper[4681]: I1123 07:44:12.296110 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt"
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" Nov 23 07:44:12 crc kubenswrapper[4681]: I1123 07:44:12.296663 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a4b8f5958195bfaec2b7ee95ad577a1e88aeae4a6622a1aef72875213c225899"} pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 07:44:12 crc kubenswrapper[4681]: I1123 07:44:12.296729 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" containerID="cri-o://a4b8f5958195bfaec2b7ee95ad577a1e88aeae4a6622a1aef72875213c225899" gracePeriod=600 Nov 23 07:44:12 crc kubenswrapper[4681]: E1123 07:44:12.428637 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:44:13 crc kubenswrapper[4681]: I1123 07:44:13.059644 4681 generic.go:334] "Generic (PLEG): container finished" podID="539dc58c-e752-43c8-bdef-af87528b76f3" containerID="a4b8f5958195bfaec2b7ee95ad577a1e88aeae4a6622a1aef72875213c225899" exitCode=0 Nov 23 07:44:13 crc kubenswrapper[4681]: I1123 07:44:13.059736 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerDied","Data":"a4b8f5958195bfaec2b7ee95ad577a1e88aeae4a6622a1aef72875213c225899"} Nov 23 07:44:13 crc kubenswrapper[4681]: I1123 07:44:13.059998 4681 scope.go:117] "RemoveContainer" containerID="5d2ecf020ba8193f9764eb0866b58fb5b3e63dcb8a74657aae414db1f91128c4" Nov 23 07:44:13 crc kubenswrapper[4681]: I1123 07:44:13.060975 4681 scope.go:117] "RemoveContainer" containerID="a4b8f5958195bfaec2b7ee95ad577a1e88aeae4a6622a1aef72875213c225899" Nov 23 07:44:13 crc kubenswrapper[4681]: E1123 07:44:13.061297 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:44:27 crc kubenswrapper[4681]: I1123 07:44:27.252444 4681 scope.go:117] "RemoveContainer" containerID="a4b8f5958195bfaec2b7ee95ad577a1e88aeae4a6622a1aef72875213c225899" Nov 23 07:44:27 crc kubenswrapper[4681]: E1123 07:44:27.253206 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:44:41 crc 
kubenswrapper[4681]: I1123 07:44:41.251826 4681 scope.go:117] "RemoveContainer" containerID="a4b8f5958195bfaec2b7ee95ad577a1e88aeae4a6622a1aef72875213c225899" Nov 23 07:44:41 crc kubenswrapper[4681]: E1123 07:44:41.252548 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:44:55 crc kubenswrapper[4681]: I1123 07:44:55.251976 4681 scope.go:117] "RemoveContainer" containerID="a4b8f5958195bfaec2b7ee95ad577a1e88aeae4a6622a1aef72875213c225899" Nov 23 07:44:55 crc kubenswrapper[4681]: E1123 07:44:55.253042 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:45:00 crc kubenswrapper[4681]: I1123 07:45:00.210423 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398065-c786c"] Nov 23 07:45:00 crc kubenswrapper[4681]: E1123 07:45:00.211288 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d93cd1d5-84f9-4032-a44a-7a96cb45e488" containerName="registry-server" Nov 23 07:45:00 crc kubenswrapper[4681]: I1123 07:45:00.211303 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d93cd1d5-84f9-4032-a44a-7a96cb45e488" containerName="registry-server" Nov 23 07:45:00 crc kubenswrapper[4681]: E1123 07:45:00.211335 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d93cd1d5-84f9-4032-a44a-7a96cb45e488" containerName="extract-content" Nov 23 07:45:00 crc kubenswrapper[4681]: I1123 07:45:00.211341 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d93cd1d5-84f9-4032-a44a-7a96cb45e488" containerName="extract-content" Nov 23 07:45:00 crc kubenswrapper[4681]: E1123 07:45:00.211360 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d93cd1d5-84f9-4032-a44a-7a96cb45e488" containerName="extract-utilities" Nov 23 07:45:00 crc kubenswrapper[4681]: I1123 07:45:00.211366 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d93cd1d5-84f9-4032-a44a-7a96cb45e488" containerName="extract-utilities" Nov 23 07:45:00 crc kubenswrapper[4681]: I1123 07:45:00.211592 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d93cd1d5-84f9-4032-a44a-7a96cb45e488" containerName="registry-server" Nov 23 07:45:00 crc kubenswrapper[4681]: I1123 07:45:00.212202 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-c786c" Nov 23 07:45:00 crc kubenswrapper[4681]: I1123 07:45:00.219159 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 07:45:00 crc kubenswrapper[4681]: I1123 07:45:00.219165 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 07:45:00 crc kubenswrapper[4681]: I1123 07:45:00.238326 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90dcf73b-94ea-4db5-bae9-bc368ade1aee-config-volume\") pod \"collect-profiles-29398065-c786c\" (UID: \"90dcf73b-94ea-4db5-bae9-bc368ade1aee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-c786c" Nov 23 07:45:00 crc kubenswrapper[4681]: I1123 07:45:00.238811 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qktj\" (UniqueName: \"kubernetes.io/projected/90dcf73b-94ea-4db5-bae9-bc368ade1aee-kube-api-access-8qktj\") pod \"collect-profiles-29398065-c786c\" (UID: \"90dcf73b-94ea-4db5-bae9-bc368ade1aee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-c786c" Nov 23 07:45:00 crc kubenswrapper[4681]: I1123 07:45:00.238896 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90dcf73b-94ea-4db5-bae9-bc368ade1aee-secret-volume\") pod \"collect-profiles-29398065-c786c\" (UID: \"90dcf73b-94ea-4db5-bae9-bc368ade1aee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-c786c" Nov 23 07:45:00 crc kubenswrapper[4681]: I1123 07:45:00.312499 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398065-c786c"] Nov 23 07:45:00 crc kubenswrapper[4681]: I1123 07:45:00.339922 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qktj\" (UniqueName: \"kubernetes.io/projected/90dcf73b-94ea-4db5-bae9-bc368ade1aee-kube-api-access-8qktj\") pod \"collect-profiles-29398065-c786c\" (UID: \"90dcf73b-94ea-4db5-bae9-bc368ade1aee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-c786c" Nov 23 07:45:00 crc kubenswrapper[4681]: I1123 07:45:00.339983 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90dcf73b-94ea-4db5-bae9-bc368ade1aee-secret-volume\") pod \"collect-profiles-29398065-c786c\" (UID: \"90dcf73b-94ea-4db5-bae9-bc368ade1aee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-c786c" Nov 23 07:45:00 crc kubenswrapper[4681]: I1123 07:45:00.340037 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90dcf73b-94ea-4db5-bae9-bc368ade1aee-config-volume\") pod \"collect-profiles-29398065-c786c\" (UID: \"90dcf73b-94ea-4db5-bae9-bc368ade1aee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-c786c" Nov 23 07:45:00 crc kubenswrapper[4681]: I1123 07:45:00.341033 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90dcf73b-94ea-4db5-bae9-bc368ade1aee-config-volume\") pod 
\"collect-profiles-29398065-c786c\" (UID: \"90dcf73b-94ea-4db5-bae9-bc368ade1aee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-c786c" Nov 23 07:45:00 crc kubenswrapper[4681]: I1123 07:45:00.345835 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90dcf73b-94ea-4db5-bae9-bc368ade1aee-secret-volume\") pod \"collect-profiles-29398065-c786c\" (UID: \"90dcf73b-94ea-4db5-bae9-bc368ade1aee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-c786c" Nov 23 07:45:00 crc kubenswrapper[4681]: I1123 07:45:00.357137 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qktj\" (UniqueName: \"kubernetes.io/projected/90dcf73b-94ea-4db5-bae9-bc368ade1aee-kube-api-access-8qktj\") pod \"collect-profiles-29398065-c786c\" (UID: \"90dcf73b-94ea-4db5-bae9-bc368ade1aee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-c786c" Nov 23 07:45:00 crc kubenswrapper[4681]: I1123 07:45:00.530728 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-c786c" Nov 23 07:45:00 crc kubenswrapper[4681]: I1123 07:45:00.951452 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398065-c786c"] Nov 23 07:45:01 crc kubenswrapper[4681]: I1123 07:45:01.459380 4681 generic.go:334] "Generic (PLEG): container finished" podID="90dcf73b-94ea-4db5-bae9-bc368ade1aee" containerID="a9abf024ab1a36816512f3a105e69afb615696bae10e6fd2dd360ac2823da541" exitCode=0 Nov 23 07:45:01 crc kubenswrapper[4681]: I1123 07:45:01.459446 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-c786c" event={"ID":"90dcf73b-94ea-4db5-bae9-bc368ade1aee","Type":"ContainerDied","Data":"a9abf024ab1a36816512f3a105e69afb615696bae10e6fd2dd360ac2823da541"} Nov 23 07:45:01 crc kubenswrapper[4681]: I1123 07:45:01.459668 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-c786c" event={"ID":"90dcf73b-94ea-4db5-bae9-bc368ade1aee","Type":"ContainerStarted","Data":"edba231e594e01e7ab5e2b1e1864e2d458eaf840f872c8f28fdd97e90a1b2854"} Nov 23 07:45:02 crc kubenswrapper[4681]: I1123 07:45:02.714268 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-c786c" Nov 23 07:45:02 crc kubenswrapper[4681]: I1123 07:45:02.888827 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qktj\" (UniqueName: \"kubernetes.io/projected/90dcf73b-94ea-4db5-bae9-bc368ade1aee-kube-api-access-8qktj\") pod \"90dcf73b-94ea-4db5-bae9-bc368ade1aee\" (UID: \"90dcf73b-94ea-4db5-bae9-bc368ade1aee\") " Nov 23 07:45:02 crc kubenswrapper[4681]: I1123 07:45:02.888928 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90dcf73b-94ea-4db5-bae9-bc368ade1aee-config-volume\") pod \"90dcf73b-94ea-4db5-bae9-bc368ade1aee\" (UID: \"90dcf73b-94ea-4db5-bae9-bc368ade1aee\") " Nov 23 07:45:02 crc kubenswrapper[4681]: I1123 07:45:02.888956 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90dcf73b-94ea-4db5-bae9-bc368ade1aee-secret-volume\") pod \"90dcf73b-94ea-4db5-bae9-bc368ade1aee\" (UID: \"90dcf73b-94ea-4db5-bae9-bc368ade1aee\") " Nov 23 07:45:02 crc kubenswrapper[4681]: I1123 07:45:02.889439 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90dcf73b-94ea-4db5-bae9-bc368ade1aee-config-volume" (OuterVolumeSpecName: "config-volume") pod "90dcf73b-94ea-4db5-bae9-bc368ade1aee" (UID: "90dcf73b-94ea-4db5-bae9-bc368ade1aee"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:45:02 crc kubenswrapper[4681]: I1123 07:45:02.893569 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90dcf73b-94ea-4db5-bae9-bc368ade1aee-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "90dcf73b-94ea-4db5-bae9-bc368ade1aee" (UID: "90dcf73b-94ea-4db5-bae9-bc368ade1aee"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:45:02 crc kubenswrapper[4681]: I1123 07:45:02.893849 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90dcf73b-94ea-4db5-bae9-bc368ade1aee-kube-api-access-8qktj" (OuterVolumeSpecName: "kube-api-access-8qktj") pod "90dcf73b-94ea-4db5-bae9-bc368ade1aee" (UID: "90dcf73b-94ea-4db5-bae9-bc368ade1aee"). InnerVolumeSpecName "kube-api-access-8qktj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:45:02 crc kubenswrapper[4681]: I1123 07:45:02.990694 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qktj\" (UniqueName: \"kubernetes.io/projected/90dcf73b-94ea-4db5-bae9-bc368ade1aee-kube-api-access-8qktj\") on node \"crc\" DevicePath \"\"" Nov 23 07:45:02 crc kubenswrapper[4681]: I1123 07:45:02.990739 4681 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90dcf73b-94ea-4db5-bae9-bc368ade1aee-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 07:45:02 crc kubenswrapper[4681]: I1123 07:45:02.990749 4681 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90dcf73b-94ea-4db5-bae9-bc368ade1aee-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 07:45:03 crc kubenswrapper[4681]: I1123 07:45:03.472902 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-c786c" event={"ID":"90dcf73b-94ea-4db5-bae9-bc368ade1aee","Type":"ContainerDied","Data":"edba231e594e01e7ab5e2b1e1864e2d458eaf840f872c8f28fdd97e90a1b2854"} Nov 23 07:45:03 crc kubenswrapper[4681]: I1123 07:45:03.473146 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edba231e594e01e7ab5e2b1e1864e2d458eaf840f872c8f28fdd97e90a1b2854" Nov 23 07:45:03 crc kubenswrapper[4681]: I1123 07:45:03.472943 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-c786c" Nov 23 07:45:03 crc kubenswrapper[4681]: I1123 07:45:03.776579 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398020-n9kl4"] Nov 23 07:45:03 crc kubenswrapper[4681]: I1123 07:45:03.784158 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398020-n9kl4"] Nov 23 07:45:05 crc kubenswrapper[4681]: I1123 07:45:05.260131 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c443c21f-e6ff-4f01-a598-554f97be2872" path="/var/lib/kubelet/pods/c443c21f-e6ff-4f01-a598-554f97be2872/volumes" Nov 23 07:45:08 crc kubenswrapper[4681]: I1123 07:45:08.252035 4681 scope.go:117] "RemoveContainer" containerID="a4b8f5958195bfaec2b7ee95ad577a1e88aeae4a6622a1aef72875213c225899" Nov 23 07:45:08 crc kubenswrapper[4681]: E1123 07:45:08.252405 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:45:23 crc kubenswrapper[4681]: I1123 07:45:23.256779 4681 scope.go:117] "RemoveContainer" containerID="a4b8f5958195bfaec2b7ee95ad577a1e88aeae4a6622a1aef72875213c225899" Nov 23 07:45:23 crc kubenswrapper[4681]: E1123 07:45:23.257531 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:45:38 crc kubenswrapper[4681]: I1123 07:45:38.251512 4681 scope.go:117] "RemoveContainer" containerID="a4b8f5958195bfaec2b7ee95ad577a1e88aeae4a6622a1aef72875213c225899" Nov 23 07:45:38 crc kubenswrapper[4681]: E1123 07:45:38.252284 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:45:47 crc kubenswrapper[4681]: I1123 07:45:47.199982 4681 scope.go:117] "RemoveContainer" containerID="ea41bb3d88498c282ec3c619aaa6fbef303da1dcf6b84c6d6580fd27ce9b132d" Nov 23 07:45:51 crc kubenswrapper[4681]: I1123 07:45:51.251930 4681 scope.go:117] "RemoveContainer" containerID="a4b8f5958195bfaec2b7ee95ad577a1e88aeae4a6622a1aef72875213c225899" Nov 23 07:45:51 crc kubenswrapper[4681]: E1123 07:45:51.252475 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:46:03 crc kubenswrapper[4681]: I1123 07:46:03.256319 4681 scope.go:117] "RemoveContainer" containerID="a4b8f5958195bfaec2b7ee95ad577a1e88aeae4a6622a1aef72875213c225899" Nov 23 07:46:03 crc kubenswrapper[4681]: E1123 07:46:03.257060 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:46:15 crc kubenswrapper[4681]: I1123 07:46:15.252178 4681 scope.go:117] "RemoveContainer" containerID="a4b8f5958195bfaec2b7ee95ad577a1e88aeae4a6622a1aef72875213c225899" Nov 23 07:46:15 crc kubenswrapper[4681]: E1123 07:46:15.252752 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:46:27 crc kubenswrapper[4681]: I1123 07:46:27.251725 4681 scope.go:117] "RemoveContainer" containerID="a4b8f5958195bfaec2b7ee95ad577a1e88aeae4a6622a1aef72875213c225899" Nov 23 07:46:27 crc kubenswrapper[4681]: E1123 07:46:27.252427 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:46:39 crc kubenswrapper[4681]: I1123 07:46:39.251856 4681 scope.go:117] "RemoveContainer" containerID="a4b8f5958195bfaec2b7ee95ad577a1e88aeae4a6622a1aef72875213c225899" Nov 23 07:46:39 crc kubenswrapper[4681]: E1123 07:46:39.252442 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:46:52 crc kubenswrapper[4681]: I1123 07:46:52.252688 4681 scope.go:117] "RemoveContainer" containerID="a4b8f5958195bfaec2b7ee95ad577a1e88aeae4a6622a1aef72875213c225899" Nov 23 07:46:52 crc kubenswrapper[4681]: E1123 07:46:52.254099 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:47:03 crc kubenswrapper[4681]: I1123 07:47:03.257391 4681 scope.go:117] "RemoveContainer" containerID="a4b8f5958195bfaec2b7ee95ad577a1e88aeae4a6622a1aef72875213c225899" Nov 23 07:47:03 crc kubenswrapper[4681]: E1123 07:47:03.258345 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:47:17 crc kubenswrapper[4681]: I1123 07:47:17.251623 4681 scope.go:117] "RemoveContainer" containerID="a4b8f5958195bfaec2b7ee95ad577a1e88aeae4a6622a1aef72875213c225899" Nov 23 07:47:17 crc kubenswrapper[4681]: E1123 07:47:17.252377 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:47:32 crc kubenswrapper[4681]: I1123 07:47:32.252218 4681 scope.go:117] "RemoveContainer" containerID="a4b8f5958195bfaec2b7ee95ad577a1e88aeae4a6622a1aef72875213c225899" Nov 23 07:47:32 crc kubenswrapper[4681]: E1123 07:47:32.253275 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:47:45 crc kubenswrapper[4681]: I1123 07:47:45.251284 4681 
scope.go:117] "RemoveContainer" containerID="a4b8f5958195bfaec2b7ee95ad577a1e88aeae4a6622a1aef72875213c225899" Nov 23 07:47:45 crc kubenswrapper[4681]: E1123 07:47:45.251961 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:47:58 crc kubenswrapper[4681]: I1123 07:47:58.252525 4681 scope.go:117] "RemoveContainer" containerID="a4b8f5958195bfaec2b7ee95ad577a1e88aeae4a6622a1aef72875213c225899" Nov 23 07:47:58 crc kubenswrapper[4681]: E1123 07:47:58.253210 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:48:11 crc kubenswrapper[4681]: I1123 07:48:11.251550 4681 scope.go:117] "RemoveContainer" containerID="a4b8f5958195bfaec2b7ee95ad577a1e88aeae4a6622a1aef72875213c225899" Nov 23 07:48:11 crc kubenswrapper[4681]: E1123 07:48:11.252099 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:48:23 crc kubenswrapper[4681]: I1123 07:48:23.257879 4681 scope.go:117] "RemoveContainer" containerID="a4b8f5958195bfaec2b7ee95ad577a1e88aeae4a6622a1aef72875213c225899" Nov 23 07:48:23 crc kubenswrapper[4681]: E1123 07:48:23.259082 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:48:38 crc kubenswrapper[4681]: I1123 07:48:38.252248 4681 scope.go:117] "RemoveContainer" containerID="a4b8f5958195bfaec2b7ee95ad577a1e88aeae4a6622a1aef72875213c225899" Nov 23 07:48:38 crc kubenswrapper[4681]: E1123 07:48:38.253318 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:48:49 crc kubenswrapper[4681]: I1123 07:48:49.252152 4681 scope.go:117] "RemoveContainer" containerID="a4b8f5958195bfaec2b7ee95ad577a1e88aeae4a6622a1aef72875213c225899" Nov 23 07:48:49 crc kubenswrapper[4681]: E1123 07:48:49.253472 4681 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:49:04 crc kubenswrapper[4681]: I1123 07:49:04.252512 4681 scope.go:117] "RemoveContainer" containerID="a4b8f5958195bfaec2b7ee95ad577a1e88aeae4a6622a1aef72875213c225899" Nov 23 07:49:04 crc kubenswrapper[4681]: E1123 07:49:04.253410 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:49:15 crc kubenswrapper[4681]: I1123 07:49:15.254903 4681 scope.go:117] "RemoveContainer" containerID="a4b8f5958195bfaec2b7ee95ad577a1e88aeae4a6622a1aef72875213c225899" Nov 23 07:49:16 crc kubenswrapper[4681]: I1123 07:49:16.408218 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerStarted","Data":"c12eeef471e596348b9adda95653c36e8cfbc6ca2c0cbfdf1e845281e01e25e6"} Nov 23 07:51:42 crc kubenswrapper[4681]: I1123 07:51:42.295982 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:51:42 crc kubenswrapper[4681]: I1123 07:51:42.298009 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:52:02 crc kubenswrapper[4681]: I1123 07:52:02.379861 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hhpxc"] Nov 23 07:52:02 crc kubenswrapper[4681]: E1123 07:52:02.389826 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90dcf73b-94ea-4db5-bae9-bc368ade1aee" containerName="collect-profiles" Nov 23 07:52:02 crc kubenswrapper[4681]: I1123 07:52:02.389846 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="90dcf73b-94ea-4db5-bae9-bc368ade1aee" containerName="collect-profiles" Nov 23 07:52:02 crc kubenswrapper[4681]: I1123 07:52:02.390019 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="90dcf73b-94ea-4db5-bae9-bc368ade1aee" containerName="collect-profiles" Nov 23 07:52:02 crc kubenswrapper[4681]: I1123 07:52:02.391325 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hhpxc"] Nov 23 07:52:02 crc kubenswrapper[4681]: I1123 07:52:02.391419 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hhpxc" Nov 23 07:52:02 crc kubenswrapper[4681]: I1123 07:52:02.545482 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee732427-9f73-48fe-afbc-8f5d38429184-utilities\") pod \"redhat-operators-hhpxc\" (UID: \"ee732427-9f73-48fe-afbc-8f5d38429184\") " pod="openshift-marketplace/redhat-operators-hhpxc" Nov 23 07:52:02 crc kubenswrapper[4681]: I1123 07:52:02.545929 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee732427-9f73-48fe-afbc-8f5d38429184-catalog-content\") pod \"redhat-operators-hhpxc\" (UID: \"ee732427-9f73-48fe-afbc-8f5d38429184\") " pod="openshift-marketplace/redhat-operators-hhpxc" Nov 23 07:52:02 crc kubenswrapper[4681]: I1123 07:52:02.545975 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj92d\" (UniqueName: \"kubernetes.io/projected/ee732427-9f73-48fe-afbc-8f5d38429184-kube-api-access-hj92d\") pod \"redhat-operators-hhpxc\" (UID: \"ee732427-9f73-48fe-afbc-8f5d38429184\") " pod="openshift-marketplace/redhat-operators-hhpxc" Nov 23 07:52:02 crc kubenswrapper[4681]: I1123 07:52:02.648127 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee732427-9f73-48fe-afbc-8f5d38429184-utilities\") pod \"redhat-operators-hhpxc\" (UID: \"ee732427-9f73-48fe-afbc-8f5d38429184\") " pod="openshift-marketplace/redhat-operators-hhpxc" Nov 23 07:52:02 crc kubenswrapper[4681]: I1123 07:52:02.648182 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee732427-9f73-48fe-afbc-8f5d38429184-catalog-content\") pod \"redhat-operators-hhpxc\" (UID: \"ee732427-9f73-48fe-afbc-8f5d38429184\") " pod="openshift-marketplace/redhat-operators-hhpxc" Nov 23 07:52:02 crc kubenswrapper[4681]: I1123 07:52:02.648215 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj92d\" (UniqueName: \"kubernetes.io/projected/ee732427-9f73-48fe-afbc-8f5d38429184-kube-api-access-hj92d\") pod \"redhat-operators-hhpxc\" (UID: \"ee732427-9f73-48fe-afbc-8f5d38429184\") " pod="openshift-marketplace/redhat-operators-hhpxc" Nov 23 07:52:02 crc kubenswrapper[4681]: I1123 07:52:02.648729 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee732427-9f73-48fe-afbc-8f5d38429184-utilities\") pod \"redhat-operators-hhpxc\" (UID: \"ee732427-9f73-48fe-afbc-8f5d38429184\") " pod="openshift-marketplace/redhat-operators-hhpxc" Nov 23 07:52:02 crc kubenswrapper[4681]: I1123 07:52:02.648779 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee732427-9f73-48fe-afbc-8f5d38429184-catalog-content\") pod \"redhat-operators-hhpxc\" (UID: \"ee732427-9f73-48fe-afbc-8f5d38429184\") " pod="openshift-marketplace/redhat-operators-hhpxc" Nov 23 07:52:02 crc kubenswrapper[4681]: I1123 07:52:02.675383 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj92d\" (UniqueName: \"kubernetes.io/projected/ee732427-9f73-48fe-afbc-8f5d38429184-kube-api-access-hj92d\") pod \"redhat-operators-hhpxc\" (UID: 
\"ee732427-9f73-48fe-afbc-8f5d38429184\") " pod="openshift-marketplace/redhat-operators-hhpxc" Nov 23 07:52:02 crc kubenswrapper[4681]: I1123 07:52:02.714910 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hhpxc" Nov 23 07:52:03 crc kubenswrapper[4681]: I1123 07:52:03.229438 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hhpxc"] Nov 23 07:52:03 crc kubenswrapper[4681]: I1123 07:52:03.627882 4681 generic.go:334] "Generic (PLEG): container finished" podID="ee732427-9f73-48fe-afbc-8f5d38429184" containerID="e470c3a88a2e7558ee9ca309eebdff2c4af8f12917f8cebb6c496da0fa0125e4" exitCode=0 Nov 23 07:52:03 crc kubenswrapper[4681]: I1123 07:52:03.627935 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hhpxc" event={"ID":"ee732427-9f73-48fe-afbc-8f5d38429184","Type":"ContainerDied","Data":"e470c3a88a2e7558ee9ca309eebdff2c4af8f12917f8cebb6c496da0fa0125e4"} Nov 23 07:52:03 crc kubenswrapper[4681]: I1123 07:52:03.627966 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hhpxc" event={"ID":"ee732427-9f73-48fe-afbc-8f5d38429184","Type":"ContainerStarted","Data":"c7a6a4cc297380eab1cb625658c641a3036332677d6caca1cef983918ce99ff9"} Nov 23 07:52:03 crc kubenswrapper[4681]: I1123 07:52:03.630553 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 07:52:04 crc kubenswrapper[4681]: I1123 07:52:04.646781 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hhpxc" event={"ID":"ee732427-9f73-48fe-afbc-8f5d38429184","Type":"ContainerStarted","Data":"3c1be20e03dbd47e715d836e7bf3d463343b491ee3017eabd95958c141823b4a"} Nov 23 07:52:06 crc kubenswrapper[4681]: I1123 07:52:06.665753 4681 generic.go:334] "Generic (PLEG): container finished" podID="ee732427-9f73-48fe-afbc-8f5d38429184" containerID="3c1be20e03dbd47e715d836e7bf3d463343b491ee3017eabd95958c141823b4a" exitCode=0 Nov 23 07:52:06 crc kubenswrapper[4681]: I1123 07:52:06.665859 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hhpxc" event={"ID":"ee732427-9f73-48fe-afbc-8f5d38429184","Type":"ContainerDied","Data":"3c1be20e03dbd47e715d836e7bf3d463343b491ee3017eabd95958c141823b4a"} Nov 23 07:52:07 crc kubenswrapper[4681]: I1123 07:52:07.675735 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hhpxc" event={"ID":"ee732427-9f73-48fe-afbc-8f5d38429184","Type":"ContainerStarted","Data":"69e4acbbc8114e10003ad240e5fb1e3a3ebd10740c7df3a6ecf5188d0cd05d19"} Nov 23 07:52:07 crc kubenswrapper[4681]: I1123 07:52:07.693568 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hhpxc" podStartSLOduration=2.159911919 podStartE2EDuration="5.693542537s" podCreationTimestamp="2025-11-23 07:52:02 +0000 UTC" firstStartedPulling="2025-11-23 07:52:03.630299568 +0000 UTC m=+4060.699808805" lastFinishedPulling="2025-11-23 07:52:07.163930185 +0000 UTC m=+4064.233439423" observedRunningTime="2025-11-23 07:52:07.691860867 +0000 UTC m=+4064.761370095" watchObservedRunningTime="2025-11-23 07:52:07.693542537 +0000 UTC m=+4064.763051774" Nov 23 07:52:12 crc kubenswrapper[4681]: I1123 07:52:12.295382 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:52:12 crc kubenswrapper[4681]: I1123 07:52:12.295936 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:52:12 crc kubenswrapper[4681]: I1123 07:52:12.715884 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hhpxc" Nov 23 07:52:12 crc kubenswrapper[4681]: I1123 07:52:12.716433 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hhpxc" Nov 23 07:52:13 crc kubenswrapper[4681]: I1123 07:52:13.680337 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z9kxd"] Nov 23 07:52:13 crc kubenswrapper[4681]: I1123 07:52:13.682355 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9kxd" Nov 23 07:52:13 crc kubenswrapper[4681]: I1123 07:52:13.686956 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad44ff92-a793-4c48-ad45-691e1c037d5e-catalog-content\") pod \"redhat-marketplace-z9kxd\" (UID: \"ad44ff92-a793-4c48-ad45-691e1c037d5e\") " pod="openshift-marketplace/redhat-marketplace-z9kxd" Nov 23 07:52:13 crc kubenswrapper[4681]: I1123 07:52:13.687050 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad44ff92-a793-4c48-ad45-691e1c037d5e-utilities\") pod \"redhat-marketplace-z9kxd\" (UID: \"ad44ff92-a793-4c48-ad45-691e1c037d5e\") " pod="openshift-marketplace/redhat-marketplace-z9kxd" Nov 23 07:52:13 crc kubenswrapper[4681]: I1123 07:52:13.687075 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvj6m\" (UniqueName: \"kubernetes.io/projected/ad44ff92-a793-4c48-ad45-691e1c037d5e-kube-api-access-mvj6m\") pod \"redhat-marketplace-z9kxd\" (UID: \"ad44ff92-a793-4c48-ad45-691e1c037d5e\") " pod="openshift-marketplace/redhat-marketplace-z9kxd" Nov 23 07:52:13 crc kubenswrapper[4681]: I1123 07:52:13.700594 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9kxd"] Nov 23 07:52:13 crc kubenswrapper[4681]: I1123 07:52:13.750588 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hhpxc" podUID="ee732427-9f73-48fe-afbc-8f5d38429184" containerName="registry-server" probeResult="failure" output=< Nov 23 07:52:13 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Nov 23 07:52:13 crc kubenswrapper[4681]: > Nov 23 07:52:13 crc kubenswrapper[4681]: I1123 07:52:13.788447 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad44ff92-a793-4c48-ad45-691e1c037d5e-utilities\") pod \"redhat-marketplace-z9kxd\" (UID: \"ad44ff92-a793-4c48-ad45-691e1c037d5e\") " pod="openshift-marketplace/redhat-marketplace-z9kxd" Nov 23 07:52:13 crc 
kubenswrapper[4681]: I1123 07:52:13.788508 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvj6m\" (UniqueName: \"kubernetes.io/projected/ad44ff92-a793-4c48-ad45-691e1c037d5e-kube-api-access-mvj6m\") pod \"redhat-marketplace-z9kxd\" (UID: \"ad44ff92-a793-4c48-ad45-691e1c037d5e\") " pod="openshift-marketplace/redhat-marketplace-z9kxd" Nov 23 07:52:13 crc kubenswrapper[4681]: I1123 07:52:13.788616 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad44ff92-a793-4c48-ad45-691e1c037d5e-catalog-content\") pod \"redhat-marketplace-z9kxd\" (UID: \"ad44ff92-a793-4c48-ad45-691e1c037d5e\") " pod="openshift-marketplace/redhat-marketplace-z9kxd" Nov 23 07:52:13 crc kubenswrapper[4681]: I1123 07:52:13.788903 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad44ff92-a793-4c48-ad45-691e1c037d5e-utilities\") pod \"redhat-marketplace-z9kxd\" (UID: \"ad44ff92-a793-4c48-ad45-691e1c037d5e\") " pod="openshift-marketplace/redhat-marketplace-z9kxd" Nov 23 07:52:13 crc kubenswrapper[4681]: I1123 07:52:13.789007 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad44ff92-a793-4c48-ad45-691e1c037d5e-catalog-content\") pod \"redhat-marketplace-z9kxd\" (UID: \"ad44ff92-a793-4c48-ad45-691e1c037d5e\") " pod="openshift-marketplace/redhat-marketplace-z9kxd" Nov 23 07:52:13 crc kubenswrapper[4681]: I1123 07:52:13.808341 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvj6m\" (UniqueName: \"kubernetes.io/projected/ad44ff92-a793-4c48-ad45-691e1c037d5e-kube-api-access-mvj6m\") pod \"redhat-marketplace-z9kxd\" (UID: \"ad44ff92-a793-4c48-ad45-691e1c037d5e\") " pod="openshift-marketplace/redhat-marketplace-z9kxd" Nov 23 07:52:14 crc kubenswrapper[4681]: I1123 07:52:14.004582 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9kxd" Nov 23 07:52:14 crc kubenswrapper[4681]: I1123 07:52:14.698036 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9kxd"] Nov 23 07:52:14 crc kubenswrapper[4681]: I1123 07:52:14.741064 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9kxd" event={"ID":"ad44ff92-a793-4c48-ad45-691e1c037d5e","Type":"ContainerStarted","Data":"c780ec318662437ef210e109bb5b192104554fd7231b340979fbebd430b5b863"} Nov 23 07:52:15 crc kubenswrapper[4681]: I1123 07:52:15.750034 4681 generic.go:334] "Generic (PLEG): container finished" podID="ad44ff92-a793-4c48-ad45-691e1c037d5e" containerID="a298d547b3a594cb2eaa62be791a23059242676bbc4cde660482783e2ee0c164" exitCode=0 Nov 23 07:52:15 crc kubenswrapper[4681]: I1123 07:52:15.750372 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9kxd" event={"ID":"ad44ff92-a793-4c48-ad45-691e1c037d5e","Type":"ContainerDied","Data":"a298d547b3a594cb2eaa62be791a23059242676bbc4cde660482783e2ee0c164"} Nov 23 07:52:16 crc kubenswrapper[4681]: I1123 07:52:16.758675 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9kxd" event={"ID":"ad44ff92-a793-4c48-ad45-691e1c037d5e","Type":"ContainerStarted","Data":"7e8c4da544c148077285531cd24a69232aacae79b69e5f9f920a85b106985150"} Nov 23 07:52:17 crc kubenswrapper[4681]: I1123 07:52:17.701876 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8c6cj"] Nov 23 07:52:17 crc kubenswrapper[4681]: I1123 07:52:17.705150 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8c6cj" Nov 23 07:52:17 crc kubenswrapper[4681]: I1123 07:52:17.710524 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8c6cj"] Nov 23 07:52:17 crc kubenswrapper[4681]: I1123 07:52:17.760218 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr94w\" (UniqueName: \"kubernetes.io/projected/d712be3b-ab3a-4c19-aa98-12fad0516e65-kube-api-access-vr94w\") pod \"certified-operators-8c6cj\" (UID: \"d712be3b-ab3a-4c19-aa98-12fad0516e65\") " pod="openshift-marketplace/certified-operators-8c6cj" Nov 23 07:52:17 crc kubenswrapper[4681]: I1123 07:52:17.760278 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d712be3b-ab3a-4c19-aa98-12fad0516e65-utilities\") pod \"certified-operators-8c6cj\" (UID: \"d712be3b-ab3a-4c19-aa98-12fad0516e65\") " pod="openshift-marketplace/certified-operators-8c6cj" Nov 23 07:52:17 crc kubenswrapper[4681]: I1123 07:52:17.760346 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d712be3b-ab3a-4c19-aa98-12fad0516e65-catalog-content\") pod \"certified-operators-8c6cj\" (UID: \"d712be3b-ab3a-4c19-aa98-12fad0516e65\") " pod="openshift-marketplace/certified-operators-8c6cj" Nov 23 07:52:17 crc kubenswrapper[4681]: I1123 07:52:17.774703 4681 generic.go:334] "Generic (PLEG): container finished" podID="ad44ff92-a793-4c48-ad45-691e1c037d5e" containerID="7e8c4da544c148077285531cd24a69232aacae79b69e5f9f920a85b106985150" exitCode=0 Nov 23 07:52:17 crc kubenswrapper[4681]: 
I1123 07:52:17.774735 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9kxd" event={"ID":"ad44ff92-a793-4c48-ad45-691e1c037d5e","Type":"ContainerDied","Data":"7e8c4da544c148077285531cd24a69232aacae79b69e5f9f920a85b106985150"} Nov 23 07:52:17 crc kubenswrapper[4681]: I1123 07:52:17.863016 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr94w\" (UniqueName: \"kubernetes.io/projected/d712be3b-ab3a-4c19-aa98-12fad0516e65-kube-api-access-vr94w\") pod \"certified-operators-8c6cj\" (UID: \"d712be3b-ab3a-4c19-aa98-12fad0516e65\") " pod="openshift-marketplace/certified-operators-8c6cj" Nov 23 07:52:17 crc kubenswrapper[4681]: I1123 07:52:17.863121 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d712be3b-ab3a-4c19-aa98-12fad0516e65-utilities\") pod \"certified-operators-8c6cj\" (UID: \"d712be3b-ab3a-4c19-aa98-12fad0516e65\") " pod="openshift-marketplace/certified-operators-8c6cj" Nov 23 07:52:17 crc kubenswrapper[4681]: I1123 07:52:17.863208 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d712be3b-ab3a-4c19-aa98-12fad0516e65-catalog-content\") pod \"certified-operators-8c6cj\" (UID: \"d712be3b-ab3a-4c19-aa98-12fad0516e65\") " pod="openshift-marketplace/certified-operators-8c6cj" Nov 23 07:52:17 crc kubenswrapper[4681]: I1123 07:52:17.864600 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d712be3b-ab3a-4c19-aa98-12fad0516e65-catalog-content\") pod \"certified-operators-8c6cj\" (UID: \"d712be3b-ab3a-4c19-aa98-12fad0516e65\") " pod="openshift-marketplace/certified-operators-8c6cj" Nov 23 07:52:17 crc kubenswrapper[4681]: I1123 07:52:17.865665 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d712be3b-ab3a-4c19-aa98-12fad0516e65-utilities\") pod \"certified-operators-8c6cj\" (UID: \"d712be3b-ab3a-4c19-aa98-12fad0516e65\") " pod="openshift-marketplace/certified-operators-8c6cj" Nov 23 07:52:17 crc kubenswrapper[4681]: I1123 07:52:17.887222 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr94w\" (UniqueName: \"kubernetes.io/projected/d712be3b-ab3a-4c19-aa98-12fad0516e65-kube-api-access-vr94w\") pod \"certified-operators-8c6cj\" (UID: \"d712be3b-ab3a-4c19-aa98-12fad0516e65\") " pod="openshift-marketplace/certified-operators-8c6cj" Nov 23 07:52:18 crc kubenswrapper[4681]: I1123 07:52:18.037210 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8c6cj" Nov 23 07:52:18 crc kubenswrapper[4681]: I1123 07:52:18.564683 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8c6cj"] Nov 23 07:52:18 crc kubenswrapper[4681]: I1123 07:52:18.783148 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9kxd" event={"ID":"ad44ff92-a793-4c48-ad45-691e1c037d5e","Type":"ContainerStarted","Data":"a5cb03f7f30b745295c17444515db0632134676f0f37a7727390e84cba235f10"} Nov 23 07:52:18 crc kubenswrapper[4681]: I1123 07:52:18.785236 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8c6cj" event={"ID":"d712be3b-ab3a-4c19-aa98-12fad0516e65","Type":"ContainerStarted","Data":"a37bad788be47717e3c543c4d1d67646e78f3a35b7f33b8faa43e577dddf8f66"} Nov 23 07:52:18 crc kubenswrapper[4681]: I1123 07:52:18.804026 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z9kxd" podStartSLOduration=3.250385276 podStartE2EDuration="5.804012374s" podCreationTimestamp="2025-11-23 07:52:13 +0000 UTC" firstStartedPulling="2025-11-23 07:52:15.751942109 +0000 UTC m=+4072.821451346" lastFinishedPulling="2025-11-23 07:52:18.305569217 +0000 UTC m=+4075.375078444" observedRunningTime="2025-11-23 07:52:18.799403466 +0000 UTC m=+4075.868912703" watchObservedRunningTime="2025-11-23 07:52:18.804012374 +0000 UTC m=+4075.873521611" Nov 23 07:52:19 crc kubenswrapper[4681]: I1123 07:52:19.792616 4681 generic.go:334] "Generic (PLEG): container finished" podID="d712be3b-ab3a-4c19-aa98-12fad0516e65" containerID="2c43ae0453025b586b7904f1f36065e89034056b386740a7a7a1ca4ce146f2dd" exitCode=0 Nov 23 07:52:19 crc kubenswrapper[4681]: I1123 07:52:19.792726 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8c6cj" event={"ID":"d712be3b-ab3a-4c19-aa98-12fad0516e65","Type":"ContainerDied","Data":"2c43ae0453025b586b7904f1f36065e89034056b386740a7a7a1ca4ce146f2dd"} Nov 23 07:52:20 crc kubenswrapper[4681]: I1123 07:52:20.802028 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8c6cj" event={"ID":"d712be3b-ab3a-4c19-aa98-12fad0516e65","Type":"ContainerStarted","Data":"aed0f731fd995e0a0fd6d7b496e797371b6704e4573ff74f30b78d77743a3d3b"} Nov 23 07:52:21 crc kubenswrapper[4681]: I1123 07:52:21.813191 4681 generic.go:334] "Generic (PLEG): container finished" podID="d712be3b-ab3a-4c19-aa98-12fad0516e65" containerID="aed0f731fd995e0a0fd6d7b496e797371b6704e4573ff74f30b78d77743a3d3b" exitCode=0 Nov 23 07:52:21 crc kubenswrapper[4681]: I1123 07:52:21.813916 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8c6cj" event={"ID":"d712be3b-ab3a-4c19-aa98-12fad0516e65","Type":"ContainerDied","Data":"aed0f731fd995e0a0fd6d7b496e797371b6704e4573ff74f30b78d77743a3d3b"} Nov 23 07:52:22 crc kubenswrapper[4681]: I1123 07:52:22.754285 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hhpxc" Nov 23 07:52:22 crc kubenswrapper[4681]: I1123 07:52:22.803545 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hhpxc" Nov 23 07:52:22 crc kubenswrapper[4681]: I1123 07:52:22.822393 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-8c6cj" event={"ID":"d712be3b-ab3a-4c19-aa98-12fad0516e65","Type":"ContainerStarted","Data":"4dbe82ec5e471a16954e935ccf3b0fea9c946a392c7bc8406f61ba4a1fda0dd6"} Nov 23 07:52:22 crc kubenswrapper[4681]: I1123 07:52:22.839425 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8c6cj" podStartSLOduration=3.342039441 podStartE2EDuration="5.839408991s" podCreationTimestamp="2025-11-23 07:52:17 +0000 UTC" firstStartedPulling="2025-11-23 07:52:19.794625703 +0000 UTC m=+4076.864134931" lastFinishedPulling="2025-11-23 07:52:22.291995244 +0000 UTC m=+4079.361504481" observedRunningTime="2025-11-23 07:52:22.834263011 +0000 UTC m=+4079.903772248" watchObservedRunningTime="2025-11-23 07:52:22.839408991 +0000 UTC m=+4079.908918228" Nov 23 07:52:24 crc kubenswrapper[4681]: I1123 07:52:24.005244 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z9kxd" Nov 23 07:52:24 crc kubenswrapper[4681]: I1123 07:52:24.005530 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-z9kxd" Nov 23 07:52:24 crc kubenswrapper[4681]: I1123 07:52:24.043284 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z9kxd" Nov 23 07:52:24 crc kubenswrapper[4681]: I1123 07:52:24.870622 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z9kxd" Nov 23 07:52:25 crc kubenswrapper[4681]: I1123 07:52:25.074302 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hhpxc"] Nov 23 07:52:25 crc kubenswrapper[4681]: I1123 07:52:25.076398 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hhpxc" podUID="ee732427-9f73-48fe-afbc-8f5d38429184" containerName="registry-server" containerID="cri-o://69e4acbbc8114e10003ad240e5fb1e3a3ebd10740c7df3a6ecf5188d0cd05d19" gracePeriod=2 Nov 23 07:52:25 crc kubenswrapper[4681]: I1123 07:52:25.493065 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hhpxc" Nov 23 07:52:25 crc kubenswrapper[4681]: I1123 07:52:25.617321 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj92d\" (UniqueName: \"kubernetes.io/projected/ee732427-9f73-48fe-afbc-8f5d38429184-kube-api-access-hj92d\") pod \"ee732427-9f73-48fe-afbc-8f5d38429184\" (UID: \"ee732427-9f73-48fe-afbc-8f5d38429184\") " Nov 23 07:52:25 crc kubenswrapper[4681]: I1123 07:52:25.617401 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee732427-9f73-48fe-afbc-8f5d38429184-utilities\") pod \"ee732427-9f73-48fe-afbc-8f5d38429184\" (UID: \"ee732427-9f73-48fe-afbc-8f5d38429184\") " Nov 23 07:52:25 crc kubenswrapper[4681]: I1123 07:52:25.617486 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee732427-9f73-48fe-afbc-8f5d38429184-catalog-content\") pod \"ee732427-9f73-48fe-afbc-8f5d38429184\" (UID: \"ee732427-9f73-48fe-afbc-8f5d38429184\") " Nov 23 07:52:25 crc kubenswrapper[4681]: I1123 07:52:25.620553 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee732427-9f73-48fe-afbc-8f5d38429184-utilities" (OuterVolumeSpecName: "utilities") pod "ee732427-9f73-48fe-afbc-8f5d38429184" (UID: "ee732427-9f73-48fe-afbc-8f5d38429184"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:52:25 crc kubenswrapper[4681]: I1123 07:52:25.629082 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee732427-9f73-48fe-afbc-8f5d38429184-kube-api-access-hj92d" (OuterVolumeSpecName: "kube-api-access-hj92d") pod "ee732427-9f73-48fe-afbc-8f5d38429184" (UID: "ee732427-9f73-48fe-afbc-8f5d38429184"). InnerVolumeSpecName "kube-api-access-hj92d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:52:25 crc kubenswrapper[4681]: I1123 07:52:25.700373 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee732427-9f73-48fe-afbc-8f5d38429184-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee732427-9f73-48fe-afbc-8f5d38429184" (UID: "ee732427-9f73-48fe-afbc-8f5d38429184"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:52:25 crc kubenswrapper[4681]: I1123 07:52:25.719812 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee732427-9f73-48fe-afbc-8f5d38429184-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:52:25 crc kubenswrapper[4681]: I1123 07:52:25.719845 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj92d\" (UniqueName: \"kubernetes.io/projected/ee732427-9f73-48fe-afbc-8f5d38429184-kube-api-access-hj92d\") on node \"crc\" DevicePath \"\"" Nov 23 07:52:25 crc kubenswrapper[4681]: I1123 07:52:25.719859 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee732427-9f73-48fe-afbc-8f5d38429184-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:52:25 crc kubenswrapper[4681]: I1123 07:52:25.844051 4681 generic.go:334] "Generic (PLEG): container finished" podID="ee732427-9f73-48fe-afbc-8f5d38429184" containerID="69e4acbbc8114e10003ad240e5fb1e3a3ebd10740c7df3a6ecf5188d0cd05d19" exitCode=0 Nov 23 07:52:25 crc kubenswrapper[4681]: I1123 07:52:25.844109 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hhpxc" Nov 23 07:52:25 crc kubenswrapper[4681]: I1123 07:52:25.844169 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hhpxc" event={"ID":"ee732427-9f73-48fe-afbc-8f5d38429184","Type":"ContainerDied","Data":"69e4acbbc8114e10003ad240e5fb1e3a3ebd10740c7df3a6ecf5188d0cd05d19"} Nov 23 07:52:25 crc kubenswrapper[4681]: I1123 07:52:25.844221 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hhpxc" event={"ID":"ee732427-9f73-48fe-afbc-8f5d38429184","Type":"ContainerDied","Data":"c7a6a4cc297380eab1cb625658c641a3036332677d6caca1cef983918ce99ff9"} Nov 23 07:52:25 crc kubenswrapper[4681]: I1123 07:52:25.844241 4681 scope.go:117] "RemoveContainer" containerID="69e4acbbc8114e10003ad240e5fb1e3a3ebd10740c7df3a6ecf5188d0cd05d19" Nov 23 07:52:25 crc kubenswrapper[4681]: I1123 07:52:25.864178 4681 scope.go:117] "RemoveContainer" containerID="3c1be20e03dbd47e715d836e7bf3d463343b491ee3017eabd95958c141823b4a" Nov 23 07:52:25 crc kubenswrapper[4681]: I1123 07:52:25.872092 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hhpxc"] Nov 23 07:52:25 crc kubenswrapper[4681]: I1123 07:52:25.879550 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hhpxc"] Nov 23 07:52:25 crc kubenswrapper[4681]: I1123 07:52:25.895410 4681 scope.go:117] "RemoveContainer" containerID="e470c3a88a2e7558ee9ca309eebdff2c4af8f12917f8cebb6c496da0fa0125e4" Nov 23 07:52:25 crc kubenswrapper[4681]: I1123 07:52:25.926040 4681 scope.go:117] "RemoveContainer" containerID="69e4acbbc8114e10003ad240e5fb1e3a3ebd10740c7df3a6ecf5188d0cd05d19" Nov 23 07:52:25 crc kubenswrapper[4681]: E1123 07:52:25.927857 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69e4acbbc8114e10003ad240e5fb1e3a3ebd10740c7df3a6ecf5188d0cd05d19\": container with ID starting with 69e4acbbc8114e10003ad240e5fb1e3a3ebd10740c7df3a6ecf5188d0cd05d19 not found: ID does not exist" containerID="69e4acbbc8114e10003ad240e5fb1e3a3ebd10740c7df3a6ecf5188d0cd05d19" Nov 23 07:52:25 crc kubenswrapper[4681]: I1123 07:52:25.927910 4681 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69e4acbbc8114e10003ad240e5fb1e3a3ebd10740c7df3a6ecf5188d0cd05d19"} err="failed to get container status \"69e4acbbc8114e10003ad240e5fb1e3a3ebd10740c7df3a6ecf5188d0cd05d19\": rpc error: code = NotFound desc = could not find container \"69e4acbbc8114e10003ad240e5fb1e3a3ebd10740c7df3a6ecf5188d0cd05d19\": container with ID starting with 69e4acbbc8114e10003ad240e5fb1e3a3ebd10740c7df3a6ecf5188d0cd05d19 not found: ID does not exist" Nov 23 07:52:25 crc kubenswrapper[4681]: I1123 07:52:25.927934 4681 scope.go:117] "RemoveContainer" containerID="3c1be20e03dbd47e715d836e7bf3d463343b491ee3017eabd95958c141823b4a" Nov 23 07:52:25 crc kubenswrapper[4681]: E1123 07:52:25.928431 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c1be20e03dbd47e715d836e7bf3d463343b491ee3017eabd95958c141823b4a\": container with ID starting with 3c1be20e03dbd47e715d836e7bf3d463343b491ee3017eabd95958c141823b4a not found: ID does not exist" containerID="3c1be20e03dbd47e715d836e7bf3d463343b491ee3017eabd95958c141823b4a" Nov 23 07:52:25 crc kubenswrapper[4681]: I1123 07:52:25.928495 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c1be20e03dbd47e715d836e7bf3d463343b491ee3017eabd95958c141823b4a"} err="failed to get container status \"3c1be20e03dbd47e715d836e7bf3d463343b491ee3017eabd95958c141823b4a\": rpc error: code = NotFound desc = could not find container \"3c1be20e03dbd47e715d836e7bf3d463343b491ee3017eabd95958c141823b4a\": container with ID starting with 3c1be20e03dbd47e715d836e7bf3d463343b491ee3017eabd95958c141823b4a not found: ID does not exist" Nov 23 07:52:25 crc kubenswrapper[4681]: I1123 07:52:25.928532 4681 scope.go:117] "RemoveContainer" containerID="e470c3a88a2e7558ee9ca309eebdff2c4af8f12917f8cebb6c496da0fa0125e4" Nov 23 07:52:25 crc kubenswrapper[4681]: E1123 07:52:25.928792 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e470c3a88a2e7558ee9ca309eebdff2c4af8f12917f8cebb6c496da0fa0125e4\": container with ID starting with e470c3a88a2e7558ee9ca309eebdff2c4af8f12917f8cebb6c496da0fa0125e4 not found: ID does not exist" containerID="e470c3a88a2e7558ee9ca309eebdff2c4af8f12917f8cebb6c496da0fa0125e4" Nov 23 07:52:25 crc kubenswrapper[4681]: I1123 07:52:25.928821 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e470c3a88a2e7558ee9ca309eebdff2c4af8f12917f8cebb6c496da0fa0125e4"} err="failed to get container status \"e470c3a88a2e7558ee9ca309eebdff2c4af8f12917f8cebb6c496da0fa0125e4\": rpc error: code = NotFound desc = could not find container \"e470c3a88a2e7558ee9ca309eebdff2c4af8f12917f8cebb6c496da0fa0125e4\": container with ID starting with e470c3a88a2e7558ee9ca309eebdff2c4af8f12917f8cebb6c496da0fa0125e4 not found: ID does not exist" Nov 23 07:52:27 crc kubenswrapper[4681]: I1123 07:52:27.260638 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee732427-9f73-48fe-afbc-8f5d38429184" path="/var/lib/kubelet/pods/ee732427-9f73-48fe-afbc-8f5d38429184/volumes" Nov 23 07:52:27 crc kubenswrapper[4681]: I1123 07:52:27.466923 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9kxd"] Nov 23 07:52:27 crc kubenswrapper[4681]: I1123 07:52:27.467338 4681 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/redhat-marketplace-z9kxd" podUID="ad44ff92-a793-4c48-ad45-691e1c037d5e" containerName="registry-server" containerID="cri-o://a5cb03f7f30b745295c17444515db0632134676f0f37a7727390e84cba235f10" gracePeriod=2 Nov 23 07:52:27 crc kubenswrapper[4681]: I1123 07:52:27.868735 4681 generic.go:334] "Generic (PLEG): container finished" podID="ad44ff92-a793-4c48-ad45-691e1c037d5e" containerID="a5cb03f7f30b745295c17444515db0632134676f0f37a7727390e84cba235f10" exitCode=0 Nov 23 07:52:27 crc kubenswrapper[4681]: I1123 07:52:27.868782 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9kxd" event={"ID":"ad44ff92-a793-4c48-ad45-691e1c037d5e","Type":"ContainerDied","Data":"a5cb03f7f30b745295c17444515db0632134676f0f37a7727390e84cba235f10"} Nov 23 07:52:27 crc kubenswrapper[4681]: I1123 07:52:27.868833 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9kxd" event={"ID":"ad44ff92-a793-4c48-ad45-691e1c037d5e","Type":"ContainerDied","Data":"c780ec318662437ef210e109bb5b192104554fd7231b340979fbebd430b5b863"} Nov 23 07:52:27 crc kubenswrapper[4681]: I1123 07:52:27.868845 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c780ec318662437ef210e109bb5b192104554fd7231b340979fbebd430b5b863" Nov 23 07:52:27 crc kubenswrapper[4681]: I1123 07:52:27.888402 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9kxd" Nov 23 07:52:27 crc kubenswrapper[4681]: I1123 07:52:27.965589 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvj6m\" (UniqueName: \"kubernetes.io/projected/ad44ff92-a793-4c48-ad45-691e1c037d5e-kube-api-access-mvj6m\") pod \"ad44ff92-a793-4c48-ad45-691e1c037d5e\" (UID: \"ad44ff92-a793-4c48-ad45-691e1c037d5e\") " Nov 23 07:52:27 crc kubenswrapper[4681]: I1123 07:52:27.965717 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad44ff92-a793-4c48-ad45-691e1c037d5e-utilities\") pod \"ad44ff92-a793-4c48-ad45-691e1c037d5e\" (UID: \"ad44ff92-a793-4c48-ad45-691e1c037d5e\") " Nov 23 07:52:27 crc kubenswrapper[4681]: I1123 07:52:27.965754 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad44ff92-a793-4c48-ad45-691e1c037d5e-catalog-content\") pod \"ad44ff92-a793-4c48-ad45-691e1c037d5e\" (UID: \"ad44ff92-a793-4c48-ad45-691e1c037d5e\") " Nov 23 07:52:27 crc kubenswrapper[4681]: I1123 07:52:27.966784 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad44ff92-a793-4c48-ad45-691e1c037d5e-utilities" (OuterVolumeSpecName: "utilities") pod "ad44ff92-a793-4c48-ad45-691e1c037d5e" (UID: "ad44ff92-a793-4c48-ad45-691e1c037d5e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:52:27 crc kubenswrapper[4681]: I1123 07:52:27.970875 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad44ff92-a793-4c48-ad45-691e1c037d5e-kube-api-access-mvj6m" (OuterVolumeSpecName: "kube-api-access-mvj6m") pod "ad44ff92-a793-4c48-ad45-691e1c037d5e" (UID: "ad44ff92-a793-4c48-ad45-691e1c037d5e"). InnerVolumeSpecName "kube-api-access-mvj6m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:52:27 crc kubenswrapper[4681]: I1123 07:52:27.980690 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad44ff92-a793-4c48-ad45-691e1c037d5e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad44ff92-a793-4c48-ad45-691e1c037d5e" (UID: "ad44ff92-a793-4c48-ad45-691e1c037d5e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:52:28 crc kubenswrapper[4681]: I1123 07:52:28.037755 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8c6cj" Nov 23 07:52:28 crc kubenswrapper[4681]: I1123 07:52:28.037810 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8c6cj" Nov 23 07:52:28 crc kubenswrapper[4681]: I1123 07:52:28.067860 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvj6m\" (UniqueName: \"kubernetes.io/projected/ad44ff92-a793-4c48-ad45-691e1c037d5e-kube-api-access-mvj6m\") on node \"crc\" DevicePath \"\"" Nov 23 07:52:28 crc kubenswrapper[4681]: I1123 07:52:28.067889 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad44ff92-a793-4c48-ad45-691e1c037d5e-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:52:28 crc kubenswrapper[4681]: I1123 07:52:28.067900 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad44ff92-a793-4c48-ad45-691e1c037d5e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:52:28 crc kubenswrapper[4681]: I1123 07:52:28.082605 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8c6cj" Nov 23 07:52:28 crc kubenswrapper[4681]: I1123 07:52:28.875270 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9kxd" Nov 23 07:52:28 crc kubenswrapper[4681]: I1123 07:52:28.930620 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8c6cj" Nov 23 07:52:28 crc kubenswrapper[4681]: I1123 07:52:28.935800 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9kxd"] Nov 23 07:52:28 crc kubenswrapper[4681]: I1123 07:52:28.949279 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9kxd"] Nov 23 07:52:29 crc kubenswrapper[4681]: I1123 07:52:29.259627 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad44ff92-a793-4c48-ad45-691e1c037d5e" path="/var/lib/kubelet/pods/ad44ff92-a793-4c48-ad45-691e1c037d5e/volumes" Nov 23 07:52:31 crc kubenswrapper[4681]: I1123 07:52:31.665102 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8c6cj"] Nov 23 07:52:31 crc kubenswrapper[4681]: I1123 07:52:31.665612 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8c6cj" podUID="d712be3b-ab3a-4c19-aa98-12fad0516e65" containerName="registry-server" containerID="cri-o://4dbe82ec5e471a16954e935ccf3b0fea9c946a392c7bc8406f61ba4a1fda0dd6" gracePeriod=2 Nov 23 07:52:31 crc kubenswrapper[4681]: I1123 07:52:31.901165 4681 generic.go:334] "Generic (PLEG): container finished" podID="d712be3b-ab3a-4c19-aa98-12fad0516e65" containerID="4dbe82ec5e471a16954e935ccf3b0fea9c946a392c7bc8406f61ba4a1fda0dd6" exitCode=0 Nov 23 07:52:31 crc kubenswrapper[4681]: I1123 07:52:31.901204 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8c6cj" event={"ID":"d712be3b-ab3a-4c19-aa98-12fad0516e65","Type":"ContainerDied","Data":"4dbe82ec5e471a16954e935ccf3b0fea9c946a392c7bc8406f61ba4a1fda0dd6"} Nov 23 07:52:32 crc kubenswrapper[4681]: I1123 07:52:32.100525 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8c6cj" Nov 23 07:52:32 crc kubenswrapper[4681]: I1123 07:52:32.147939 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d712be3b-ab3a-4c19-aa98-12fad0516e65-catalog-content\") pod \"d712be3b-ab3a-4c19-aa98-12fad0516e65\" (UID: \"d712be3b-ab3a-4c19-aa98-12fad0516e65\") " Nov 23 07:52:32 crc kubenswrapper[4681]: I1123 07:52:32.148035 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr94w\" (UniqueName: \"kubernetes.io/projected/d712be3b-ab3a-4c19-aa98-12fad0516e65-kube-api-access-vr94w\") pod \"d712be3b-ab3a-4c19-aa98-12fad0516e65\" (UID: \"d712be3b-ab3a-4c19-aa98-12fad0516e65\") " Nov 23 07:52:32 crc kubenswrapper[4681]: I1123 07:52:32.148104 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d712be3b-ab3a-4c19-aa98-12fad0516e65-utilities\") pod \"d712be3b-ab3a-4c19-aa98-12fad0516e65\" (UID: \"d712be3b-ab3a-4c19-aa98-12fad0516e65\") " Nov 23 07:52:32 crc kubenswrapper[4681]: I1123 07:52:32.148904 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d712be3b-ab3a-4c19-aa98-12fad0516e65-utilities" (OuterVolumeSpecName: "utilities") pod "d712be3b-ab3a-4c19-aa98-12fad0516e65" (UID: "d712be3b-ab3a-4c19-aa98-12fad0516e65"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:52:32 crc kubenswrapper[4681]: I1123 07:52:32.152433 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d712be3b-ab3a-4c19-aa98-12fad0516e65-kube-api-access-vr94w" (OuterVolumeSpecName: "kube-api-access-vr94w") pod "d712be3b-ab3a-4c19-aa98-12fad0516e65" (UID: "d712be3b-ab3a-4c19-aa98-12fad0516e65"). InnerVolumeSpecName "kube-api-access-vr94w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:52:32 crc kubenswrapper[4681]: I1123 07:52:32.184439 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d712be3b-ab3a-4c19-aa98-12fad0516e65-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d712be3b-ab3a-4c19-aa98-12fad0516e65" (UID: "d712be3b-ab3a-4c19-aa98-12fad0516e65"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:52:32 crc kubenswrapper[4681]: I1123 07:52:32.250730 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d712be3b-ab3a-4c19-aa98-12fad0516e65-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:52:32 crc kubenswrapper[4681]: I1123 07:52:32.250759 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr94w\" (UniqueName: \"kubernetes.io/projected/d712be3b-ab3a-4c19-aa98-12fad0516e65-kube-api-access-vr94w\") on node \"crc\" DevicePath \"\"" Nov 23 07:52:32 crc kubenswrapper[4681]: I1123 07:52:32.250772 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d712be3b-ab3a-4c19-aa98-12fad0516e65-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:52:32 crc kubenswrapper[4681]: I1123 07:52:32.910131 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8c6cj" event={"ID":"d712be3b-ab3a-4c19-aa98-12fad0516e65","Type":"ContainerDied","Data":"a37bad788be47717e3c543c4d1d67646e78f3a35b7f33b8faa43e577dddf8f66"} Nov 23 07:52:32 crc kubenswrapper[4681]: I1123 07:52:32.910167 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8c6cj" Nov 23 07:52:32 crc kubenswrapper[4681]: I1123 07:52:32.911155 4681 scope.go:117] "RemoveContainer" containerID="4dbe82ec5e471a16954e935ccf3b0fea9c946a392c7bc8406f61ba4a1fda0dd6" Nov 23 07:52:32 crc kubenswrapper[4681]: I1123 07:52:32.937185 4681 scope.go:117] "RemoveContainer" containerID="aed0f731fd995e0a0fd6d7b496e797371b6704e4573ff74f30b78d77743a3d3b" Nov 23 07:52:32 crc kubenswrapper[4681]: I1123 07:52:32.937279 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8c6cj"] Nov 23 07:52:32 crc kubenswrapper[4681]: I1123 07:52:32.944999 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8c6cj"] Nov 23 07:52:32 crc kubenswrapper[4681]: I1123 07:52:32.954582 4681 scope.go:117] "RemoveContainer" containerID="2c43ae0453025b586b7904f1f36065e89034056b386740a7a7a1ca4ce146f2dd" Nov 23 07:52:33 crc kubenswrapper[4681]: I1123 07:52:33.263325 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d712be3b-ab3a-4c19-aa98-12fad0516e65" path="/var/lib/kubelet/pods/d712be3b-ab3a-4c19-aa98-12fad0516e65/volumes" Nov 23 07:52:42 crc kubenswrapper[4681]: I1123 07:52:42.295239 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:52:42 crc kubenswrapper[4681]: I1123 07:52:42.295875 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:52:42 crc kubenswrapper[4681]: I1123 07:52:42.295917 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" Nov 23 07:52:42 crc kubenswrapper[4681]: I1123 07:52:42.296919 4681 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c12eeef471e596348b9adda95653c36e8cfbc6ca2c0cbfdf1e845281e01e25e6"} pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 07:52:42 crc kubenswrapper[4681]: I1123 07:52:42.296979 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" containerID="cri-o://c12eeef471e596348b9adda95653c36e8cfbc6ca2c0cbfdf1e845281e01e25e6" gracePeriod=600 Nov 23 07:52:42 crc kubenswrapper[4681]: I1123 07:52:42.992820 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerDied","Data":"c12eeef471e596348b9adda95653c36e8cfbc6ca2c0cbfdf1e845281e01e25e6"} Nov 23 07:52:42 crc kubenswrapper[4681]: I1123 07:52:42.993932 4681 scope.go:117] "RemoveContainer" containerID="a4b8f5958195bfaec2b7ee95ad577a1e88aeae4a6622a1aef72875213c225899" Nov 23 07:52:42 crc kubenswrapper[4681]: I1123 07:52:42.992861 4681 generic.go:334] "Generic (PLEG): container finished" podID="539dc58c-e752-43c8-bdef-af87528b76f3" containerID="c12eeef471e596348b9adda95653c36e8cfbc6ca2c0cbfdf1e845281e01e25e6" exitCode=0 Nov 23 07:52:42 crc kubenswrapper[4681]: I1123 07:52:42.994050 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerStarted","Data":"8ecb71e0782ffdd11df2420ddd61c63edf18bad75a2e31f833a8ec36c1f22137"} Nov 23 07:54:42 crc kubenswrapper[4681]: I1123 07:54:42.295386 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:54:42 crc kubenswrapper[4681]: I1123 07:54:42.295772 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:55:12 crc kubenswrapper[4681]: I1123 07:55:12.295791 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:55:12 crc kubenswrapper[4681]: I1123 07:55:12.296646 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:55:42 crc kubenswrapper[4681]: I1123 07:55:42.295452 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:55:42 crc kubenswrapper[4681]: I1123 07:55:42.296697 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:55:42 crc kubenswrapper[4681]: I1123 07:55:42.296822 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" Nov 23 07:55:42 crc kubenswrapper[4681]: I1123 07:55:42.297568 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8ecb71e0782ffdd11df2420ddd61c63edf18bad75a2e31f833a8ec36c1f22137"} pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 07:55:42 crc kubenswrapper[4681]: I1123 07:55:42.297713 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" containerID="cri-o://8ecb71e0782ffdd11df2420ddd61c63edf18bad75a2e31f833a8ec36c1f22137" gracePeriod=600 Nov 23 07:55:42 crc kubenswrapper[4681]: E1123 07:55:42.436492 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:55:43 crc kubenswrapper[4681]: I1123 07:55:43.280181 4681 generic.go:334] "Generic (PLEG): container finished" podID="539dc58c-e752-43c8-bdef-af87528b76f3" containerID="8ecb71e0782ffdd11df2420ddd61c63edf18bad75a2e31f833a8ec36c1f22137" exitCode=0 Nov 23 07:55:43 crc kubenswrapper[4681]: I1123 07:55:43.280341 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerDied","Data":"8ecb71e0782ffdd11df2420ddd61c63edf18bad75a2e31f833a8ec36c1f22137"} Nov 23 07:55:43 crc kubenswrapper[4681]: I1123 07:55:43.280635 4681 scope.go:117] "RemoveContainer" containerID="c12eeef471e596348b9adda95653c36e8cfbc6ca2c0cbfdf1e845281e01e25e6" Nov 23 07:55:43 crc kubenswrapper[4681]: I1123 07:55:43.281111 4681 scope.go:117] "RemoveContainer" containerID="8ecb71e0782ffdd11df2420ddd61c63edf18bad75a2e31f833a8ec36c1f22137" Nov 23 07:55:43 crc kubenswrapper[4681]: E1123 07:55:43.281361 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" 
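
The entries above are the kubelet's liveness-probe remediation loop for machine-config-daemon: the HTTP probe to 127.0.0.1:8798/health keeps failing with "connection refused", the container is killed with a 600s grace period and restarted (07:52:42), and after further failures at 07:54:42, 07:55:12 and 07:55:42 (a cadence consistent with a 30-second probe period) the next restart is throttled into CrashLoopBackOff at the 5m0s cap. The sketch below illustrates both mechanisms; it is a minimal illustration of documented kubelet behavior (exponential restart back-off starting at 10s, doubling per restart, capped at five minutes), not the kubelet's own implementation, and probeOnce and crashLoopDelays are hypothetical helpers.

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // probeOnce performs one HTTP liveness check in the spirit of the
    // probe above. Simplification: any transport error (such as the
    // "connect: connection refused" in the log) or a status outside
    // 200-399 counts as a probe failure.
    func probeOnce(url string, timeout time.Duration) error {
        client := &http.Client{Timeout: timeout}
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return fmt.Errorf("unhealthy: HTTP %d", resp.StatusCode)
        }
        return nil
    }

    // crashLoopDelays lists the kubelet's documented restart back-off:
    // 10s initially, doubling per restart, capped at five minutes,
    // which is the "back-off 5m0s" seen once the daemon keeps failing.
    func crashLoopDelays(restarts int) []time.Duration {
        delays := make([]time.Duration, 0, restarts)
        d := 10 * time.Second
        for i := 0; i < restarts; i++ {
            delays = append(delays, d)
            d *= 2
            if d > 5*time.Minute {
                d = 5 * time.Minute
            }
        }
        return delays
    }

    func main() {
        // Against a dead endpoint this returns the same "connection
        // refused" error class that the prober logs above.
        fmt.Println(probeOnce("http://127.0.0.1:8798/health", time.Second))
        fmt.Println(crashLoopDelays(7)) // [10s 20s 40s 1m20s 2m40s 5m0s 5m0s]
    }

Run against a dead endpoint, probeOnce returns the same "connect: connection refused" error seen in the prober output, and crashLoopDelays shows why the log settles on "back-off 5m0s": the delay saturates at five minutes after a handful of restarts.
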
Nov 23 07:55:56 crc kubenswrapper[4681]: I1123 07:55:56.253772 4681 scope.go:117] "RemoveContainer" containerID="8ecb71e0782ffdd11df2420ddd61c63edf18bad75a2e31f833a8ec36c1f22137" Nov 23 07:55:56 crc kubenswrapper[4681]: E1123 07:55:56.254498 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:56:00 crc kubenswrapper[4681]: I1123 07:56:00.615587 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w9z6x"] Nov 23 07:56:00 crc kubenswrapper[4681]: E1123 07:56:00.616479 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad44ff92-a793-4c48-ad45-691e1c037d5e" containerName="extract-content" Nov 23 07:56:00 crc kubenswrapper[4681]: I1123 07:56:00.616496 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad44ff92-a793-4c48-ad45-691e1c037d5e" containerName="extract-content" Nov 23 07:56:00 crc kubenswrapper[4681]: E1123 07:56:00.616513 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d712be3b-ab3a-4c19-aa98-12fad0516e65" containerName="extract-content" Nov 23 07:56:00 crc kubenswrapper[4681]: I1123 07:56:00.616519 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d712be3b-ab3a-4c19-aa98-12fad0516e65" containerName="extract-content" Nov 23 07:56:00 crc kubenswrapper[4681]: E1123 07:56:00.616529 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad44ff92-a793-4c48-ad45-691e1c037d5e" containerName="registry-server" Nov 23 07:56:00 crc kubenswrapper[4681]: I1123 07:56:00.616535 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad44ff92-a793-4c48-ad45-691e1c037d5e" containerName="registry-server" Nov 23 07:56:00 crc kubenswrapper[4681]: E1123 07:56:00.616554 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad44ff92-a793-4c48-ad45-691e1c037d5e" containerName="extract-utilities" Nov 23 07:56:00 crc kubenswrapper[4681]: I1123 07:56:00.616561 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad44ff92-a793-4c48-ad45-691e1c037d5e" containerName="extract-utilities" Nov 23 07:56:00 crc kubenswrapper[4681]: E1123 07:56:00.616581 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee732427-9f73-48fe-afbc-8f5d38429184" containerName="extract-content" Nov 23 07:56:00 crc kubenswrapper[4681]: I1123 07:56:00.616587 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee732427-9f73-48fe-afbc-8f5d38429184" containerName="extract-content" Nov 23 07:56:00 crc kubenswrapper[4681]: E1123 07:56:00.616598 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d712be3b-ab3a-4c19-aa98-12fad0516e65" containerName="extract-utilities" Nov 23 07:56:00 crc kubenswrapper[4681]: I1123 07:56:00.616607 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d712be3b-ab3a-4c19-aa98-12fad0516e65" containerName="extract-utilities" Nov 23 07:56:00 crc kubenswrapper[4681]: E1123 07:56:00.616635 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee732427-9f73-48fe-afbc-8f5d38429184" containerName="extract-utilities" Nov 23 07:56:00 crc kubenswrapper[4681]: I1123 07:56:00.616642 4681 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ee732427-9f73-48fe-afbc-8f5d38429184" containerName="extract-utilities" Nov 23 07:56:00 crc kubenswrapper[4681]: E1123 07:56:00.616662 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee732427-9f73-48fe-afbc-8f5d38429184" containerName="registry-server" Nov 23 07:56:00 crc kubenswrapper[4681]: I1123 07:56:00.616668 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee732427-9f73-48fe-afbc-8f5d38429184" containerName="registry-server" Nov 23 07:56:00 crc kubenswrapper[4681]: E1123 07:56:00.616680 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d712be3b-ab3a-4c19-aa98-12fad0516e65" containerName="registry-server" Nov 23 07:56:00 crc kubenswrapper[4681]: I1123 07:56:00.616686 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d712be3b-ab3a-4c19-aa98-12fad0516e65" containerName="registry-server" Nov 23 07:56:00 crc kubenswrapper[4681]: I1123 07:56:00.616921 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad44ff92-a793-4c48-ad45-691e1c037d5e" containerName="registry-server" Nov 23 07:56:00 crc kubenswrapper[4681]: I1123 07:56:00.616936 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d712be3b-ab3a-4c19-aa98-12fad0516e65" containerName="registry-server" Nov 23 07:56:00 crc kubenswrapper[4681]: I1123 07:56:00.616946 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee732427-9f73-48fe-afbc-8f5d38429184" containerName="registry-server" Nov 23 07:56:00 crc kubenswrapper[4681]: I1123 07:56:00.618833 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w9z6x" Nov 23 07:56:00 crc kubenswrapper[4681]: I1123 07:56:00.632183 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w9z6x"] Nov 23 07:56:00 crc kubenswrapper[4681]: I1123 07:56:00.636401 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c6f5c26-d726-4e82-9616-b47632cfbfe9-catalog-content\") pod \"community-operators-w9z6x\" (UID: \"1c6f5c26-d726-4e82-9616-b47632cfbfe9\") " pod="openshift-marketplace/community-operators-w9z6x" Nov 23 07:56:00 crc kubenswrapper[4681]: I1123 07:56:00.636759 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hrvz\" (UniqueName: \"kubernetes.io/projected/1c6f5c26-d726-4e82-9616-b47632cfbfe9-kube-api-access-2hrvz\") pod \"community-operators-w9z6x\" (UID: \"1c6f5c26-d726-4e82-9616-b47632cfbfe9\") " pod="openshift-marketplace/community-operators-w9z6x" Nov 23 07:56:00 crc kubenswrapper[4681]: I1123 07:56:00.636796 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c6f5c26-d726-4e82-9616-b47632cfbfe9-utilities\") pod \"community-operators-w9z6x\" (UID: \"1c6f5c26-d726-4e82-9616-b47632cfbfe9\") " pod="openshift-marketplace/community-operators-w9z6x" Nov 23 07:56:00 crc kubenswrapper[4681]: I1123 07:56:00.739491 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c6f5c26-d726-4e82-9616-b47632cfbfe9-utilities\") pod \"community-operators-w9z6x\" (UID: \"1c6f5c26-d726-4e82-9616-b47632cfbfe9\") " pod="openshift-marketplace/community-operators-w9z6x" Nov 23 07:56:00 crc kubenswrapper[4681]: I1123 07:56:00.739536 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hrvz\" (UniqueName: \"kubernetes.io/projected/1c6f5c26-d726-4e82-9616-b47632cfbfe9-kube-api-access-2hrvz\") pod \"community-operators-w9z6x\" (UID: \"1c6f5c26-d726-4e82-9616-b47632cfbfe9\") " pod="openshift-marketplace/community-operators-w9z6x" Nov 23 07:56:00 crc kubenswrapper[4681]: I1123 07:56:00.739609 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c6f5c26-d726-4e82-9616-b47632cfbfe9-catalog-content\") pod \"community-operators-w9z6x\" (UID: \"1c6f5c26-d726-4e82-9616-b47632cfbfe9\") " pod="openshift-marketplace/community-operators-w9z6x" Nov 23 07:56:00 crc kubenswrapper[4681]: I1123 07:56:00.740091 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c6f5c26-d726-4e82-9616-b47632cfbfe9-utilities\") pod \"community-operators-w9z6x\" (UID: \"1c6f5c26-d726-4e82-9616-b47632cfbfe9\") " pod="openshift-marketplace/community-operators-w9z6x" Nov 23 07:56:00 crc kubenswrapper[4681]: I1123 07:56:00.740131 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c6f5c26-d726-4e82-9616-b47632cfbfe9-catalog-content\") pod \"community-operators-w9z6x\" (UID: \"1c6f5c26-d726-4e82-9616-b47632cfbfe9\") " pod="openshift-marketplace/community-operators-w9z6x" Nov 23 07:56:00 crc kubenswrapper[4681]: I1123 07:56:00.770647 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hrvz\" (UniqueName: \"kubernetes.io/projected/1c6f5c26-d726-4e82-9616-b47632cfbfe9-kube-api-access-2hrvz\") pod \"community-operators-w9z6x\" (UID: \"1c6f5c26-d726-4e82-9616-b47632cfbfe9\") " pod="openshift-marketplace/community-operators-w9z6x" Nov 23 07:56:00 crc kubenswrapper[4681]: I1123 07:56:00.936151 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w9z6x" Nov 23 07:56:01 crc kubenswrapper[4681]: I1123 07:56:01.355817 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w9z6x"] Nov 23 07:56:01 crc kubenswrapper[4681]: I1123 07:56:01.443010 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9z6x" event={"ID":"1c6f5c26-d726-4e82-9616-b47632cfbfe9","Type":"ContainerStarted","Data":"969a1ac525c03033178915552858e61eba78a2ef415dfbc63d0314e0e6e76a0a"} Nov 23 07:56:02 crc kubenswrapper[4681]: I1123 07:56:02.465095 4681 generic.go:334] "Generic (PLEG): container finished" podID="1c6f5c26-d726-4e82-9616-b47632cfbfe9" containerID="44f7602ebf12e13468677471e06eb08de6bf70b2dbebca1ed3ee8560c26ce6b5" exitCode=0 Nov 23 07:56:02 crc kubenswrapper[4681]: I1123 07:56:02.465324 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9z6x" event={"ID":"1c6f5c26-d726-4e82-9616-b47632cfbfe9","Type":"ContainerDied","Data":"44f7602ebf12e13468677471e06eb08de6bf70b2dbebca1ed3ee8560c26ce6b5"} Nov 23 07:56:03 crc kubenswrapper[4681]: I1123 07:56:03.483691 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9z6x" event={"ID":"1c6f5c26-d726-4e82-9616-b47632cfbfe9","Type":"ContainerStarted","Data":"9fb6fec738011a181ae0539d548d59cb638826f0d3c6e20a9e92fbd2e40a4d10"} Nov 23 07:56:04 crc kubenswrapper[4681]: I1123 07:56:04.499748 4681 generic.go:334] "Generic (PLEG): container finished" podID="1c6f5c26-d726-4e82-9616-b47632cfbfe9" containerID="9fb6fec738011a181ae0539d548d59cb638826f0d3c6e20a9e92fbd2e40a4d10" exitCode=0 Nov 23 07:56:04 crc kubenswrapper[4681]: I1123 07:56:04.499803 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9z6x" event={"ID":"1c6f5c26-d726-4e82-9616-b47632cfbfe9","Type":"ContainerDied","Data":"9fb6fec738011a181ae0539d548d59cb638826f0d3c6e20a9e92fbd2e40a4d10"} Nov 23 07:56:05 crc kubenswrapper[4681]: I1123 07:56:05.512243 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9z6x" event={"ID":"1c6f5c26-d726-4e82-9616-b47632cfbfe9","Type":"ContainerStarted","Data":"50c31ac32105870eebf1fd0ac711b63cd31b6d0fe5ca5cd3fe99ce0f0014e65b"} Nov 23 07:56:05 crc kubenswrapper[4681]: I1123 07:56:05.534556 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w9z6x" podStartSLOduration=2.903626776 podStartE2EDuration="5.534538166s" podCreationTimestamp="2025-11-23 07:56:00 +0000 UTC" firstStartedPulling="2025-11-23 07:56:02.468317278 +0000 UTC m=+4299.537826515" lastFinishedPulling="2025-11-23 07:56:05.099228658 +0000 UTC m=+4302.168737905" observedRunningTime="2025-11-23 07:56:05.531171441 +0000 UTC m=+4302.600680679" watchObservedRunningTime="2025-11-23 07:56:05.534538166 +0000 UTC m=+4302.604047402" Nov 23 07:56:08 crc kubenswrapper[4681]: I1123 07:56:08.253423 4681 scope.go:117] "RemoveContainer" containerID="8ecb71e0782ffdd11df2420ddd61c63edf18bad75a2e31f833a8ec36c1f22137" Nov 23 07:56:08 crc kubenswrapper[4681]: E1123 07:56:08.254323 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:56:10 crc kubenswrapper[4681]: I1123 07:56:10.937014 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w9z6x" Nov 23 07:56:10 crc kubenswrapper[4681]: I1123 07:56:10.937589 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w9z6x" Nov 23 07:56:10 crc kubenswrapper[4681]: I1123 07:56:10.973608 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w9z6x" Nov 23 07:56:11 crc kubenswrapper[4681]: I1123 07:56:11.601231 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w9z6x" Nov 23 07:56:11 crc kubenswrapper[4681]: I1123 07:56:11.641266 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w9z6x"] Nov 23 07:56:13 crc kubenswrapper[4681]: I1123 07:56:13.575635 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w9z6x" podUID="1c6f5c26-d726-4e82-9616-b47632cfbfe9" containerName="registry-server" containerID="cri-o://50c31ac32105870eebf1fd0ac711b63cd31b6d0fe5ca5cd3fe99ce0f0014e65b" gracePeriod=2 Nov 23 07:56:13 crc kubenswrapper[4681]: I1123 07:56:13.989188 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w9z6x" Nov 23 07:56:14 crc kubenswrapper[4681]: I1123 07:56:14.149052 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c6f5c26-d726-4e82-9616-b47632cfbfe9-catalog-content\") pod \"1c6f5c26-d726-4e82-9616-b47632cfbfe9\" (UID: \"1c6f5c26-d726-4e82-9616-b47632cfbfe9\") " Nov 23 07:56:14 crc kubenswrapper[4681]: I1123 07:56:14.149180 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hrvz\" (UniqueName: \"kubernetes.io/projected/1c6f5c26-d726-4e82-9616-b47632cfbfe9-kube-api-access-2hrvz\") pod \"1c6f5c26-d726-4e82-9616-b47632cfbfe9\" (UID: \"1c6f5c26-d726-4e82-9616-b47632cfbfe9\") " Nov 23 07:56:14 crc kubenswrapper[4681]: I1123 07:56:14.149573 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c6f5c26-d726-4e82-9616-b47632cfbfe9-utilities\") pod \"1c6f5c26-d726-4e82-9616-b47632cfbfe9\" (UID: \"1c6f5c26-d726-4e82-9616-b47632cfbfe9\") " Nov 23 07:56:14 crc kubenswrapper[4681]: I1123 07:56:14.151007 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c6f5c26-d726-4e82-9616-b47632cfbfe9-utilities" (OuterVolumeSpecName: "utilities") pod "1c6f5c26-d726-4e82-9616-b47632cfbfe9" (UID: "1c6f5c26-d726-4e82-9616-b47632cfbfe9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:56:14 crc kubenswrapper[4681]: I1123 07:56:14.157675 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c6f5c26-d726-4e82-9616-b47632cfbfe9-kube-api-access-2hrvz" (OuterVolumeSpecName: "kube-api-access-2hrvz") pod "1c6f5c26-d726-4e82-9616-b47632cfbfe9" (UID: "1c6f5c26-d726-4e82-9616-b47632cfbfe9"). InnerVolumeSpecName "kube-api-access-2hrvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:56:14 crc kubenswrapper[4681]: I1123 07:56:14.193352 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c6f5c26-d726-4e82-9616-b47632cfbfe9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c6f5c26-d726-4e82-9616-b47632cfbfe9" (UID: "1c6f5c26-d726-4e82-9616-b47632cfbfe9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:56:14 crc kubenswrapper[4681]: I1123 07:56:14.254635 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c6f5c26-d726-4e82-9616-b47632cfbfe9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:56:14 crc kubenswrapper[4681]: I1123 07:56:14.254666 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hrvz\" (UniqueName: \"kubernetes.io/projected/1c6f5c26-d726-4e82-9616-b47632cfbfe9-kube-api-access-2hrvz\") on node \"crc\" DevicePath \"\"" Nov 23 07:56:14 crc kubenswrapper[4681]: I1123 07:56:14.254678 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c6f5c26-d726-4e82-9616-b47632cfbfe9-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:56:14 crc kubenswrapper[4681]: I1123 07:56:14.588092 4681 generic.go:334] "Generic (PLEG): container finished" podID="1c6f5c26-d726-4e82-9616-b47632cfbfe9" containerID="50c31ac32105870eebf1fd0ac711b63cd31b6d0fe5ca5cd3fe99ce0f0014e65b" exitCode=0 Nov 23 07:56:14 crc kubenswrapper[4681]: I1123 07:56:14.588158 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9z6x" event={"ID":"1c6f5c26-d726-4e82-9616-b47632cfbfe9","Type":"ContainerDied","Data":"50c31ac32105870eebf1fd0ac711b63cd31b6d0fe5ca5cd3fe99ce0f0014e65b"} Nov 23 07:56:14 crc kubenswrapper[4681]: I1123 07:56:14.588221 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9z6x" event={"ID":"1c6f5c26-d726-4e82-9616-b47632cfbfe9","Type":"ContainerDied","Data":"969a1ac525c03033178915552858e61eba78a2ef415dfbc63d0314e0e6e76a0a"} Nov 23 07:56:14 crc kubenswrapper[4681]: I1123 07:56:14.588253 4681 scope.go:117] "RemoveContainer" containerID="50c31ac32105870eebf1fd0ac711b63cd31b6d0fe5ca5cd3fe99ce0f0014e65b" Nov 23 07:56:14 crc kubenswrapper[4681]: I1123 07:56:14.588448 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w9z6x" Nov 23 07:56:14 crc kubenswrapper[4681]: I1123 07:56:14.621798 4681 scope.go:117] "RemoveContainer" containerID="9fb6fec738011a181ae0539d548d59cb638826f0d3c6e20a9e92fbd2e40a4d10" Nov 23 07:56:14 crc kubenswrapper[4681]: I1123 07:56:14.624441 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w9z6x"] Nov 23 07:56:14 crc kubenswrapper[4681]: I1123 07:56:14.632334 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w9z6x"] Nov 23 07:56:14 crc kubenswrapper[4681]: I1123 07:56:14.639232 4681 scope.go:117] "RemoveContainer" containerID="44f7602ebf12e13468677471e06eb08de6bf70b2dbebca1ed3ee8560c26ce6b5" Nov 23 07:56:14 crc kubenswrapper[4681]: I1123 07:56:14.679491 4681 scope.go:117] "RemoveContainer" containerID="50c31ac32105870eebf1fd0ac711b63cd31b6d0fe5ca5cd3fe99ce0f0014e65b" Nov 23 07:56:14 crc kubenswrapper[4681]: E1123 07:56:14.680127 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50c31ac32105870eebf1fd0ac711b63cd31b6d0fe5ca5cd3fe99ce0f0014e65b\": container with ID starting with 50c31ac32105870eebf1fd0ac711b63cd31b6d0fe5ca5cd3fe99ce0f0014e65b not found: ID does not exist" containerID="50c31ac32105870eebf1fd0ac711b63cd31b6d0fe5ca5cd3fe99ce0f0014e65b" Nov 23 07:56:14 crc kubenswrapper[4681]: I1123 07:56:14.680162 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50c31ac32105870eebf1fd0ac711b63cd31b6d0fe5ca5cd3fe99ce0f0014e65b"} err="failed to get container status \"50c31ac32105870eebf1fd0ac711b63cd31b6d0fe5ca5cd3fe99ce0f0014e65b\": rpc error: code = NotFound desc = could not find container \"50c31ac32105870eebf1fd0ac711b63cd31b6d0fe5ca5cd3fe99ce0f0014e65b\": container with ID starting with 50c31ac32105870eebf1fd0ac711b63cd31b6d0fe5ca5cd3fe99ce0f0014e65b not found: ID does not exist" Nov 23 07:56:14 crc kubenswrapper[4681]: I1123 07:56:14.680188 4681 scope.go:117] "RemoveContainer" containerID="9fb6fec738011a181ae0539d548d59cb638826f0d3c6e20a9e92fbd2e40a4d10" Nov 23 07:56:14 crc kubenswrapper[4681]: E1123 07:56:14.680499 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fb6fec738011a181ae0539d548d59cb638826f0d3c6e20a9e92fbd2e40a4d10\": container with ID starting with 9fb6fec738011a181ae0539d548d59cb638826f0d3c6e20a9e92fbd2e40a4d10 not found: ID does not exist" containerID="9fb6fec738011a181ae0539d548d59cb638826f0d3c6e20a9e92fbd2e40a4d10" Nov 23 07:56:14 crc kubenswrapper[4681]: I1123 07:56:14.680545 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fb6fec738011a181ae0539d548d59cb638826f0d3c6e20a9e92fbd2e40a4d10"} err="failed to get container status \"9fb6fec738011a181ae0539d548d59cb638826f0d3c6e20a9e92fbd2e40a4d10\": rpc error: code = NotFound desc = could not find container \"9fb6fec738011a181ae0539d548d59cb638826f0d3c6e20a9e92fbd2e40a4d10\": container with ID starting with 9fb6fec738011a181ae0539d548d59cb638826f0d3c6e20a9e92fbd2e40a4d10 not found: ID does not exist" Nov 23 07:56:14 crc kubenswrapper[4681]: I1123 07:56:14.680576 4681 scope.go:117] "RemoveContainer" containerID="44f7602ebf12e13468677471e06eb08de6bf70b2dbebca1ed3ee8560c26ce6b5" Nov 23 07:56:14 crc kubenswrapper[4681]: E1123 07:56:14.680830 4681 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"44f7602ebf12e13468677471e06eb08de6bf70b2dbebca1ed3ee8560c26ce6b5\": container with ID starting with 44f7602ebf12e13468677471e06eb08de6bf70b2dbebca1ed3ee8560c26ce6b5 not found: ID does not exist" containerID="44f7602ebf12e13468677471e06eb08de6bf70b2dbebca1ed3ee8560c26ce6b5" Nov 23 07:56:14 crc kubenswrapper[4681]: I1123 07:56:14.680855 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44f7602ebf12e13468677471e06eb08de6bf70b2dbebca1ed3ee8560c26ce6b5"} err="failed to get container status \"44f7602ebf12e13468677471e06eb08de6bf70b2dbebca1ed3ee8560c26ce6b5\": rpc error: code = NotFound desc = could not find container \"44f7602ebf12e13468677471e06eb08de6bf70b2dbebca1ed3ee8560c26ce6b5\": container with ID starting with 44f7602ebf12e13468677471e06eb08de6bf70b2dbebca1ed3ee8560c26ce6b5 not found: ID does not exist" Nov 23 07:56:15 crc kubenswrapper[4681]: I1123 07:56:15.263390 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c6f5c26-d726-4e82-9616-b47632cfbfe9" path="/var/lib/kubelet/pods/1c6f5c26-d726-4e82-9616-b47632cfbfe9/volumes" Nov 23 07:56:15 crc kubenswrapper[4681]: I1123 07:56:15.614071 4681 generic.go:334] "Generic (PLEG): container finished" podID="5c171cbf-074c-4685-88ae-5e1ad59e5423" containerID="2dd2856f735b095a4436ddbfe83075a242f7ee3280a742622f17add988725659" exitCode=1 Nov 23 07:56:15 crc kubenswrapper[4681]: I1123 07:56:15.614121 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"5c171cbf-074c-4685-88ae-5e1ad59e5423","Type":"ContainerDied","Data":"2dd2856f735b095a4436ddbfe83075a242f7ee3280a742622f17add988725659"} Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.129184 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.236309 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Nov 23 07:56:17 crc kubenswrapper[4681]: E1123 07:56:17.236966 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c6f5c26-d726-4e82-9616-b47632cfbfe9" containerName="extract-content" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.236991 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c6f5c26-d726-4e82-9616-b47632cfbfe9" containerName="extract-content" Nov 23 07:56:17 crc kubenswrapper[4681]: E1123 07:56:17.237013 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c6f5c26-d726-4e82-9616-b47632cfbfe9" containerName="extract-utilities" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.237021 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c6f5c26-d726-4e82-9616-b47632cfbfe9" containerName="extract-utilities" Nov 23 07:56:17 crc kubenswrapper[4681]: E1123 07:56:17.237055 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c171cbf-074c-4685-88ae-5e1ad59e5423" containerName="tempest-tests-tempest-tests-runner" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.237061 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c171cbf-074c-4685-88ae-5e1ad59e5423" containerName="tempest-tests-tempest-tests-runner" Nov 23 07:56:17 crc kubenswrapper[4681]: E1123 07:56:17.237073 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c6f5c26-d726-4e82-9616-b47632cfbfe9" containerName="registry-server" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.237080 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c6f5c26-d726-4e82-9616-b47632cfbfe9" containerName="registry-server" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.237269 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c6f5c26-d726-4e82-9616-b47632cfbfe9" containerName="registry-server" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.237291 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c171cbf-074c-4685-88ae-5e1ad59e5423" containerName="tempest-tests-tempest-tests-runner" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.238070 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.247741 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s1" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.247928 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s1" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.273630 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.322389 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"5c171cbf-074c-4685-88ae-5e1ad59e5423\" (UID: \"5c171cbf-074c-4685-88ae-5e1ad59e5423\") " Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.322527 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5c171cbf-074c-4685-88ae-5e1ad59e5423-test-operator-ephemeral-temporary\") pod \"5c171cbf-074c-4685-88ae-5e1ad59e5423\" (UID: \"5c171cbf-074c-4685-88ae-5e1ad59e5423\") " Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.322583 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zppq\" (UniqueName: \"kubernetes.io/projected/5c171cbf-074c-4685-88ae-5e1ad59e5423-kube-api-access-6zppq\") pod \"5c171cbf-074c-4685-88ae-5e1ad59e5423\" (UID: \"5c171cbf-074c-4685-88ae-5e1ad59e5423\") " Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.322615 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5c171cbf-074c-4685-88ae-5e1ad59e5423-test-operator-ephemeral-workdir\") pod \"5c171cbf-074c-4685-88ae-5e1ad59e5423\" (UID: \"5c171cbf-074c-4685-88ae-5e1ad59e5423\") " Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.322668 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5c171cbf-074c-4685-88ae-5e1ad59e5423-openstack-config-secret\") pod \"5c171cbf-074c-4685-88ae-5e1ad59e5423\" (UID: \"5c171cbf-074c-4685-88ae-5e1ad59e5423\") " Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.322710 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5c171cbf-074c-4685-88ae-5e1ad59e5423-openstack-config\") pod \"5c171cbf-074c-4685-88ae-5e1ad59e5423\" (UID: \"5c171cbf-074c-4685-88ae-5e1ad59e5423\") " Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.322756 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5c171cbf-074c-4685-88ae-5e1ad59e5423-ca-certs\") pod \"5c171cbf-074c-4685-88ae-5e1ad59e5423\" (UID: \"5c171cbf-074c-4685-88ae-5e1ad59e5423\") " Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.322812 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5c171cbf-074c-4685-88ae-5e1ad59e5423-config-data\") pod \"5c171cbf-074c-4685-88ae-5e1ad59e5423\" (UID: \"5c171cbf-074c-4685-88ae-5e1ad59e5423\") " Nov 23 07:56:17 crc 
kubenswrapper[4681]: I1123 07:56:17.322842 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5c171cbf-074c-4685-88ae-5e1ad59e5423-ssh-key\") pod \"5c171cbf-074c-4685-88ae-5e1ad59e5423\" (UID: \"5c171cbf-074c-4685-88ae-5e1ad59e5423\") " Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.323403 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c171cbf-074c-4685-88ae-5e1ad59e5423-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "5c171cbf-074c-4685-88ae-5e1ad59e5423" (UID: "5c171cbf-074c-4685-88ae-5e1ad59e5423"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.324784 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c171cbf-074c-4685-88ae-5e1ad59e5423-config-data" (OuterVolumeSpecName: "config-data") pod "5c171cbf-074c-4685-88ae-5e1ad59e5423" (UID: "5c171cbf-074c-4685-88ae-5e1ad59e5423"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.327016 4681 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5c171cbf-074c-4685-88ae-5e1ad59e5423-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.327054 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5c171cbf-074c-4685-88ae-5e1ad59e5423-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.338729 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c171cbf-074c-4685-88ae-5e1ad59e5423-kube-api-access-6zppq" (OuterVolumeSpecName: "kube-api-access-6zppq") pod "5c171cbf-074c-4685-88ae-5e1ad59e5423" (UID: "5c171cbf-074c-4685-88ae-5e1ad59e5423"). InnerVolumeSpecName "kube-api-access-6zppq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.339078 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "test-operator-logs") pod "5c171cbf-074c-4685-88ae-5e1ad59e5423" (UID: "5c171cbf-074c-4685-88ae-5e1ad59e5423"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.355117 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c171cbf-074c-4685-88ae-5e1ad59e5423-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "5c171cbf-074c-4685-88ae-5e1ad59e5423" (UID: "5c171cbf-074c-4685-88ae-5e1ad59e5423"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.358268 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c171cbf-074c-4685-88ae-5e1ad59e5423-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5c171cbf-074c-4685-88ae-5e1ad59e5423" (UID: "5c171cbf-074c-4685-88ae-5e1ad59e5423"). InnerVolumeSpecName "ssh-key". 
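
Note: the volume entries in this window follow a fixed reconciler lifecycle. On pod setup: operationExecutor.VerifyControllerAttachedVolume -> operationExecutor.MountVolume started -> MountVolume.SetUp succeeded; on teardown: operationExecutor.UnmountVolume started -> UnmountVolume.TearDown succeeded -> "Volume detached". A minimal sketch for tallying these phases per volume, assuming the journal is saved one entry per line (as journalctl -u kubelet normally emits); the file name kubelet.log and the regex are illustrative only:

    import re
    from collections import defaultdict

    # Tally kubelet volume-reconciler phases per volume name. Phase strings are
    # copied from the entries above; this is a sketch, not an exhaustive parser.
    PHASES = [
        "operationExecutor.VerifyControllerAttachedVolume started",
        "operationExecutor.MountVolume started",
        "MountVolume.SetUp succeeded",
        "operationExecutor.UnmountVolume started",
        "UnmountVolume.TearDown succeeded",
        "Volume detached",
    ]
    # Volume names appear as volume \"name\" (escaped) or volume "name" (plain).
    vol_re = re.compile(r'volume \\?"([^"\\]+)\\?"')

    counts = defaultdict(lambda: defaultdict(int))
    with open("kubelet.log") as f:
        for line in f:
            for phase in PHASES:
                if phase in line:
                    m = vol_re.search(line)
                    if m:
                        counts[m.group(1)][phase] += 1

    for vol, phases in sorted(counts.items()):
        print(vol, dict(phases))
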
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.361011 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c171cbf-074c-4685-88ae-5e1ad59e5423-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "5c171cbf-074c-4685-88ae-5e1ad59e5423" (UID: "5c171cbf-074c-4685-88ae-5e1ad59e5423"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.368603 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c171cbf-074c-4685-88ae-5e1ad59e5423-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "5c171cbf-074c-4685-88ae-5e1ad59e5423" (UID: "5c171cbf-074c-4685-88ae-5e1ad59e5423"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.372050 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c171cbf-074c-4685-88ae-5e1ad59e5423-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "5c171cbf-074c-4685-88ae-5e1ad59e5423" (UID: "5c171cbf-074c-4685-88ae-5e1ad59e5423"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.430138 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.430218 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.430304 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.430359 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh7ps\" (UniqueName: \"kubernetes.io/projected/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-kube-api-access-bh7ps\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.430500 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.430594 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.430635 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.430672 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.430697 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.430759 4681 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5c171cbf-074c-4685-88ae-5e1ad59e5423-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.430772 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zppq\" (UniqueName: \"kubernetes.io/projected/5c171cbf-074c-4685-88ae-5e1ad59e5423-kube-api-access-6zppq\") on node \"crc\" DevicePath \"\"" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.430804 4681 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5c171cbf-074c-4685-88ae-5e1ad59e5423-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.430816 4681 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5c171cbf-074c-4685-88ae-5e1ad59e5423-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.430827 4681 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5c171cbf-074c-4685-88ae-5e1ad59e5423-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.430835 4681 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/5c171cbf-074c-4685-88ae-5e1ad59e5423-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.456477 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.533293 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.533390 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.533414 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.533449 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.533521 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.533599 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.533646 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh7ps\" (UniqueName: \"kubernetes.io/projected/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-kube-api-access-bh7ps\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 23 07:56:17 crc 
kubenswrapper[4681]: I1123 07:56:17.533792 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.534295 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.534912 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.535050 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.535958 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.537669 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.538691 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.539873 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.549551 4681 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-bh7ps\" (UniqueName: \"kubernetes.io/projected/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-kube-api-access-bh7ps\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.560776 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.633593 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"5c171cbf-074c-4685-88ae-5e1ad59e5423","Type":"ContainerDied","Data":"d9937940ba4752d8dacb67a9faeb1a75ce24397f0724a1d82839bd353f988ea6"} Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.633643 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9937940ba4752d8dacb67a9faeb1a75ce24397f0724a1d82839bd353f988ea6" Nov 23 07:56:17 crc kubenswrapper[4681]: I1123 07:56:17.633707 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Nov 23 07:56:18 crc kubenswrapper[4681]: I1123 07:56:18.079673 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Nov 23 07:56:18 crc kubenswrapper[4681]: W1123 07:56:18.085990 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ff8f0d7_23a9_4d32_bc42_5d2e1e4e6efd.slice/crio-adfef6c45cfefbdbc10642126bc4bb146feaafd826976882c1780a7007548609 WatchSource:0}: Error finding container adfef6c45cfefbdbc10642126bc4bb146feaafd826976882c1780a7007548609: Status 404 returned error can't find the container with id adfef6c45cfefbdbc10642126bc4bb146feaafd826976882c1780a7007548609 Nov 23 07:56:18 crc kubenswrapper[4681]: I1123 07:56:18.642740 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd","Type":"ContainerStarted","Data":"adfef6c45cfefbdbc10642126bc4bb146feaafd826976882c1780a7007548609"} Nov 23 07:56:19 crc kubenswrapper[4681]: I1123 07:56:19.252712 4681 scope.go:117] "RemoveContainer" containerID="8ecb71e0782ffdd11df2420ddd61c63edf18bad75a2e31f833a8ec36c1f22137" Nov 23 07:56:19 crc kubenswrapper[4681]: E1123 07:56:19.253611 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:56:20 crc kubenswrapper[4681]: I1123 07:56:20.665304 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd","Type":"ContainerStarted","Data":"294aa70a74517a306ac8b61199a69cacee09849adaa18f28534c95c2210129cb"} Nov 23 07:56:20 crc kubenswrapper[4681]: I1123 07:56:20.689957 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" 
podStartSLOduration=3.689931198 podStartE2EDuration="3.689931198s" podCreationTimestamp="2025-11-23 07:56:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:56:20.680960277 +0000 UTC m=+4317.750469514" watchObservedRunningTime="2025-11-23 07:56:20.689931198 +0000 UTC m=+4317.759440435" Nov 23 07:56:34 crc kubenswrapper[4681]: I1123 07:56:34.253364 4681 scope.go:117] "RemoveContainer" containerID="8ecb71e0782ffdd11df2420ddd61c63edf18bad75a2e31f833a8ec36c1f22137" Nov 23 07:56:34 crc kubenswrapper[4681]: E1123 07:56:34.255274 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:56:47 crc kubenswrapper[4681]: I1123 07:56:47.253165 4681 scope.go:117] "RemoveContainer" containerID="8ecb71e0782ffdd11df2420ddd61c63edf18bad75a2e31f833a8ec36c1f22137" Nov 23 07:56:47 crc kubenswrapper[4681]: E1123 07:56:47.253948 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:57:02 crc kubenswrapper[4681]: I1123 07:57:02.252319 4681 scope.go:117] "RemoveContainer" containerID="8ecb71e0782ffdd11df2420ddd61c63edf18bad75a2e31f833a8ec36c1f22137" Nov 23 07:57:02 crc kubenswrapper[4681]: E1123 07:57:02.253811 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:57:04 crc kubenswrapper[4681]: E1123 07:57:04.694814 4681 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.26.82:59028->192.168.26.82:41655: write tcp 192.168.26.82:59028->192.168.26.82:41655: write: broken pipe Nov 23 07:57:06 crc kubenswrapper[4681]: I1123 07:57:06.098395 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8446865c9c-85t9f"] Nov 23 07:57:06 crc kubenswrapper[4681]: I1123 07:57:06.100282 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8446865c9c-85t9f" Nov 23 07:57:06 crc kubenswrapper[4681]: I1123 07:57:06.118389 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8446865c9c-85t9f"] Nov 23 07:57:06 crc kubenswrapper[4681]: I1123 07:57:06.164453 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a2cd6a8-e146-4a72-a522-debbf8b61731-public-tls-certs\") pod \"neutron-8446865c9c-85t9f\" (UID: \"6a2cd6a8-e146-4a72-a522-debbf8b61731\") " pod="openstack/neutron-8446865c9c-85t9f" Nov 23 07:57:06 crc kubenswrapper[4681]: I1123 07:57:06.164562 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a2cd6a8-e146-4a72-a522-debbf8b61731-ovndb-tls-certs\") pod \"neutron-8446865c9c-85t9f\" (UID: \"6a2cd6a8-e146-4a72-a522-debbf8b61731\") " pod="openstack/neutron-8446865c9c-85t9f" Nov 23 07:57:06 crc kubenswrapper[4681]: I1123 07:57:06.164629 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smdls\" (UniqueName: \"kubernetes.io/projected/6a2cd6a8-e146-4a72-a522-debbf8b61731-kube-api-access-smdls\") pod \"neutron-8446865c9c-85t9f\" (UID: \"6a2cd6a8-e146-4a72-a522-debbf8b61731\") " pod="openstack/neutron-8446865c9c-85t9f" Nov 23 07:57:06 crc kubenswrapper[4681]: I1123 07:57:06.164658 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a2cd6a8-e146-4a72-a522-debbf8b61731-combined-ca-bundle\") pod \"neutron-8446865c9c-85t9f\" (UID: \"6a2cd6a8-e146-4a72-a522-debbf8b61731\") " pod="openstack/neutron-8446865c9c-85t9f" Nov 23 07:57:06 crc kubenswrapper[4681]: I1123 07:57:06.164711 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a2cd6a8-e146-4a72-a522-debbf8b61731-internal-tls-certs\") pod \"neutron-8446865c9c-85t9f\" (UID: \"6a2cd6a8-e146-4a72-a522-debbf8b61731\") " pod="openstack/neutron-8446865c9c-85t9f" Nov 23 07:57:06 crc kubenswrapper[4681]: I1123 07:57:06.164736 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6a2cd6a8-e146-4a72-a522-debbf8b61731-config\") pod \"neutron-8446865c9c-85t9f\" (UID: \"6a2cd6a8-e146-4a72-a522-debbf8b61731\") " pod="openstack/neutron-8446865c9c-85t9f" Nov 23 07:57:06 crc kubenswrapper[4681]: I1123 07:57:06.164854 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6a2cd6a8-e146-4a72-a522-debbf8b61731-httpd-config\") pod \"neutron-8446865c9c-85t9f\" (UID: \"6a2cd6a8-e146-4a72-a522-debbf8b61731\") " pod="openstack/neutron-8446865c9c-85t9f" Nov 23 07:57:06 crc kubenswrapper[4681]: I1123 07:57:06.266197 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a2cd6a8-e146-4a72-a522-debbf8b61731-public-tls-certs\") pod \"neutron-8446865c9c-85t9f\" (UID: \"6a2cd6a8-e146-4a72-a522-debbf8b61731\") " pod="openstack/neutron-8446865c9c-85t9f" Nov 23 07:57:06 crc kubenswrapper[4681]: I1123 07:57:06.266286 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a2cd6a8-e146-4a72-a522-debbf8b61731-ovndb-tls-certs\") pod \"neutron-8446865c9c-85t9f\" (UID: \"6a2cd6a8-e146-4a72-a522-debbf8b61731\") " pod="openstack/neutron-8446865c9c-85t9f" Nov 23 07:57:06 crc kubenswrapper[4681]: I1123 07:57:06.266347 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smdls\" (UniqueName: \"kubernetes.io/projected/6a2cd6a8-e146-4a72-a522-debbf8b61731-kube-api-access-smdls\") pod \"neutron-8446865c9c-85t9f\" (UID: \"6a2cd6a8-e146-4a72-a522-debbf8b61731\") " pod="openstack/neutron-8446865c9c-85t9f" Nov 23 07:57:06 crc kubenswrapper[4681]: I1123 07:57:06.266374 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a2cd6a8-e146-4a72-a522-debbf8b61731-combined-ca-bundle\") pod \"neutron-8446865c9c-85t9f\" (UID: \"6a2cd6a8-e146-4a72-a522-debbf8b61731\") " pod="openstack/neutron-8446865c9c-85t9f" Nov 23 07:57:06 crc kubenswrapper[4681]: I1123 07:57:06.266423 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a2cd6a8-e146-4a72-a522-debbf8b61731-internal-tls-certs\") pod \"neutron-8446865c9c-85t9f\" (UID: \"6a2cd6a8-e146-4a72-a522-debbf8b61731\") " pod="openstack/neutron-8446865c9c-85t9f" Nov 23 07:57:06 crc kubenswrapper[4681]: I1123 07:57:06.266445 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6a2cd6a8-e146-4a72-a522-debbf8b61731-config\") pod \"neutron-8446865c9c-85t9f\" (UID: \"6a2cd6a8-e146-4a72-a522-debbf8b61731\") " pod="openstack/neutron-8446865c9c-85t9f" Nov 23 07:57:06 crc kubenswrapper[4681]: I1123 07:57:06.266507 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6a2cd6a8-e146-4a72-a522-debbf8b61731-httpd-config\") pod \"neutron-8446865c9c-85t9f\" (UID: \"6a2cd6a8-e146-4a72-a522-debbf8b61731\") " pod="openstack/neutron-8446865c9c-85t9f" Nov 23 07:57:06 crc kubenswrapper[4681]: I1123 07:57:06.272879 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6a2cd6a8-e146-4a72-a522-debbf8b61731-httpd-config\") pod \"neutron-8446865c9c-85t9f\" (UID: \"6a2cd6a8-e146-4a72-a522-debbf8b61731\") " pod="openstack/neutron-8446865c9c-85t9f" Nov 23 07:57:06 crc kubenswrapper[4681]: I1123 07:57:06.274162 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6a2cd6a8-e146-4a72-a522-debbf8b61731-config\") pod \"neutron-8446865c9c-85t9f\" (UID: \"6a2cd6a8-e146-4a72-a522-debbf8b61731\") " pod="openstack/neutron-8446865c9c-85t9f" Nov 23 07:57:06 crc kubenswrapper[4681]: I1123 07:57:06.274165 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a2cd6a8-e146-4a72-a522-debbf8b61731-ovndb-tls-certs\") pod \"neutron-8446865c9c-85t9f\" (UID: \"6a2cd6a8-e146-4a72-a522-debbf8b61731\") " pod="openstack/neutron-8446865c9c-85t9f" Nov 23 07:57:06 crc kubenswrapper[4681]: I1123 07:57:06.274728 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a2cd6a8-e146-4a72-a522-debbf8b61731-public-tls-certs\") pod \"neutron-8446865c9c-85t9f\" (UID: 
\"6a2cd6a8-e146-4a72-a522-debbf8b61731\") " pod="openstack/neutron-8446865c9c-85t9f" Nov 23 07:57:06 crc kubenswrapper[4681]: I1123 07:57:06.280412 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a2cd6a8-e146-4a72-a522-debbf8b61731-internal-tls-certs\") pod \"neutron-8446865c9c-85t9f\" (UID: \"6a2cd6a8-e146-4a72-a522-debbf8b61731\") " pod="openstack/neutron-8446865c9c-85t9f" Nov 23 07:57:06 crc kubenswrapper[4681]: I1123 07:57:06.281166 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a2cd6a8-e146-4a72-a522-debbf8b61731-combined-ca-bundle\") pod \"neutron-8446865c9c-85t9f\" (UID: \"6a2cd6a8-e146-4a72-a522-debbf8b61731\") " pod="openstack/neutron-8446865c9c-85t9f" Nov 23 07:57:06 crc kubenswrapper[4681]: I1123 07:57:06.281578 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smdls\" (UniqueName: \"kubernetes.io/projected/6a2cd6a8-e146-4a72-a522-debbf8b61731-kube-api-access-smdls\") pod \"neutron-8446865c9c-85t9f\" (UID: \"6a2cd6a8-e146-4a72-a522-debbf8b61731\") " pod="openstack/neutron-8446865c9c-85t9f" Nov 23 07:57:06 crc kubenswrapper[4681]: I1123 07:57:06.415256 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8446865c9c-85t9f" Nov 23 07:57:06 crc kubenswrapper[4681]: I1123 07:57:06.939319 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8446865c9c-85t9f"] Nov 23 07:57:07 crc kubenswrapper[4681]: I1123 07:57:07.062591 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8446865c9c-85t9f" event={"ID":"6a2cd6a8-e146-4a72-a522-debbf8b61731","Type":"ContainerStarted","Data":"26df9bbe6e996fd9339afb6fa3bb58b05803e562e384a19e42fd28c7aee89afc"} Nov 23 07:57:08 crc kubenswrapper[4681]: I1123 07:57:08.073225 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8446865c9c-85t9f" event={"ID":"6a2cd6a8-e146-4a72-a522-debbf8b61731","Type":"ContainerStarted","Data":"5be9ab07f26650aafc16d69633ae0efc00959486b7a51310dd16c47928854a8e"} Nov 23 07:57:08 crc kubenswrapper[4681]: I1123 07:57:08.073620 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8446865c9c-85t9f" event={"ID":"6a2cd6a8-e146-4a72-a522-debbf8b61731","Type":"ContainerStarted","Data":"d2ce3bcb4a92e86f827d4c5d87ff1fed790729428a181a696cdb2bac550c8b21"} Nov 23 07:57:08 crc kubenswrapper[4681]: I1123 07:57:08.073738 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8446865c9c-85t9f" Nov 23 07:57:08 crc kubenswrapper[4681]: I1123 07:57:08.103292 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8446865c9c-85t9f" podStartSLOduration=2.103273076 podStartE2EDuration="2.103273076s" podCreationTimestamp="2025-11-23 07:57:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:57:08.094664306 +0000 UTC m=+4365.164173544" watchObservedRunningTime="2025-11-23 07:57:08.103273076 +0000 UTC m=+4365.172782303" Nov 23 07:57:14 crc kubenswrapper[4681]: I1123 07:57:14.252349 4681 scope.go:117] "RemoveContainer" containerID="8ecb71e0782ffdd11df2420ddd61c63edf18bad75a2e31f833a8ec36c1f22137" Nov 23 07:57:14 crc kubenswrapper[4681]: E1123 07:57:14.253412 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:57:28 crc kubenswrapper[4681]: I1123 07:57:28.251670 4681 scope.go:117] "RemoveContainer" containerID="8ecb71e0782ffdd11df2420ddd61c63edf18bad75a2e31f833a8ec36c1f22137" Nov 23 07:57:28 crc kubenswrapper[4681]: E1123 07:57:28.252693 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:57:36 crc kubenswrapper[4681]: I1123 07:57:36.426038 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-8446865c9c-85t9f" Nov 23 07:57:36 crc kubenswrapper[4681]: I1123 07:57:36.498576 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7dd5999bb7-tlr49"] Nov 23 07:57:36 crc kubenswrapper[4681]: I1123 07:57:36.498891 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7dd5999bb7-tlr49" podUID="32e94a1b-a08e-4fa2-ae50-f74e280addff" containerName="neutron-api" containerID="cri-o://c074d8a970cf6ae87904f279d3935e8e4bca7627af35ee49d23f48f2289a6c5a" gracePeriod=30 Nov 23 07:57:36 crc kubenswrapper[4681]: I1123 07:57:36.499105 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7dd5999bb7-tlr49" podUID="32e94a1b-a08e-4fa2-ae50-f74e280addff" containerName="neutron-httpd" containerID="cri-o://2dfc8a83952b4c2521f3dac8e0c9e2bd21f6a85472177432548831f3f1d031ec" gracePeriod=30 Nov 23 07:57:37 crc kubenswrapper[4681]: I1123 07:57:37.339166 4681 generic.go:334] "Generic (PLEG): container finished" podID="32e94a1b-a08e-4fa2-ae50-f74e280addff" containerID="2dfc8a83952b4c2521f3dac8e0c9e2bd21f6a85472177432548831f3f1d031ec" exitCode=0 Nov 23 07:57:37 crc kubenswrapper[4681]: I1123 07:57:37.339246 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7dd5999bb7-tlr49" event={"ID":"32e94a1b-a08e-4fa2-ae50-f74e280addff","Type":"ContainerDied","Data":"2dfc8a83952b4c2521f3dac8e0c9e2bd21f6a85472177432548831f3f1d031ec"} Nov 23 07:57:43 crc kubenswrapper[4681]: I1123 07:57:43.257336 4681 scope.go:117] "RemoveContainer" containerID="8ecb71e0782ffdd11df2420ddd61c63edf18bad75a2e31f833a8ec36c1f22137" Nov 23 07:57:43 crc kubenswrapper[4681]: E1123 07:57:43.258196 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:57:48 crc kubenswrapper[4681]: I1123 07:57:48.088352 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7dd5999bb7-tlr49" Nov 23 07:57:48 crc kubenswrapper[4681]: I1123 07:57:48.211571 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/32e94a1b-a08e-4fa2-ae50-f74e280addff-httpd-config\") pod \"32e94a1b-a08e-4fa2-ae50-f74e280addff\" (UID: \"32e94a1b-a08e-4fa2-ae50-f74e280addff\") " Nov 23 07:57:48 crc kubenswrapper[4681]: I1123 07:57:48.211825 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/32e94a1b-a08e-4fa2-ae50-f74e280addff-ovndb-tls-certs\") pod \"32e94a1b-a08e-4fa2-ae50-f74e280addff\" (UID: \"32e94a1b-a08e-4fa2-ae50-f74e280addff\") " Nov 23 07:57:48 crc kubenswrapper[4681]: I1123 07:57:48.211858 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/32e94a1b-a08e-4fa2-ae50-f74e280addff-config\") pod \"32e94a1b-a08e-4fa2-ae50-f74e280addff\" (UID: \"32e94a1b-a08e-4fa2-ae50-f74e280addff\") " Nov 23 07:57:48 crc kubenswrapper[4681]: I1123 07:57:48.211893 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/32e94a1b-a08e-4fa2-ae50-f74e280addff-internal-tls-certs\") pod \"32e94a1b-a08e-4fa2-ae50-f74e280addff\" (UID: \"32e94a1b-a08e-4fa2-ae50-f74e280addff\") " Nov 23 07:57:48 crc kubenswrapper[4681]: I1123 07:57:48.211915 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32e94a1b-a08e-4fa2-ae50-f74e280addff-public-tls-certs\") pod \"32e94a1b-a08e-4fa2-ae50-f74e280addff\" (UID: \"32e94a1b-a08e-4fa2-ae50-f74e280addff\") " Nov 23 07:57:48 crc kubenswrapper[4681]: I1123 07:57:48.211990 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32e94a1b-a08e-4fa2-ae50-f74e280addff-combined-ca-bundle\") pod \"32e94a1b-a08e-4fa2-ae50-f74e280addff\" (UID: \"32e94a1b-a08e-4fa2-ae50-f74e280addff\") " Nov 23 07:57:48 crc kubenswrapper[4681]: I1123 07:57:48.212128 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nnxr\" (UniqueName: \"kubernetes.io/projected/32e94a1b-a08e-4fa2-ae50-f74e280addff-kube-api-access-4nnxr\") pod \"32e94a1b-a08e-4fa2-ae50-f74e280addff\" (UID: \"32e94a1b-a08e-4fa2-ae50-f74e280addff\") " Nov 23 07:57:48 crc kubenswrapper[4681]: I1123 07:57:48.220723 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32e94a1b-a08e-4fa2-ae50-f74e280addff-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "32e94a1b-a08e-4fa2-ae50-f74e280addff" (UID: "32e94a1b-a08e-4fa2-ae50-f74e280addff"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:57:48 crc kubenswrapper[4681]: I1123 07:57:48.235155 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32e94a1b-a08e-4fa2-ae50-f74e280addff-kube-api-access-4nnxr" (OuterVolumeSpecName: "kube-api-access-4nnxr") pod "32e94a1b-a08e-4fa2-ae50-f74e280addff" (UID: "32e94a1b-a08e-4fa2-ae50-f74e280addff"). InnerVolumeSpecName "kube-api-access-4nnxr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:57:48 crc kubenswrapper[4681]: I1123 07:57:48.262231 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32e94a1b-a08e-4fa2-ae50-f74e280addff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32e94a1b-a08e-4fa2-ae50-f74e280addff" (UID: "32e94a1b-a08e-4fa2-ae50-f74e280addff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:57:48 crc kubenswrapper[4681]: I1123 07:57:48.272678 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32e94a1b-a08e-4fa2-ae50-f74e280addff-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "32e94a1b-a08e-4fa2-ae50-f74e280addff" (UID: "32e94a1b-a08e-4fa2-ae50-f74e280addff"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:57:48 crc kubenswrapper[4681]: I1123 07:57:48.276721 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32e94a1b-a08e-4fa2-ae50-f74e280addff-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "32e94a1b-a08e-4fa2-ae50-f74e280addff" (UID: "32e94a1b-a08e-4fa2-ae50-f74e280addff"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:57:48 crc kubenswrapper[4681]: I1123 07:57:48.281093 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32e94a1b-a08e-4fa2-ae50-f74e280addff-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "32e94a1b-a08e-4fa2-ae50-f74e280addff" (UID: "32e94a1b-a08e-4fa2-ae50-f74e280addff"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:57:48 crc kubenswrapper[4681]: I1123 07:57:48.283259 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32e94a1b-a08e-4fa2-ae50-f74e280addff-config" (OuterVolumeSpecName: "config") pod "32e94a1b-a08e-4fa2-ae50-f74e280addff" (UID: "32e94a1b-a08e-4fa2-ae50-f74e280addff"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:57:48 crc kubenswrapper[4681]: I1123 07:57:48.315290 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nnxr\" (UniqueName: \"kubernetes.io/projected/32e94a1b-a08e-4fa2-ae50-f74e280addff-kube-api-access-4nnxr\") on node \"crc\" DevicePath \"\"" Nov 23 07:57:48 crc kubenswrapper[4681]: I1123 07:57:48.315324 4681 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/32e94a1b-a08e-4fa2-ae50-f74e280addff-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:57:48 crc kubenswrapper[4681]: I1123 07:57:48.315340 4681 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/32e94a1b-a08e-4fa2-ae50-f74e280addff-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:57:48 crc kubenswrapper[4681]: I1123 07:57:48.315350 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/32e94a1b-a08e-4fa2-ae50-f74e280addff-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:57:48 crc kubenswrapper[4681]: I1123 07:57:48.315362 4681 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/32e94a1b-a08e-4fa2-ae50-f74e280addff-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:57:48 crc kubenswrapper[4681]: I1123 07:57:48.315377 4681 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32e94a1b-a08e-4fa2-ae50-f74e280addff-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:57:48 crc kubenswrapper[4681]: I1123 07:57:48.315386 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32e94a1b-a08e-4fa2-ae50-f74e280addff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:57:48 crc kubenswrapper[4681]: I1123 07:57:48.453265 4681 generic.go:334] "Generic (PLEG): container finished" podID="32e94a1b-a08e-4fa2-ae50-f74e280addff" containerID="c074d8a970cf6ae87904f279d3935e8e4bca7627af35ee49d23f48f2289a6c5a" exitCode=0 Nov 23 07:57:48 crc kubenswrapper[4681]: I1123 07:57:48.453319 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7dd5999bb7-tlr49" event={"ID":"32e94a1b-a08e-4fa2-ae50-f74e280addff","Type":"ContainerDied","Data":"c074d8a970cf6ae87904f279d3935e8e4bca7627af35ee49d23f48f2289a6c5a"} Nov 23 07:57:48 crc kubenswrapper[4681]: I1123 07:57:48.453390 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7dd5999bb7-tlr49" event={"ID":"32e94a1b-a08e-4fa2-ae50-f74e280addff","Type":"ContainerDied","Data":"26592f75b68b54779c10bc3b4fc0c51752147f6c6eee5f694e9f2a7ccbf62030"} Nov 23 07:57:48 crc kubenswrapper[4681]: I1123 07:57:48.453390 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7dd5999bb7-tlr49" Nov 23 07:57:48 crc kubenswrapper[4681]: I1123 07:57:48.453418 4681 scope.go:117] "RemoveContainer" containerID="2dfc8a83952b4c2521f3dac8e0c9e2bd21f6a85472177432548831f3f1d031ec" Nov 23 07:57:48 crc kubenswrapper[4681]: I1123 07:57:48.487964 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7dd5999bb7-tlr49"] Nov 23 07:57:48 crc kubenswrapper[4681]: I1123 07:57:48.493825 4681 scope.go:117] "RemoveContainer" containerID="c074d8a970cf6ae87904f279d3935e8e4bca7627af35ee49d23f48f2289a6c5a" Nov 23 07:57:48 crc kubenswrapper[4681]: I1123 07:57:48.497143 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7dd5999bb7-tlr49"] Nov 23 07:57:48 crc kubenswrapper[4681]: I1123 07:57:48.518804 4681 scope.go:117] "RemoveContainer" containerID="2dfc8a83952b4c2521f3dac8e0c9e2bd21f6a85472177432548831f3f1d031ec" Nov 23 07:57:48 crc kubenswrapper[4681]: E1123 07:57:48.519209 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dfc8a83952b4c2521f3dac8e0c9e2bd21f6a85472177432548831f3f1d031ec\": container with ID starting with 2dfc8a83952b4c2521f3dac8e0c9e2bd21f6a85472177432548831f3f1d031ec not found: ID does not exist" containerID="2dfc8a83952b4c2521f3dac8e0c9e2bd21f6a85472177432548831f3f1d031ec" Nov 23 07:57:48 crc kubenswrapper[4681]: I1123 07:57:48.519252 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dfc8a83952b4c2521f3dac8e0c9e2bd21f6a85472177432548831f3f1d031ec"} err="failed to get container status \"2dfc8a83952b4c2521f3dac8e0c9e2bd21f6a85472177432548831f3f1d031ec\": rpc error: code = NotFound desc = could not find container \"2dfc8a83952b4c2521f3dac8e0c9e2bd21f6a85472177432548831f3f1d031ec\": container with ID starting with 2dfc8a83952b4c2521f3dac8e0c9e2bd21f6a85472177432548831f3f1d031ec not found: ID does not exist" Nov 23 07:57:48 crc kubenswrapper[4681]: I1123 07:57:48.519282 4681 scope.go:117] "RemoveContainer" containerID="c074d8a970cf6ae87904f279d3935e8e4bca7627af35ee49d23f48f2289a6c5a" Nov 23 07:57:48 crc kubenswrapper[4681]: E1123 07:57:48.519667 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c074d8a970cf6ae87904f279d3935e8e4bca7627af35ee49d23f48f2289a6c5a\": container with ID starting with c074d8a970cf6ae87904f279d3935e8e4bca7627af35ee49d23f48f2289a6c5a not found: ID does not exist" containerID="c074d8a970cf6ae87904f279d3935e8e4bca7627af35ee49d23f48f2289a6c5a" Nov 23 07:57:48 crc kubenswrapper[4681]: I1123 07:57:48.519696 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c074d8a970cf6ae87904f279d3935e8e4bca7627af35ee49d23f48f2289a6c5a"} err="failed to get container status \"c074d8a970cf6ae87904f279d3935e8e4bca7627af35ee49d23f48f2289a6c5a\": rpc error: code = NotFound desc = could not find container \"c074d8a970cf6ae87904f279d3935e8e4bca7627af35ee49d23f48f2289a6c5a\": container with ID starting with c074d8a970cf6ae87904f279d3935e8e4bca7627af35ee49d23f48f2289a6c5a not found: ID does not exist" Nov 23 07:57:49 crc kubenswrapper[4681]: I1123 07:57:49.260734 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32e94a1b-a08e-4fa2-ae50-f74e280addff" path="/var/lib/kubelet/pods/32e94a1b-a08e-4fa2-ae50-f74e280addff/volumes" Nov 23 07:57:55 crc kubenswrapper[4681]: I1123 07:57:55.252820 4681 scope.go:117] 
"RemoveContainer" containerID="8ecb71e0782ffdd11df2420ddd61c63edf18bad75a2e31f833a8ec36c1f22137" Nov 23 07:57:55 crc kubenswrapper[4681]: E1123 07:57:55.253812 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:58:10 crc kubenswrapper[4681]: I1123 07:58:10.252359 4681 scope.go:117] "RemoveContainer" containerID="8ecb71e0782ffdd11df2420ddd61c63edf18bad75a2e31f833a8ec36c1f22137" Nov 23 07:58:10 crc kubenswrapper[4681]: E1123 07:58:10.253586 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:58:25 crc kubenswrapper[4681]: I1123 07:58:25.251883 4681 scope.go:117] "RemoveContainer" containerID="8ecb71e0782ffdd11df2420ddd61c63edf18bad75a2e31f833a8ec36c1f22137" Nov 23 07:58:25 crc kubenswrapper[4681]: E1123 07:58:25.252828 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:58:38 crc kubenswrapper[4681]: I1123 07:58:38.252935 4681 scope.go:117] "RemoveContainer" containerID="8ecb71e0782ffdd11df2420ddd61c63edf18bad75a2e31f833a8ec36c1f22137" Nov 23 07:58:38 crc kubenswrapper[4681]: E1123 07:58:38.253776 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:58:47 crc kubenswrapper[4681]: I1123 07:58:47.491081 4681 scope.go:117] "RemoveContainer" containerID="a5cb03f7f30b745295c17444515db0632134676f0f37a7727390e84cba235f10" Nov 23 07:58:47 crc kubenswrapper[4681]: I1123 07:58:47.510633 4681 scope.go:117] "RemoveContainer" containerID="a298d547b3a594cb2eaa62be791a23059242676bbc4cde660482783e2ee0c164" Nov 23 07:58:47 crc kubenswrapper[4681]: I1123 07:58:47.526332 4681 scope.go:117] "RemoveContainer" containerID="7e8c4da544c148077285531cd24a69232aacae79b69e5f9f920a85b106985150" Nov 23 07:58:53 crc kubenswrapper[4681]: I1123 07:58:53.257712 4681 scope.go:117] "RemoveContainer" containerID="8ecb71e0782ffdd11df2420ddd61c63edf18bad75a2e31f833a8ec36c1f22137" Nov 23 07:58:53 crc kubenswrapper[4681]: E1123 07:58:53.258379 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:59:04 crc kubenswrapper[4681]: I1123 07:59:04.252122 4681 scope.go:117] "RemoveContainer" containerID="8ecb71e0782ffdd11df2420ddd61c63edf18bad75a2e31f833a8ec36c1f22137" Nov 23 07:59:04 crc kubenswrapper[4681]: E1123 07:59:04.252780 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:59:18 crc kubenswrapper[4681]: I1123 07:59:18.252755 4681 scope.go:117] "RemoveContainer" containerID="8ecb71e0782ffdd11df2420ddd61c63edf18bad75a2e31f833a8ec36c1f22137" Nov 23 07:59:18 crc kubenswrapper[4681]: E1123 07:59:18.253756 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:59:29 crc kubenswrapper[4681]: I1123 07:59:29.252151 4681 scope.go:117] "RemoveContainer" containerID="8ecb71e0782ffdd11df2420ddd61c63edf18bad75a2e31f833a8ec36c1f22137" Nov 23 07:59:29 crc kubenswrapper[4681]: E1123 07:59:29.253167 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:59:43 crc kubenswrapper[4681]: I1123 07:59:43.256427 4681 scope.go:117] "RemoveContainer" containerID="8ecb71e0782ffdd11df2420ddd61c63edf18bad75a2e31f833a8ec36c1f22137" Nov 23 07:59:43 crc kubenswrapper[4681]: E1123 07:59:43.257034 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 07:59:55 crc kubenswrapper[4681]: I1123 07:59:55.252353 4681 scope.go:117] "RemoveContainer" containerID="8ecb71e0782ffdd11df2420ddd61c63edf18bad75a2e31f833a8ec36c1f22137" Nov 23 07:59:55 crc kubenswrapper[4681]: E1123 07:59:55.252937 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:00:00 crc kubenswrapper[4681]: I1123 08:00:00.141305 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398080-z29fc"] Nov 23 08:00:00 crc kubenswrapper[4681]: E1123 08:00:00.142815 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32e94a1b-a08e-4fa2-ae50-f74e280addff" containerName="neutron-api" Nov 23 08:00:00 crc kubenswrapper[4681]: I1123 08:00:00.143027 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="32e94a1b-a08e-4fa2-ae50-f74e280addff" containerName="neutron-api" Nov 23 08:00:00 crc kubenswrapper[4681]: E1123 08:00:00.143104 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32e94a1b-a08e-4fa2-ae50-f74e280addff" containerName="neutron-httpd" Nov 23 08:00:00 crc kubenswrapper[4681]: I1123 08:00:00.143157 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="32e94a1b-a08e-4fa2-ae50-f74e280addff" containerName="neutron-httpd" Nov 23 08:00:00 crc kubenswrapper[4681]: I1123 08:00:00.143363 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="32e94a1b-a08e-4fa2-ae50-f74e280addff" containerName="neutron-api" Nov 23 08:00:00 crc kubenswrapper[4681]: I1123 08:00:00.143439 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="32e94a1b-a08e-4fa2-ae50-f74e280addff" containerName="neutron-httpd" Nov 23 08:00:00 crc kubenswrapper[4681]: I1123 08:00:00.144073 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-z29fc" Nov 23 08:00:00 crc kubenswrapper[4681]: I1123 08:00:00.146352 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 08:00:00 crc kubenswrapper[4681]: I1123 08:00:00.150335 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 08:00:00 crc kubenswrapper[4681]: I1123 08:00:00.152667 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398080-z29fc"] Nov 23 08:00:00 crc kubenswrapper[4681]: I1123 08:00:00.303013 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/301175bb-1bb6-45ec-99b2-23bd3f390cfb-secret-volume\") pod \"collect-profiles-29398080-z29fc\" (UID: \"301175bb-1bb6-45ec-99b2-23bd3f390cfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-z29fc" Nov 23 08:00:00 crc kubenswrapper[4681]: I1123 08:00:00.303051 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/301175bb-1bb6-45ec-99b2-23bd3f390cfb-config-volume\") pod \"collect-profiles-29398080-z29fc\" (UID: \"301175bb-1bb6-45ec-99b2-23bd3f390cfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-z29fc" Nov 23 08:00:00 crc kubenswrapper[4681]: I1123 08:00:00.303100 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcjhl\" (UniqueName: \"kubernetes.io/projected/301175bb-1bb6-45ec-99b2-23bd3f390cfb-kube-api-access-kcjhl\") pod \"collect-profiles-29398080-z29fc\" (UID: \"301175bb-1bb6-45ec-99b2-23bd3f390cfb\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-z29fc" Nov 23 08:00:00 crc kubenswrapper[4681]: I1123 08:00:00.404584 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/301175bb-1bb6-45ec-99b2-23bd3f390cfb-secret-volume\") pod \"collect-profiles-29398080-z29fc\" (UID: \"301175bb-1bb6-45ec-99b2-23bd3f390cfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-z29fc" Nov 23 08:00:00 crc kubenswrapper[4681]: I1123 08:00:00.404646 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/301175bb-1bb6-45ec-99b2-23bd3f390cfb-config-volume\") pod \"collect-profiles-29398080-z29fc\" (UID: \"301175bb-1bb6-45ec-99b2-23bd3f390cfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-z29fc" Nov 23 08:00:00 crc kubenswrapper[4681]: I1123 08:00:00.404700 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcjhl\" (UniqueName: \"kubernetes.io/projected/301175bb-1bb6-45ec-99b2-23bd3f390cfb-kube-api-access-kcjhl\") pod \"collect-profiles-29398080-z29fc\" (UID: \"301175bb-1bb6-45ec-99b2-23bd3f390cfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-z29fc" Nov 23 08:00:00 crc kubenswrapper[4681]: I1123 08:00:00.405902 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/301175bb-1bb6-45ec-99b2-23bd3f390cfb-config-volume\") pod \"collect-profiles-29398080-z29fc\" (UID: \"301175bb-1bb6-45ec-99b2-23bd3f390cfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-z29fc" Nov 23 08:00:00 crc kubenswrapper[4681]: I1123 08:00:00.411208 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/301175bb-1bb6-45ec-99b2-23bd3f390cfb-secret-volume\") pod \"collect-profiles-29398080-z29fc\" (UID: \"301175bb-1bb6-45ec-99b2-23bd3f390cfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-z29fc" Nov 23 08:00:00 crc kubenswrapper[4681]: I1123 08:00:00.418905 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcjhl\" (UniqueName: \"kubernetes.io/projected/301175bb-1bb6-45ec-99b2-23bd3f390cfb-kube-api-access-kcjhl\") pod \"collect-profiles-29398080-z29fc\" (UID: \"301175bb-1bb6-45ec-99b2-23bd3f390cfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-z29fc" Nov 23 08:00:00 crc kubenswrapper[4681]: I1123 08:00:00.460373 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-z29fc" Nov 23 08:00:00 crc kubenswrapper[4681]: I1123 08:00:00.850078 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398080-z29fc"] Nov 23 08:00:01 crc kubenswrapper[4681]: I1123 08:00:01.543729 4681 generic.go:334] "Generic (PLEG): container finished" podID="301175bb-1bb6-45ec-99b2-23bd3f390cfb" containerID="e2d181b9b99e5eeef3a6c8e47fc55f3c833b215435b45520612b4b98ff58d9c4" exitCode=0 Nov 23 08:00:01 crc kubenswrapper[4681]: I1123 08:00:01.543816 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-z29fc" event={"ID":"301175bb-1bb6-45ec-99b2-23bd3f390cfb","Type":"ContainerDied","Data":"e2d181b9b99e5eeef3a6c8e47fc55f3c833b215435b45520612b4b98ff58d9c4"} Nov 23 08:00:01 crc kubenswrapper[4681]: I1123 08:00:01.543946 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-z29fc" event={"ID":"301175bb-1bb6-45ec-99b2-23bd3f390cfb","Type":"ContainerStarted","Data":"2012b09ecbc6df377c4231d9cf65c07cee4f88569c856d173ccd1f1698d77f48"} Nov 23 08:00:02 crc kubenswrapper[4681]: I1123 08:00:02.847583 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-z29fc" Nov 23 08:00:02 crc kubenswrapper[4681]: I1123 08:00:02.948929 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/301175bb-1bb6-45ec-99b2-23bd3f390cfb-secret-volume\") pod \"301175bb-1bb6-45ec-99b2-23bd3f390cfb\" (UID: \"301175bb-1bb6-45ec-99b2-23bd3f390cfb\") " Nov 23 08:00:02 crc kubenswrapper[4681]: I1123 08:00:02.949182 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcjhl\" (UniqueName: \"kubernetes.io/projected/301175bb-1bb6-45ec-99b2-23bd3f390cfb-kube-api-access-kcjhl\") pod \"301175bb-1bb6-45ec-99b2-23bd3f390cfb\" (UID: \"301175bb-1bb6-45ec-99b2-23bd3f390cfb\") " Nov 23 08:00:02 crc kubenswrapper[4681]: I1123 08:00:02.949289 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/301175bb-1bb6-45ec-99b2-23bd3f390cfb-config-volume\") pod \"301175bb-1bb6-45ec-99b2-23bd3f390cfb\" (UID: \"301175bb-1bb6-45ec-99b2-23bd3f390cfb\") " Nov 23 08:00:02 crc kubenswrapper[4681]: I1123 08:00:02.950256 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/301175bb-1bb6-45ec-99b2-23bd3f390cfb-config-volume" (OuterVolumeSpecName: "config-volume") pod "301175bb-1bb6-45ec-99b2-23bd3f390cfb" (UID: "301175bb-1bb6-45ec-99b2-23bd3f390cfb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:00:02 crc kubenswrapper[4681]: I1123 08:00:02.953665 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301175bb-1bb6-45ec-99b2-23bd3f390cfb-kube-api-access-kcjhl" (OuterVolumeSpecName: "kube-api-access-kcjhl") pod "301175bb-1bb6-45ec-99b2-23bd3f390cfb" (UID: "301175bb-1bb6-45ec-99b2-23bd3f390cfb"). InnerVolumeSpecName "kube-api-access-kcjhl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:00:02 crc kubenswrapper[4681]: I1123 08:00:02.954795 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301175bb-1bb6-45ec-99b2-23bd3f390cfb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "301175bb-1bb6-45ec-99b2-23bd3f390cfb" (UID: "301175bb-1bb6-45ec-99b2-23bd3f390cfb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:00:03 crc kubenswrapper[4681]: I1123 08:00:03.051954 4681 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/301175bb-1bb6-45ec-99b2-23bd3f390cfb-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 08:00:03 crc kubenswrapper[4681]: I1123 08:00:03.052115 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcjhl\" (UniqueName: \"kubernetes.io/projected/301175bb-1bb6-45ec-99b2-23bd3f390cfb-kube-api-access-kcjhl\") on node \"crc\" DevicePath \"\"" Nov 23 08:00:03 crc kubenswrapper[4681]: I1123 08:00:03.052174 4681 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/301175bb-1bb6-45ec-99b2-23bd3f390cfb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 08:00:03 crc kubenswrapper[4681]: I1123 08:00:03.558143 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-z29fc" event={"ID":"301175bb-1bb6-45ec-99b2-23bd3f390cfb","Type":"ContainerDied","Data":"2012b09ecbc6df377c4231d9cf65c07cee4f88569c856d173ccd1f1698d77f48"} Nov 23 08:00:03 crc kubenswrapper[4681]: I1123 08:00:03.558326 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2012b09ecbc6df377c4231d9cf65c07cee4f88569c856d173ccd1f1698d77f48" Nov 23 08:00:03 crc kubenswrapper[4681]: I1123 08:00:03.558197 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-z29fc" Nov 23 08:00:03 crc kubenswrapper[4681]: I1123 08:00:03.908366 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398035-g8rvd"] Nov 23 08:00:03 crc kubenswrapper[4681]: I1123 08:00:03.914281 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398035-g8rvd"] Nov 23 08:00:05 crc kubenswrapper[4681]: I1123 08:00:05.261644 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37bcdd31-b53b-4450-9d03-3ff00ed926f7" path="/var/lib/kubelet/pods/37bcdd31-b53b-4450-9d03-3ff00ed926f7/volumes" Nov 23 08:00:06 crc kubenswrapper[4681]: I1123 08:00:06.252530 4681 scope.go:117] "RemoveContainer" containerID="8ecb71e0782ffdd11df2420ddd61c63edf18bad75a2e31f833a8ec36c1f22137" Nov 23 08:00:06 crc kubenswrapper[4681]: E1123 08:00:06.252936 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:00:17 crc kubenswrapper[4681]: I1123 08:00:17.252031 4681 scope.go:117] "RemoveContainer" containerID="8ecb71e0782ffdd11df2420ddd61c63edf18bad75a2e31f833a8ec36c1f22137" Nov 23 08:00:17 crc kubenswrapper[4681]: E1123 08:00:17.252801 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:00:28 crc kubenswrapper[4681]: I1123 08:00:28.252082 4681 scope.go:117] "RemoveContainer" containerID="8ecb71e0782ffdd11df2420ddd61c63edf18bad75a2e31f833a8ec36c1f22137" Nov 23 08:00:28 crc kubenswrapper[4681]: E1123 08:00:28.252656 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:00:40 crc kubenswrapper[4681]: I1123 08:00:40.252505 4681 scope.go:117] "RemoveContainer" containerID="8ecb71e0782ffdd11df2420ddd61c63edf18bad75a2e31f833a8ec36c1f22137" Nov 23 08:00:40 crc kubenswrapper[4681]: E1123 08:00:40.253058 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:00:47 crc kubenswrapper[4681]: I1123 08:00:47.611917 4681 scope.go:117] "RemoveContainer" 
containerID="90cd91c064fd86bafa8c6a5225439ab396dcd9188adefcef8c8b3b5feb42594f" Nov 23 08:00:52 crc kubenswrapper[4681]: I1123 08:00:52.251642 4681 scope.go:117] "RemoveContainer" containerID="8ecb71e0782ffdd11df2420ddd61c63edf18bad75a2e31f833a8ec36c1f22137" Nov 23 08:00:52 crc kubenswrapper[4681]: I1123 08:00:52.928166 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerStarted","Data":"a868803893c24a99ca133b07873d005f27c84ed164c57f36e111486533a2a1a7"} Nov 23 08:01:00 crc kubenswrapper[4681]: I1123 08:01:00.134979 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29398081-4v8mh"] Nov 23 08:01:00 crc kubenswrapper[4681]: E1123 08:01:00.135802 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="301175bb-1bb6-45ec-99b2-23bd3f390cfb" containerName="collect-profiles" Nov 23 08:01:00 crc kubenswrapper[4681]: I1123 08:01:00.135813 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="301175bb-1bb6-45ec-99b2-23bd3f390cfb" containerName="collect-profiles" Nov 23 08:01:00 crc kubenswrapper[4681]: I1123 08:01:00.135972 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="301175bb-1bb6-45ec-99b2-23bd3f390cfb" containerName="collect-profiles" Nov 23 08:01:00 crc kubenswrapper[4681]: I1123 08:01:00.136510 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29398081-4v8mh" Nov 23 08:01:00 crc kubenswrapper[4681]: I1123 08:01:00.144665 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29398081-4v8mh"] Nov 23 08:01:00 crc kubenswrapper[4681]: I1123 08:01:00.322654 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b14ea88-47d6-4662-824c-241628fb8c5d-combined-ca-bundle\") pod \"keystone-cron-29398081-4v8mh\" (UID: \"3b14ea88-47d6-4662-824c-241628fb8c5d\") " pod="openstack/keystone-cron-29398081-4v8mh" Nov 23 08:01:00 crc kubenswrapper[4681]: I1123 08:01:00.322959 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b14ea88-47d6-4662-824c-241628fb8c5d-fernet-keys\") pod \"keystone-cron-29398081-4v8mh\" (UID: \"3b14ea88-47d6-4662-824c-241628fb8c5d\") " pod="openstack/keystone-cron-29398081-4v8mh" Nov 23 08:01:00 crc kubenswrapper[4681]: I1123 08:01:00.323020 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t666c\" (UniqueName: \"kubernetes.io/projected/3b14ea88-47d6-4662-824c-241628fb8c5d-kube-api-access-t666c\") pod \"keystone-cron-29398081-4v8mh\" (UID: \"3b14ea88-47d6-4662-824c-241628fb8c5d\") " pod="openstack/keystone-cron-29398081-4v8mh" Nov 23 08:01:00 crc kubenswrapper[4681]: I1123 08:01:00.323056 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b14ea88-47d6-4662-824c-241628fb8c5d-config-data\") pod \"keystone-cron-29398081-4v8mh\" (UID: \"3b14ea88-47d6-4662-824c-241628fb8c5d\") " pod="openstack/keystone-cron-29398081-4v8mh" Nov 23 08:01:00 crc kubenswrapper[4681]: I1123 08:01:00.424429 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t666c\" (UniqueName: 
\"kubernetes.io/projected/3b14ea88-47d6-4662-824c-241628fb8c5d-kube-api-access-t666c\") pod \"keystone-cron-29398081-4v8mh\" (UID: \"3b14ea88-47d6-4662-824c-241628fb8c5d\") " pod="openstack/keystone-cron-29398081-4v8mh" Nov 23 08:01:00 crc kubenswrapper[4681]: I1123 08:01:00.424487 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b14ea88-47d6-4662-824c-241628fb8c5d-config-data\") pod \"keystone-cron-29398081-4v8mh\" (UID: \"3b14ea88-47d6-4662-824c-241628fb8c5d\") " pod="openstack/keystone-cron-29398081-4v8mh" Nov 23 08:01:00 crc kubenswrapper[4681]: I1123 08:01:00.424537 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b14ea88-47d6-4662-824c-241628fb8c5d-combined-ca-bundle\") pod \"keystone-cron-29398081-4v8mh\" (UID: \"3b14ea88-47d6-4662-824c-241628fb8c5d\") " pod="openstack/keystone-cron-29398081-4v8mh" Nov 23 08:01:00 crc kubenswrapper[4681]: I1123 08:01:00.424685 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b14ea88-47d6-4662-824c-241628fb8c5d-fernet-keys\") pod \"keystone-cron-29398081-4v8mh\" (UID: \"3b14ea88-47d6-4662-824c-241628fb8c5d\") " pod="openstack/keystone-cron-29398081-4v8mh" Nov 23 08:01:00 crc kubenswrapper[4681]: I1123 08:01:00.430241 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b14ea88-47d6-4662-824c-241628fb8c5d-combined-ca-bundle\") pod \"keystone-cron-29398081-4v8mh\" (UID: \"3b14ea88-47d6-4662-824c-241628fb8c5d\") " pod="openstack/keystone-cron-29398081-4v8mh" Nov 23 08:01:00 crc kubenswrapper[4681]: I1123 08:01:00.430565 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b14ea88-47d6-4662-824c-241628fb8c5d-config-data\") pod \"keystone-cron-29398081-4v8mh\" (UID: \"3b14ea88-47d6-4662-824c-241628fb8c5d\") " pod="openstack/keystone-cron-29398081-4v8mh" Nov 23 08:01:00 crc kubenswrapper[4681]: I1123 08:01:00.431016 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b14ea88-47d6-4662-824c-241628fb8c5d-fernet-keys\") pod \"keystone-cron-29398081-4v8mh\" (UID: \"3b14ea88-47d6-4662-824c-241628fb8c5d\") " pod="openstack/keystone-cron-29398081-4v8mh" Nov 23 08:01:00 crc kubenswrapper[4681]: I1123 08:01:00.438208 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t666c\" (UniqueName: \"kubernetes.io/projected/3b14ea88-47d6-4662-824c-241628fb8c5d-kube-api-access-t666c\") pod \"keystone-cron-29398081-4v8mh\" (UID: \"3b14ea88-47d6-4662-824c-241628fb8c5d\") " pod="openstack/keystone-cron-29398081-4v8mh" Nov 23 08:01:00 crc kubenswrapper[4681]: I1123 08:01:00.450200 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29398081-4v8mh" Nov 23 08:01:00 crc kubenswrapper[4681]: I1123 08:01:00.855069 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29398081-4v8mh"] Nov 23 08:01:00 crc kubenswrapper[4681]: I1123 08:01:00.984964 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29398081-4v8mh" event={"ID":"3b14ea88-47d6-4662-824c-241628fb8c5d","Type":"ContainerStarted","Data":"7d46c35e1ace433d029430674bb2e408629fc96a56d4e7e77fab8c94a5e3221e"} Nov 23 08:01:01 crc kubenswrapper[4681]: I1123 08:01:01.992913 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29398081-4v8mh" event={"ID":"3b14ea88-47d6-4662-824c-241628fb8c5d","Type":"ContainerStarted","Data":"952156d91df9338082f65c307c238cad671e09f649f46abc3fac975a1666c03a"} Nov 23 08:01:02 crc kubenswrapper[4681]: I1123 08:01:02.009165 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29398081-4v8mh" podStartSLOduration=2.009151832 podStartE2EDuration="2.009151832s" podCreationTimestamp="2025-11-23 08:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:01:02.006756668 +0000 UTC m=+4599.076265905" watchObservedRunningTime="2025-11-23 08:01:02.009151832 +0000 UTC m=+4599.078661068" Nov 23 08:01:04 crc kubenswrapper[4681]: I1123 08:01:04.007051 4681 generic.go:334] "Generic (PLEG): container finished" podID="3b14ea88-47d6-4662-824c-241628fb8c5d" containerID="952156d91df9338082f65c307c238cad671e09f649f46abc3fac975a1666c03a" exitCode=0 Nov 23 08:01:04 crc kubenswrapper[4681]: I1123 08:01:04.007126 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29398081-4v8mh" event={"ID":"3b14ea88-47d6-4662-824c-241628fb8c5d","Type":"ContainerDied","Data":"952156d91df9338082f65c307c238cad671e09f649f46abc3fac975a1666c03a"} Nov 23 08:01:05 crc kubenswrapper[4681]: I1123 08:01:05.289063 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29398081-4v8mh" Nov 23 08:01:05 crc kubenswrapper[4681]: I1123 08:01:05.414698 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b14ea88-47d6-4662-824c-241628fb8c5d-combined-ca-bundle\") pod \"3b14ea88-47d6-4662-824c-241628fb8c5d\" (UID: \"3b14ea88-47d6-4662-824c-241628fb8c5d\") " Nov 23 08:01:05 crc kubenswrapper[4681]: I1123 08:01:05.414734 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t666c\" (UniqueName: \"kubernetes.io/projected/3b14ea88-47d6-4662-824c-241628fb8c5d-kube-api-access-t666c\") pod \"3b14ea88-47d6-4662-824c-241628fb8c5d\" (UID: \"3b14ea88-47d6-4662-824c-241628fb8c5d\") " Nov 23 08:01:05 crc kubenswrapper[4681]: I1123 08:01:05.414768 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b14ea88-47d6-4662-824c-241628fb8c5d-config-data\") pod \"3b14ea88-47d6-4662-824c-241628fb8c5d\" (UID: \"3b14ea88-47d6-4662-824c-241628fb8c5d\") " Nov 23 08:01:05 crc kubenswrapper[4681]: I1123 08:01:05.414798 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b14ea88-47d6-4662-824c-241628fb8c5d-fernet-keys\") pod \"3b14ea88-47d6-4662-824c-241628fb8c5d\" (UID: \"3b14ea88-47d6-4662-824c-241628fb8c5d\") " Nov 23 08:01:05 crc kubenswrapper[4681]: I1123 08:01:05.425261 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b14ea88-47d6-4662-824c-241628fb8c5d-kube-api-access-t666c" (OuterVolumeSpecName: "kube-api-access-t666c") pod "3b14ea88-47d6-4662-824c-241628fb8c5d" (UID: "3b14ea88-47d6-4662-824c-241628fb8c5d"). InnerVolumeSpecName "kube-api-access-t666c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:01:05 crc kubenswrapper[4681]: I1123 08:01:05.435054 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b14ea88-47d6-4662-824c-241628fb8c5d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3b14ea88-47d6-4662-824c-241628fb8c5d" (UID: "3b14ea88-47d6-4662-824c-241628fb8c5d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:01:05 crc kubenswrapper[4681]: I1123 08:01:05.437510 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b14ea88-47d6-4662-824c-241628fb8c5d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b14ea88-47d6-4662-824c-241628fb8c5d" (UID: "3b14ea88-47d6-4662-824c-241628fb8c5d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:01:05 crc kubenswrapper[4681]: I1123 08:01:05.476252 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b14ea88-47d6-4662-824c-241628fb8c5d-config-data" (OuterVolumeSpecName: "config-data") pod "3b14ea88-47d6-4662-824c-241628fb8c5d" (UID: "3b14ea88-47d6-4662-824c-241628fb8c5d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:01:05 crc kubenswrapper[4681]: I1123 08:01:05.517176 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b14ea88-47d6-4662-824c-241628fb8c5d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:01:05 crc kubenswrapper[4681]: I1123 08:01:05.517203 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t666c\" (UniqueName: \"kubernetes.io/projected/3b14ea88-47d6-4662-824c-241628fb8c5d-kube-api-access-t666c\") on node \"crc\" DevicePath \"\"" Nov 23 08:01:05 crc kubenswrapper[4681]: I1123 08:01:05.517217 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b14ea88-47d6-4662-824c-241628fb8c5d-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:01:05 crc kubenswrapper[4681]: I1123 08:01:05.517225 4681 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b14ea88-47d6-4662-824c-241628fb8c5d-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 23 08:01:06 crc kubenswrapper[4681]: I1123 08:01:06.024023 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29398081-4v8mh" event={"ID":"3b14ea88-47d6-4662-824c-241628fb8c5d","Type":"ContainerDied","Data":"7d46c35e1ace433d029430674bb2e408629fc96a56d4e7e77fab8c94a5e3221e"} Nov 23 08:01:06 crc kubenswrapper[4681]: I1123 08:01:06.024235 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29398081-4v8mh" Nov 23 08:01:06 crc kubenswrapper[4681]: I1123 08:01:06.024239 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d46c35e1ace433d029430674bb2e408629fc96a56d4e7e77fab8c94a5e3221e" Nov 23 08:02:39 crc kubenswrapper[4681]: I1123 08:02:39.716663 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6mm9g"] Nov 23 08:02:39 crc kubenswrapper[4681]: E1123 08:02:39.717346 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b14ea88-47d6-4662-824c-241628fb8c5d" containerName="keystone-cron" Nov 23 08:02:39 crc kubenswrapper[4681]: I1123 08:02:39.717359 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b14ea88-47d6-4662-824c-241628fb8c5d" containerName="keystone-cron" Nov 23 08:02:39 crc kubenswrapper[4681]: I1123 08:02:39.717525 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b14ea88-47d6-4662-824c-241628fb8c5d" containerName="keystone-cron" Nov 23 08:02:39 crc kubenswrapper[4681]: I1123 08:02:39.718653 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6mm9g" Nov 23 08:02:39 crc kubenswrapper[4681]: I1123 08:02:39.729543 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6mm9g"] Nov 23 08:02:39 crc kubenswrapper[4681]: I1123 08:02:39.886357 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b684cbf-5d28-42cb-a454-991479bbd898-utilities\") pod \"redhat-marketplace-6mm9g\" (UID: \"9b684cbf-5d28-42cb-a454-991479bbd898\") " pod="openshift-marketplace/redhat-marketplace-6mm9g" Nov 23 08:02:39 crc kubenswrapper[4681]: I1123 08:02:39.886673 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b684cbf-5d28-42cb-a454-991479bbd898-catalog-content\") pod \"redhat-marketplace-6mm9g\" (UID: \"9b684cbf-5d28-42cb-a454-991479bbd898\") " pod="openshift-marketplace/redhat-marketplace-6mm9g" Nov 23 08:02:39 crc kubenswrapper[4681]: I1123 08:02:39.886860 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf8sr\" (UniqueName: \"kubernetes.io/projected/9b684cbf-5d28-42cb-a454-991479bbd898-kube-api-access-kf8sr\") pod \"redhat-marketplace-6mm9g\" (UID: \"9b684cbf-5d28-42cb-a454-991479bbd898\") " pod="openshift-marketplace/redhat-marketplace-6mm9g" Nov 23 08:02:39 crc kubenswrapper[4681]: I1123 08:02:39.988564 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf8sr\" (UniqueName: \"kubernetes.io/projected/9b684cbf-5d28-42cb-a454-991479bbd898-kube-api-access-kf8sr\") pod \"redhat-marketplace-6mm9g\" (UID: \"9b684cbf-5d28-42cb-a454-991479bbd898\") " pod="openshift-marketplace/redhat-marketplace-6mm9g" Nov 23 08:02:39 crc kubenswrapper[4681]: I1123 08:02:39.988680 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b684cbf-5d28-42cb-a454-991479bbd898-utilities\") pod \"redhat-marketplace-6mm9g\" (UID: \"9b684cbf-5d28-42cb-a454-991479bbd898\") " pod="openshift-marketplace/redhat-marketplace-6mm9g" Nov 23 08:02:39 crc kubenswrapper[4681]: I1123 08:02:39.988723 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b684cbf-5d28-42cb-a454-991479bbd898-catalog-content\") pod \"redhat-marketplace-6mm9g\" (UID: \"9b684cbf-5d28-42cb-a454-991479bbd898\") " pod="openshift-marketplace/redhat-marketplace-6mm9g" Nov 23 08:02:39 crc kubenswrapper[4681]: I1123 08:02:39.989087 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b684cbf-5d28-42cb-a454-991479bbd898-utilities\") pod \"redhat-marketplace-6mm9g\" (UID: \"9b684cbf-5d28-42cb-a454-991479bbd898\") " pod="openshift-marketplace/redhat-marketplace-6mm9g" Nov 23 08:02:39 crc kubenswrapper[4681]: I1123 08:02:39.989109 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b684cbf-5d28-42cb-a454-991479bbd898-catalog-content\") pod \"redhat-marketplace-6mm9g\" (UID: \"9b684cbf-5d28-42cb-a454-991479bbd898\") " pod="openshift-marketplace/redhat-marketplace-6mm9g" Nov 23 08:02:40 crc kubenswrapper[4681]: I1123 08:02:40.192817 4681 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-kf8sr\" (UniqueName: \"kubernetes.io/projected/9b684cbf-5d28-42cb-a454-991479bbd898-kube-api-access-kf8sr\") pod \"redhat-marketplace-6mm9g\" (UID: \"9b684cbf-5d28-42cb-a454-991479bbd898\") " pod="openshift-marketplace/redhat-marketplace-6mm9g" Nov 23 08:02:40 crc kubenswrapper[4681]: I1123 08:02:40.332669 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6mm9g" Nov 23 08:02:40 crc kubenswrapper[4681]: I1123 08:02:40.772118 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6mm9g"] Nov 23 08:02:41 crc kubenswrapper[4681]: I1123 08:02:41.682438 4681 generic.go:334] "Generic (PLEG): container finished" podID="9b684cbf-5d28-42cb-a454-991479bbd898" containerID="30d7a94cbe1d6d3854faf78ce8305a0e881f699335c45defeef6cc3e8ab5969d" exitCode=0 Nov 23 08:02:41 crc kubenswrapper[4681]: I1123 08:02:41.682506 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mm9g" event={"ID":"9b684cbf-5d28-42cb-a454-991479bbd898","Type":"ContainerDied","Data":"30d7a94cbe1d6d3854faf78ce8305a0e881f699335c45defeef6cc3e8ab5969d"} Nov 23 08:02:41 crc kubenswrapper[4681]: I1123 08:02:41.682688 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mm9g" event={"ID":"9b684cbf-5d28-42cb-a454-991479bbd898","Type":"ContainerStarted","Data":"3c10552aa7ee371e350e44d9af218baa964c48eb529c5d8a063c32f2565fa8be"} Nov 23 08:02:41 crc kubenswrapper[4681]: I1123 08:02:41.684026 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 08:02:42 crc kubenswrapper[4681]: I1123 08:02:42.691668 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mm9g" event={"ID":"9b684cbf-5d28-42cb-a454-991479bbd898","Type":"ContainerStarted","Data":"636401aee1808aa94581b36e0c56795c8549117936bcbe7d2538ed17d40314f9"} Nov 23 08:02:43 crc kubenswrapper[4681]: I1123 08:02:43.699347 4681 generic.go:334] "Generic (PLEG): container finished" podID="9b684cbf-5d28-42cb-a454-991479bbd898" containerID="636401aee1808aa94581b36e0c56795c8549117936bcbe7d2538ed17d40314f9" exitCode=0 Nov 23 08:02:43 crc kubenswrapper[4681]: I1123 08:02:43.699388 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mm9g" event={"ID":"9b684cbf-5d28-42cb-a454-991479bbd898","Type":"ContainerDied","Data":"636401aee1808aa94581b36e0c56795c8549117936bcbe7d2538ed17d40314f9"} Nov 23 08:02:44 crc kubenswrapper[4681]: I1123 08:02:44.707973 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mm9g" event={"ID":"9b684cbf-5d28-42cb-a454-991479bbd898","Type":"ContainerStarted","Data":"de40a0959526ca03d45949ba5c41dd836aaf870bebfcf98c3bd287fc6943e161"} Nov 23 08:02:44 crc kubenswrapper[4681]: I1123 08:02:44.725588 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6mm9g" podStartSLOduration=3.236091812 podStartE2EDuration="5.725571s" podCreationTimestamp="2025-11-23 08:02:39 +0000 UTC" firstStartedPulling="2025-11-23 08:02:41.683798633 +0000 UTC m=+4698.753307870" lastFinishedPulling="2025-11-23 08:02:44.173277822 +0000 UTC m=+4701.242787058" observedRunningTime="2025-11-23 08:02:44.719350149 +0000 UTC m=+4701.788859387" watchObservedRunningTime="2025-11-23 08:02:44.725571 +0000 UTC 
m=+4701.795080236" Nov 23 08:02:50 crc kubenswrapper[4681]: I1123 08:02:50.333675 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6mm9g" Nov 23 08:02:50 crc kubenswrapper[4681]: I1123 08:02:50.334091 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6mm9g" Nov 23 08:02:50 crc kubenswrapper[4681]: I1123 08:02:50.429316 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6mm9g" Nov 23 08:02:50 crc kubenswrapper[4681]: I1123 08:02:50.792766 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6mm9g" Nov 23 08:02:50 crc kubenswrapper[4681]: I1123 08:02:50.838438 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6mm9g"] Nov 23 08:02:52 crc kubenswrapper[4681]: I1123 08:02:52.765447 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6mm9g" podUID="9b684cbf-5d28-42cb-a454-991479bbd898" containerName="registry-server" containerID="cri-o://de40a0959526ca03d45949ba5c41dd836aaf870bebfcf98c3bd287fc6943e161" gracePeriod=2 Nov 23 08:02:53 crc kubenswrapper[4681]: I1123 08:02:53.220811 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6mm9g" Nov 23 08:02:53 crc kubenswrapper[4681]: I1123 08:02:53.243161 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b684cbf-5d28-42cb-a454-991479bbd898-catalog-content\") pod \"9b684cbf-5d28-42cb-a454-991479bbd898\" (UID: \"9b684cbf-5d28-42cb-a454-991479bbd898\") " Nov 23 08:02:53 crc kubenswrapper[4681]: I1123 08:02:53.243283 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b684cbf-5d28-42cb-a454-991479bbd898-utilities\") pod \"9b684cbf-5d28-42cb-a454-991479bbd898\" (UID: \"9b684cbf-5d28-42cb-a454-991479bbd898\") " Nov 23 08:02:53 crc kubenswrapper[4681]: I1123 08:02:53.243308 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kf8sr\" (UniqueName: \"kubernetes.io/projected/9b684cbf-5d28-42cb-a454-991479bbd898-kube-api-access-kf8sr\") pod \"9b684cbf-5d28-42cb-a454-991479bbd898\" (UID: \"9b684cbf-5d28-42cb-a454-991479bbd898\") " Nov 23 08:02:53 crc kubenswrapper[4681]: I1123 08:02:53.243958 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b684cbf-5d28-42cb-a454-991479bbd898-utilities" (OuterVolumeSpecName: "utilities") pod "9b684cbf-5d28-42cb-a454-991479bbd898" (UID: "9b684cbf-5d28-42cb-a454-991479bbd898"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:02:53 crc kubenswrapper[4681]: I1123 08:02:53.256453 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b684cbf-5d28-42cb-a454-991479bbd898-kube-api-access-kf8sr" (OuterVolumeSpecName: "kube-api-access-kf8sr") pod "9b684cbf-5d28-42cb-a454-991479bbd898" (UID: "9b684cbf-5d28-42cb-a454-991479bbd898"). InnerVolumeSpecName "kube-api-access-kf8sr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:02:53 crc kubenswrapper[4681]: I1123 08:02:53.256999 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b684cbf-5d28-42cb-a454-991479bbd898-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b684cbf-5d28-42cb-a454-991479bbd898" (UID: "9b684cbf-5d28-42cb-a454-991479bbd898"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:02:53 crc kubenswrapper[4681]: I1123 08:02:53.345990 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b684cbf-5d28-42cb-a454-991479bbd898-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:02:53 crc kubenswrapper[4681]: I1123 08:02:53.346024 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b684cbf-5d28-42cb-a454-991479bbd898-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:02:53 crc kubenswrapper[4681]: I1123 08:02:53.346035 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kf8sr\" (UniqueName: \"kubernetes.io/projected/9b684cbf-5d28-42cb-a454-991479bbd898-kube-api-access-kf8sr\") on node \"crc\" DevicePath \"\"" Nov 23 08:02:53 crc kubenswrapper[4681]: I1123 08:02:53.773649 4681 generic.go:334] "Generic (PLEG): container finished" podID="9b684cbf-5d28-42cb-a454-991479bbd898" containerID="de40a0959526ca03d45949ba5c41dd836aaf870bebfcf98c3bd287fc6943e161" exitCode=0 Nov 23 08:02:53 crc kubenswrapper[4681]: I1123 08:02:53.773842 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mm9g" event={"ID":"9b684cbf-5d28-42cb-a454-991479bbd898","Type":"ContainerDied","Data":"de40a0959526ca03d45949ba5c41dd836aaf870bebfcf98c3bd287fc6943e161"} Nov 23 08:02:53 crc kubenswrapper[4681]: I1123 08:02:53.774498 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mm9g" event={"ID":"9b684cbf-5d28-42cb-a454-991479bbd898","Type":"ContainerDied","Data":"3c10552aa7ee371e350e44d9af218baa964c48eb529c5d8a063c32f2565fa8be"} Nov 23 08:02:53 crc kubenswrapper[4681]: I1123 08:02:53.774528 4681 scope.go:117] "RemoveContainer" containerID="de40a0959526ca03d45949ba5c41dd836aaf870bebfcf98c3bd287fc6943e161" Nov 23 08:02:53 crc kubenswrapper[4681]: I1123 08:02:53.773924 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6mm9g" Nov 23 08:02:53 crc kubenswrapper[4681]: I1123 08:02:53.793028 4681 scope.go:117] "RemoveContainer" containerID="636401aee1808aa94581b36e0c56795c8549117936bcbe7d2538ed17d40314f9" Nov 23 08:02:53 crc kubenswrapper[4681]: I1123 08:02:53.795592 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6mm9g"] Nov 23 08:02:53 crc kubenswrapper[4681]: I1123 08:02:53.803107 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6mm9g"] Nov 23 08:02:53 crc kubenswrapper[4681]: I1123 08:02:53.811587 4681 scope.go:117] "RemoveContainer" containerID="30d7a94cbe1d6d3854faf78ce8305a0e881f699335c45defeef6cc3e8ab5969d" Nov 23 08:02:53 crc kubenswrapper[4681]: I1123 08:02:53.842394 4681 scope.go:117] "RemoveContainer" containerID="de40a0959526ca03d45949ba5c41dd836aaf870bebfcf98c3bd287fc6943e161" Nov 23 08:02:53 crc kubenswrapper[4681]: E1123 08:02:53.842727 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de40a0959526ca03d45949ba5c41dd836aaf870bebfcf98c3bd287fc6943e161\": container with ID starting with de40a0959526ca03d45949ba5c41dd836aaf870bebfcf98c3bd287fc6943e161 not found: ID does not exist" containerID="de40a0959526ca03d45949ba5c41dd836aaf870bebfcf98c3bd287fc6943e161" Nov 23 08:02:53 crc kubenswrapper[4681]: I1123 08:02:53.842766 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de40a0959526ca03d45949ba5c41dd836aaf870bebfcf98c3bd287fc6943e161"} err="failed to get container status \"de40a0959526ca03d45949ba5c41dd836aaf870bebfcf98c3bd287fc6943e161\": rpc error: code = NotFound desc = could not find container \"de40a0959526ca03d45949ba5c41dd836aaf870bebfcf98c3bd287fc6943e161\": container with ID starting with de40a0959526ca03d45949ba5c41dd836aaf870bebfcf98c3bd287fc6943e161 not found: ID does not exist" Nov 23 08:02:53 crc kubenswrapper[4681]: I1123 08:02:53.842793 4681 scope.go:117] "RemoveContainer" containerID="636401aee1808aa94581b36e0c56795c8549117936bcbe7d2538ed17d40314f9" Nov 23 08:02:53 crc kubenswrapper[4681]: E1123 08:02:53.843094 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"636401aee1808aa94581b36e0c56795c8549117936bcbe7d2538ed17d40314f9\": container with ID starting with 636401aee1808aa94581b36e0c56795c8549117936bcbe7d2538ed17d40314f9 not found: ID does not exist" containerID="636401aee1808aa94581b36e0c56795c8549117936bcbe7d2538ed17d40314f9" Nov 23 08:02:53 crc kubenswrapper[4681]: I1123 08:02:53.843224 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"636401aee1808aa94581b36e0c56795c8549117936bcbe7d2538ed17d40314f9"} err="failed to get container status \"636401aee1808aa94581b36e0c56795c8549117936bcbe7d2538ed17d40314f9\": rpc error: code = NotFound desc = could not find container \"636401aee1808aa94581b36e0c56795c8549117936bcbe7d2538ed17d40314f9\": container with ID starting with 636401aee1808aa94581b36e0c56795c8549117936bcbe7d2538ed17d40314f9 not found: ID does not exist" Nov 23 08:02:53 crc kubenswrapper[4681]: I1123 08:02:53.843905 4681 scope.go:117] "RemoveContainer" containerID="30d7a94cbe1d6d3854faf78ce8305a0e881f699335c45defeef6cc3e8ab5969d" Nov 23 08:02:53 crc kubenswrapper[4681]: E1123 08:02:53.844281 4681 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"30d7a94cbe1d6d3854faf78ce8305a0e881f699335c45defeef6cc3e8ab5969d\": container with ID starting with 30d7a94cbe1d6d3854faf78ce8305a0e881f699335c45defeef6cc3e8ab5969d not found: ID does not exist" containerID="30d7a94cbe1d6d3854faf78ce8305a0e881f699335c45defeef6cc3e8ab5969d" Nov 23 08:02:53 crc kubenswrapper[4681]: I1123 08:02:53.844315 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30d7a94cbe1d6d3854faf78ce8305a0e881f699335c45defeef6cc3e8ab5969d"} err="failed to get container status \"30d7a94cbe1d6d3854faf78ce8305a0e881f699335c45defeef6cc3e8ab5969d\": rpc error: code = NotFound desc = could not find container \"30d7a94cbe1d6d3854faf78ce8305a0e881f699335c45defeef6cc3e8ab5969d\": container with ID starting with 30d7a94cbe1d6d3854faf78ce8305a0e881f699335c45defeef6cc3e8ab5969d not found: ID does not exist" Nov 23 08:02:55 crc kubenswrapper[4681]: I1123 08:02:55.264098 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b684cbf-5d28-42cb-a454-991479bbd898" path="/var/lib/kubelet/pods/9b684cbf-5d28-42cb-a454-991479bbd898/volumes" Nov 23 08:03:12 crc kubenswrapper[4681]: I1123 08:03:12.295936 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:03:12 crc kubenswrapper[4681]: I1123 08:03:12.296511 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:03:42 crc kubenswrapper[4681]: I1123 08:03:42.295295 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:03:42 crc kubenswrapper[4681]: I1123 08:03:42.295681 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:04:12 crc kubenswrapper[4681]: I1123 08:04:12.296169 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:04:12 crc kubenswrapper[4681]: I1123 08:04:12.296653 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:04:12 crc kubenswrapper[4681]: I1123 08:04:12.296696 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" Nov 23 08:04:12 crc kubenswrapper[4681]: I1123 08:04:12.297341 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a868803893c24a99ca133b07873d005f27c84ed164c57f36e111486533a2a1a7"} pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 08:04:12 crc kubenswrapper[4681]: I1123 08:04:12.297391 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" containerID="cri-o://a868803893c24a99ca133b07873d005f27c84ed164c57f36e111486533a2a1a7" gracePeriod=600 Nov 23 08:04:13 crc kubenswrapper[4681]: I1123 08:04:13.333073 4681 generic.go:334] "Generic (PLEG): container finished" podID="539dc58c-e752-43c8-bdef-af87528b76f3" containerID="a868803893c24a99ca133b07873d005f27c84ed164c57f36e111486533a2a1a7" exitCode=0 Nov 23 08:04:13 crc kubenswrapper[4681]: I1123 08:04:13.333150 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerDied","Data":"a868803893c24a99ca133b07873d005f27c84ed164c57f36e111486533a2a1a7"} Nov 23 08:04:13 crc kubenswrapper[4681]: I1123 08:04:13.333420 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerStarted","Data":"6d7563356ec35cc7f255fa32e1554c261814b2cb897becc82645050ca40aae2f"} Nov 23 08:04:13 crc kubenswrapper[4681]: I1123 08:04:13.333440 4681 scope.go:117] "RemoveContainer" containerID="8ecb71e0782ffdd11df2420ddd61c63edf18bad75a2e31f833a8ec36c1f22137" Nov 23 08:05:13 crc kubenswrapper[4681]: I1123 08:05:13.886313 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l5c6z"] Nov 23 08:05:13 crc kubenswrapper[4681]: E1123 08:05:13.887090 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b684cbf-5d28-42cb-a454-991479bbd898" containerName="extract-utilities" Nov 23 08:05:13 crc kubenswrapper[4681]: I1123 08:05:13.887103 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b684cbf-5d28-42cb-a454-991479bbd898" containerName="extract-utilities" Nov 23 08:05:13 crc kubenswrapper[4681]: E1123 08:05:13.887118 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b684cbf-5d28-42cb-a454-991479bbd898" containerName="registry-server" Nov 23 08:05:13 crc kubenswrapper[4681]: I1123 08:05:13.887123 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b684cbf-5d28-42cb-a454-991479bbd898" containerName="registry-server" Nov 23 08:05:13 crc kubenswrapper[4681]: E1123 08:05:13.887142 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b684cbf-5d28-42cb-a454-991479bbd898" containerName="extract-content" Nov 23 08:05:13 crc kubenswrapper[4681]: I1123 08:05:13.887148 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b684cbf-5d28-42cb-a454-991479bbd898" containerName="extract-content" Nov 23 08:05:13 crc kubenswrapper[4681]: I1123 08:05:13.887571 4681 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9b684cbf-5d28-42cb-a454-991479bbd898" containerName="registry-server" Nov 23 08:05:13 crc kubenswrapper[4681]: I1123 08:05:13.889205 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l5c6z" Nov 23 08:05:13 crc kubenswrapper[4681]: I1123 08:05:13.895226 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l5c6z"] Nov 23 08:05:14 crc kubenswrapper[4681]: I1123 08:05:14.071912 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f4e9746-0811-4377-ad98-031dd7f7319f-utilities\") pod \"certified-operators-l5c6z\" (UID: \"0f4e9746-0811-4377-ad98-031dd7f7319f\") " pod="openshift-marketplace/certified-operators-l5c6z" Nov 23 08:05:14 crc kubenswrapper[4681]: I1123 08:05:14.072010 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f4e9746-0811-4377-ad98-031dd7f7319f-catalog-content\") pod \"certified-operators-l5c6z\" (UID: \"0f4e9746-0811-4377-ad98-031dd7f7319f\") " pod="openshift-marketplace/certified-operators-l5c6z" Nov 23 08:05:14 crc kubenswrapper[4681]: I1123 08:05:14.072079 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgswb\" (UniqueName: \"kubernetes.io/projected/0f4e9746-0811-4377-ad98-031dd7f7319f-kube-api-access-zgswb\") pod \"certified-operators-l5c6z\" (UID: \"0f4e9746-0811-4377-ad98-031dd7f7319f\") " pod="openshift-marketplace/certified-operators-l5c6z" Nov 23 08:05:14 crc kubenswrapper[4681]: I1123 08:05:14.173759 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f4e9746-0811-4377-ad98-031dd7f7319f-utilities\") pod \"certified-operators-l5c6z\" (UID: \"0f4e9746-0811-4377-ad98-031dd7f7319f\") " pod="openshift-marketplace/certified-operators-l5c6z" Nov 23 08:05:14 crc kubenswrapper[4681]: I1123 08:05:14.173847 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f4e9746-0811-4377-ad98-031dd7f7319f-catalog-content\") pod \"certified-operators-l5c6z\" (UID: \"0f4e9746-0811-4377-ad98-031dd7f7319f\") " pod="openshift-marketplace/certified-operators-l5c6z" Nov 23 08:05:14 crc kubenswrapper[4681]: I1123 08:05:14.173886 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgswb\" (UniqueName: \"kubernetes.io/projected/0f4e9746-0811-4377-ad98-031dd7f7319f-kube-api-access-zgswb\") pod \"certified-operators-l5c6z\" (UID: \"0f4e9746-0811-4377-ad98-031dd7f7319f\") " pod="openshift-marketplace/certified-operators-l5c6z" Nov 23 08:05:14 crc kubenswrapper[4681]: I1123 08:05:14.174512 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f4e9746-0811-4377-ad98-031dd7f7319f-utilities\") pod \"certified-operators-l5c6z\" (UID: \"0f4e9746-0811-4377-ad98-031dd7f7319f\") " pod="openshift-marketplace/certified-operators-l5c6z" Nov 23 08:05:14 crc kubenswrapper[4681]: I1123 08:05:14.174816 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f4e9746-0811-4377-ad98-031dd7f7319f-catalog-content\") pod \"certified-operators-l5c6z\" (UID: 
\"0f4e9746-0811-4377-ad98-031dd7f7319f\") " pod="openshift-marketplace/certified-operators-l5c6z" Nov 23 08:05:14 crc kubenswrapper[4681]: I1123 08:05:14.191511 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgswb\" (UniqueName: \"kubernetes.io/projected/0f4e9746-0811-4377-ad98-031dd7f7319f-kube-api-access-zgswb\") pod \"certified-operators-l5c6z\" (UID: \"0f4e9746-0811-4377-ad98-031dd7f7319f\") " pod="openshift-marketplace/certified-operators-l5c6z" Nov 23 08:05:14 crc kubenswrapper[4681]: I1123 08:05:14.204102 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l5c6z" Nov 23 08:05:14 crc kubenswrapper[4681]: I1123 08:05:14.770894 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l5c6z"] Nov 23 08:05:15 crc kubenswrapper[4681]: I1123 08:05:15.744678 4681 generic.go:334] "Generic (PLEG): container finished" podID="0f4e9746-0811-4377-ad98-031dd7f7319f" containerID="7c75a2a7b3cbd18bf4f2033f1530c2c912ac305b96750c3bb29cf985a74b5a2e" exitCode=0 Nov 23 08:05:15 crc kubenswrapper[4681]: I1123 08:05:15.744716 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l5c6z" event={"ID":"0f4e9746-0811-4377-ad98-031dd7f7319f","Type":"ContainerDied","Data":"7c75a2a7b3cbd18bf4f2033f1530c2c912ac305b96750c3bb29cf985a74b5a2e"} Nov 23 08:05:15 crc kubenswrapper[4681]: I1123 08:05:15.744740 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l5c6z" event={"ID":"0f4e9746-0811-4377-ad98-031dd7f7319f","Type":"ContainerStarted","Data":"2c77a06b8bb9715943150967fd2a7c82ba43fc619cd8565ab9f377433b7ae161"} Nov 23 08:05:16 crc kubenswrapper[4681]: I1123 08:05:16.753550 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l5c6z" event={"ID":"0f4e9746-0811-4377-ad98-031dd7f7319f","Type":"ContainerStarted","Data":"d615eeca03b5d255351a2d2d2559e60303618ba31d73c0caaf64b946032eef29"} Nov 23 08:05:17 crc kubenswrapper[4681]: I1123 08:05:17.761488 4681 generic.go:334] "Generic (PLEG): container finished" podID="0f4e9746-0811-4377-ad98-031dd7f7319f" containerID="d615eeca03b5d255351a2d2d2559e60303618ba31d73c0caaf64b946032eef29" exitCode=0 Nov 23 08:05:17 crc kubenswrapper[4681]: I1123 08:05:17.761523 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l5c6z" event={"ID":"0f4e9746-0811-4377-ad98-031dd7f7319f","Type":"ContainerDied","Data":"d615eeca03b5d255351a2d2d2559e60303618ba31d73c0caaf64b946032eef29"} Nov 23 08:05:18 crc kubenswrapper[4681]: I1123 08:05:18.683810 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ngv6j"] Nov 23 08:05:18 crc kubenswrapper[4681]: I1123 08:05:18.686016 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ngv6j" Nov 23 08:05:18 crc kubenswrapper[4681]: I1123 08:05:18.696098 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ngv6j"] Nov 23 08:05:18 crc kubenswrapper[4681]: I1123 08:05:18.746911 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rj8j\" (UniqueName: \"kubernetes.io/projected/13257e0a-efbd-479a-b131-70366b403387-kube-api-access-5rj8j\") pod \"redhat-operators-ngv6j\" (UID: \"13257e0a-efbd-479a-b131-70366b403387\") " pod="openshift-marketplace/redhat-operators-ngv6j" Nov 23 08:05:18 crc kubenswrapper[4681]: I1123 08:05:18.746961 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13257e0a-efbd-479a-b131-70366b403387-utilities\") pod \"redhat-operators-ngv6j\" (UID: \"13257e0a-efbd-479a-b131-70366b403387\") " pod="openshift-marketplace/redhat-operators-ngv6j" Nov 23 08:05:18 crc kubenswrapper[4681]: I1123 08:05:18.747248 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13257e0a-efbd-479a-b131-70366b403387-catalog-content\") pod \"redhat-operators-ngv6j\" (UID: \"13257e0a-efbd-479a-b131-70366b403387\") " pod="openshift-marketplace/redhat-operators-ngv6j" Nov 23 08:05:18 crc kubenswrapper[4681]: I1123 08:05:18.769967 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l5c6z" event={"ID":"0f4e9746-0811-4377-ad98-031dd7f7319f","Type":"ContainerStarted","Data":"1945ad906f723b940edebf755d79be2efdfffdbcace2b62f947322dc7ca4b9d0"} Nov 23 08:05:18 crc kubenswrapper[4681]: I1123 08:05:18.787895 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l5c6z" podStartSLOduration=3.305462393 podStartE2EDuration="5.787882093s" podCreationTimestamp="2025-11-23 08:05:13 +0000 UTC" firstStartedPulling="2025-11-23 08:05:15.746526325 +0000 UTC m=+4852.816035551" lastFinishedPulling="2025-11-23 08:05:18.228946013 +0000 UTC m=+4855.298455251" observedRunningTime="2025-11-23 08:05:18.783734837 +0000 UTC m=+4855.853244074" watchObservedRunningTime="2025-11-23 08:05:18.787882093 +0000 UTC m=+4855.857391330" Nov 23 08:05:18 crc kubenswrapper[4681]: I1123 08:05:18.848582 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rj8j\" (UniqueName: \"kubernetes.io/projected/13257e0a-efbd-479a-b131-70366b403387-kube-api-access-5rj8j\") pod \"redhat-operators-ngv6j\" (UID: \"13257e0a-efbd-479a-b131-70366b403387\") " pod="openshift-marketplace/redhat-operators-ngv6j" Nov 23 08:05:18 crc kubenswrapper[4681]: I1123 08:05:18.848630 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13257e0a-efbd-479a-b131-70366b403387-utilities\") pod \"redhat-operators-ngv6j\" (UID: \"13257e0a-efbd-479a-b131-70366b403387\") " pod="openshift-marketplace/redhat-operators-ngv6j" Nov 23 08:05:18 crc kubenswrapper[4681]: I1123 08:05:18.848797 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13257e0a-efbd-479a-b131-70366b403387-catalog-content\") pod \"redhat-operators-ngv6j\" (UID: \"13257e0a-efbd-479a-b131-70366b403387\") 
" pod="openshift-marketplace/redhat-operators-ngv6j" Nov 23 08:05:18 crc kubenswrapper[4681]: I1123 08:05:18.849027 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13257e0a-efbd-479a-b131-70366b403387-utilities\") pod \"redhat-operators-ngv6j\" (UID: \"13257e0a-efbd-479a-b131-70366b403387\") " pod="openshift-marketplace/redhat-operators-ngv6j" Nov 23 08:05:18 crc kubenswrapper[4681]: I1123 08:05:18.849059 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13257e0a-efbd-479a-b131-70366b403387-catalog-content\") pod \"redhat-operators-ngv6j\" (UID: \"13257e0a-efbd-479a-b131-70366b403387\") " pod="openshift-marketplace/redhat-operators-ngv6j" Nov 23 08:05:18 crc kubenswrapper[4681]: I1123 08:05:18.869521 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rj8j\" (UniqueName: \"kubernetes.io/projected/13257e0a-efbd-479a-b131-70366b403387-kube-api-access-5rj8j\") pod \"redhat-operators-ngv6j\" (UID: \"13257e0a-efbd-479a-b131-70366b403387\") " pod="openshift-marketplace/redhat-operators-ngv6j" Nov 23 08:05:19 crc kubenswrapper[4681]: I1123 08:05:19.003719 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ngv6j" Nov 23 08:05:19 crc kubenswrapper[4681]: I1123 08:05:19.521837 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ngv6j"] Nov 23 08:05:19 crc kubenswrapper[4681]: W1123 08:05:19.530578 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13257e0a_efbd_479a_b131_70366b403387.slice/crio-ea2dc29921d2726099a44df85fe282834ff580531e0062936498a36d872343b1 WatchSource:0}: Error finding container ea2dc29921d2726099a44df85fe282834ff580531e0062936498a36d872343b1: Status 404 returned error can't find the container with id ea2dc29921d2726099a44df85fe282834ff580531e0062936498a36d872343b1 Nov 23 08:05:19 crc kubenswrapper[4681]: I1123 08:05:19.778714 4681 generic.go:334] "Generic (PLEG): container finished" podID="13257e0a-efbd-479a-b131-70366b403387" containerID="5c8f0fdc7fa5a60a10049f454a5cb855259291c815d02416dcccc03559f1b861" exitCode=0 Nov 23 08:05:19 crc kubenswrapper[4681]: I1123 08:05:19.778996 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ngv6j" event={"ID":"13257e0a-efbd-479a-b131-70366b403387","Type":"ContainerDied","Data":"5c8f0fdc7fa5a60a10049f454a5cb855259291c815d02416dcccc03559f1b861"} Nov 23 08:05:19 crc kubenswrapper[4681]: I1123 08:05:19.779111 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ngv6j" event={"ID":"13257e0a-efbd-479a-b131-70366b403387","Type":"ContainerStarted","Data":"ea2dc29921d2726099a44df85fe282834ff580531e0062936498a36d872343b1"} Nov 23 08:05:20 crc kubenswrapper[4681]: I1123 08:05:20.789468 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ngv6j" event={"ID":"13257e0a-efbd-479a-b131-70366b403387","Type":"ContainerStarted","Data":"cb26fe79586e05d97563c6cf90b4faee7efc73f1b805977e9141908bc0212b70"} Nov 23 08:05:22 crc kubenswrapper[4681]: I1123 08:05:22.807300 4681 generic.go:334] "Generic (PLEG): container finished" podID="13257e0a-efbd-479a-b131-70366b403387" 
containerID="cb26fe79586e05d97563c6cf90b4faee7efc73f1b805977e9141908bc0212b70" exitCode=0 Nov 23 08:05:22 crc kubenswrapper[4681]: I1123 08:05:22.807621 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ngv6j" event={"ID":"13257e0a-efbd-479a-b131-70366b403387","Type":"ContainerDied","Data":"cb26fe79586e05d97563c6cf90b4faee7efc73f1b805977e9141908bc0212b70"} Nov 23 08:05:23 crc kubenswrapper[4681]: I1123 08:05:23.817238 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ngv6j" event={"ID":"13257e0a-efbd-479a-b131-70366b403387","Type":"ContainerStarted","Data":"2ef1ea6c22e6a33021dce3f73bfd016d3f620a7983da7bcb7009fe0f37895dc2"} Nov 23 08:05:23 crc kubenswrapper[4681]: I1123 08:05:23.838277 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ngv6j" podStartSLOduration=2.360940117 podStartE2EDuration="5.838257957s" podCreationTimestamp="2025-11-23 08:05:18 +0000 UTC" firstStartedPulling="2025-11-23 08:05:19.780663032 +0000 UTC m=+4856.850172270" lastFinishedPulling="2025-11-23 08:05:23.257980873 +0000 UTC m=+4860.327490110" observedRunningTime="2025-11-23 08:05:23.834679351 +0000 UTC m=+4860.904188589" watchObservedRunningTime="2025-11-23 08:05:23.838257957 +0000 UTC m=+4860.907767194" Nov 23 08:05:24 crc kubenswrapper[4681]: I1123 08:05:24.205357 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l5c6z" Nov 23 08:05:24 crc kubenswrapper[4681]: I1123 08:05:24.205415 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l5c6z" Nov 23 08:05:24 crc kubenswrapper[4681]: I1123 08:05:24.242876 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l5c6z" Nov 23 08:05:24 crc kubenswrapper[4681]: I1123 08:05:24.858985 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l5c6z" Nov 23 08:05:26 crc kubenswrapper[4681]: I1123 08:05:26.678688 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l5c6z"] Nov 23 08:05:26 crc kubenswrapper[4681]: I1123 08:05:26.857560 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l5c6z" podUID="0f4e9746-0811-4377-ad98-031dd7f7319f" containerName="registry-server" containerID="cri-o://1945ad906f723b940edebf755d79be2efdfffdbcace2b62f947322dc7ca4b9d0" gracePeriod=2 Nov 23 08:05:27 crc kubenswrapper[4681]: I1123 08:05:27.666176 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l5c6z" Nov 23 08:05:27 crc kubenswrapper[4681]: I1123 08:05:27.761281 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f4e9746-0811-4377-ad98-031dd7f7319f-utilities\") pod \"0f4e9746-0811-4377-ad98-031dd7f7319f\" (UID: \"0f4e9746-0811-4377-ad98-031dd7f7319f\") " Nov 23 08:05:27 crc kubenswrapper[4681]: I1123 08:05:27.761777 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgswb\" (UniqueName: \"kubernetes.io/projected/0f4e9746-0811-4377-ad98-031dd7f7319f-kube-api-access-zgswb\") pod \"0f4e9746-0811-4377-ad98-031dd7f7319f\" (UID: \"0f4e9746-0811-4377-ad98-031dd7f7319f\") " Nov 23 08:05:27 crc kubenswrapper[4681]: I1123 08:05:27.761870 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f4e9746-0811-4377-ad98-031dd7f7319f-utilities" (OuterVolumeSpecName: "utilities") pod "0f4e9746-0811-4377-ad98-031dd7f7319f" (UID: "0f4e9746-0811-4377-ad98-031dd7f7319f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:05:27 crc kubenswrapper[4681]: I1123 08:05:27.761905 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f4e9746-0811-4377-ad98-031dd7f7319f-catalog-content\") pod \"0f4e9746-0811-4377-ad98-031dd7f7319f\" (UID: \"0f4e9746-0811-4377-ad98-031dd7f7319f\") " Nov 23 08:05:27 crc kubenswrapper[4681]: I1123 08:05:27.763185 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f4e9746-0811-4377-ad98-031dd7f7319f-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:05:27 crc kubenswrapper[4681]: I1123 08:05:27.769126 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f4e9746-0811-4377-ad98-031dd7f7319f-kube-api-access-zgswb" (OuterVolumeSpecName: "kube-api-access-zgswb") pod "0f4e9746-0811-4377-ad98-031dd7f7319f" (UID: "0f4e9746-0811-4377-ad98-031dd7f7319f"). InnerVolumeSpecName "kube-api-access-zgswb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:05:27 crc kubenswrapper[4681]: I1123 08:05:27.809543 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f4e9746-0811-4377-ad98-031dd7f7319f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0f4e9746-0811-4377-ad98-031dd7f7319f" (UID: "0f4e9746-0811-4377-ad98-031dd7f7319f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:05:27 crc kubenswrapper[4681]: I1123 08:05:27.866888 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgswb\" (UniqueName: \"kubernetes.io/projected/0f4e9746-0811-4377-ad98-031dd7f7319f-kube-api-access-zgswb\") on node \"crc\" DevicePath \"\"" Nov 23 08:05:27 crc kubenswrapper[4681]: I1123 08:05:27.867256 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f4e9746-0811-4377-ad98-031dd7f7319f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:05:27 crc kubenswrapper[4681]: I1123 08:05:27.870158 4681 generic.go:334] "Generic (PLEG): container finished" podID="0f4e9746-0811-4377-ad98-031dd7f7319f" containerID="1945ad906f723b940edebf755d79be2efdfffdbcace2b62f947322dc7ca4b9d0" exitCode=0 Nov 23 08:05:27 crc kubenswrapper[4681]: I1123 08:05:27.870204 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l5c6z" event={"ID":"0f4e9746-0811-4377-ad98-031dd7f7319f","Type":"ContainerDied","Data":"1945ad906f723b940edebf755d79be2efdfffdbcace2b62f947322dc7ca4b9d0"} Nov 23 08:05:27 crc kubenswrapper[4681]: I1123 08:05:27.870236 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l5c6z" event={"ID":"0f4e9746-0811-4377-ad98-031dd7f7319f","Type":"ContainerDied","Data":"2c77a06b8bb9715943150967fd2a7c82ba43fc619cd8565ab9f377433b7ae161"} Nov 23 08:05:27 crc kubenswrapper[4681]: I1123 08:05:27.870257 4681 scope.go:117] "RemoveContainer" containerID="1945ad906f723b940edebf755d79be2efdfffdbcace2b62f947322dc7ca4b9d0" Nov 23 08:05:27 crc kubenswrapper[4681]: I1123 08:05:27.870412 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l5c6z" Nov 23 08:05:27 crc kubenswrapper[4681]: I1123 08:05:27.892865 4681 scope.go:117] "RemoveContainer" containerID="d615eeca03b5d255351a2d2d2559e60303618ba31d73c0caaf64b946032eef29" Nov 23 08:05:27 crc kubenswrapper[4681]: I1123 08:05:27.914599 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l5c6z"] Nov 23 08:05:27 crc kubenswrapper[4681]: I1123 08:05:27.921353 4681 scope.go:117] "RemoveContainer" containerID="7c75a2a7b3cbd18bf4f2033f1530c2c912ac305b96750c3bb29cf985a74b5a2e" Nov 23 08:05:27 crc kubenswrapper[4681]: I1123 08:05:27.922163 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l5c6z"] Nov 23 08:05:27 crc kubenswrapper[4681]: I1123 08:05:27.956684 4681 scope.go:117] "RemoveContainer" containerID="1945ad906f723b940edebf755d79be2efdfffdbcace2b62f947322dc7ca4b9d0" Nov 23 08:05:27 crc kubenswrapper[4681]: E1123 08:05:27.957087 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1945ad906f723b940edebf755d79be2efdfffdbcace2b62f947322dc7ca4b9d0\": container with ID starting with 1945ad906f723b940edebf755d79be2efdfffdbcace2b62f947322dc7ca4b9d0 not found: ID does not exist" containerID="1945ad906f723b940edebf755d79be2efdfffdbcace2b62f947322dc7ca4b9d0" Nov 23 08:05:27 crc kubenswrapper[4681]: I1123 08:05:27.957140 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1945ad906f723b940edebf755d79be2efdfffdbcace2b62f947322dc7ca4b9d0"} err="failed to get container status \"1945ad906f723b940edebf755d79be2efdfffdbcace2b62f947322dc7ca4b9d0\": rpc error: code = NotFound desc = could not find container \"1945ad906f723b940edebf755d79be2efdfffdbcace2b62f947322dc7ca4b9d0\": container with ID starting with 1945ad906f723b940edebf755d79be2efdfffdbcace2b62f947322dc7ca4b9d0 not found: ID does not exist" Nov 23 08:05:27 crc kubenswrapper[4681]: I1123 08:05:27.957176 4681 scope.go:117] "RemoveContainer" containerID="d615eeca03b5d255351a2d2d2559e60303618ba31d73c0caaf64b946032eef29" Nov 23 08:05:27 crc kubenswrapper[4681]: E1123 08:05:27.957548 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d615eeca03b5d255351a2d2d2559e60303618ba31d73c0caaf64b946032eef29\": container with ID starting with d615eeca03b5d255351a2d2d2559e60303618ba31d73c0caaf64b946032eef29 not found: ID does not exist" containerID="d615eeca03b5d255351a2d2d2559e60303618ba31d73c0caaf64b946032eef29" Nov 23 08:05:27 crc kubenswrapper[4681]: I1123 08:05:27.957589 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d615eeca03b5d255351a2d2d2559e60303618ba31d73c0caaf64b946032eef29"} err="failed to get container status \"d615eeca03b5d255351a2d2d2559e60303618ba31d73c0caaf64b946032eef29\": rpc error: code = NotFound desc = could not find container \"d615eeca03b5d255351a2d2d2559e60303618ba31d73c0caaf64b946032eef29\": container with ID starting with d615eeca03b5d255351a2d2d2559e60303618ba31d73c0caaf64b946032eef29 not found: ID does not exist" Nov 23 08:05:27 crc kubenswrapper[4681]: I1123 08:05:27.957616 4681 scope.go:117] "RemoveContainer" containerID="7c75a2a7b3cbd18bf4f2033f1530c2c912ac305b96750c3bb29cf985a74b5a2e" Nov 23 08:05:27 crc kubenswrapper[4681]: E1123 08:05:27.957915 4681 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"7c75a2a7b3cbd18bf4f2033f1530c2c912ac305b96750c3bb29cf985a74b5a2e\": container with ID starting with 7c75a2a7b3cbd18bf4f2033f1530c2c912ac305b96750c3bb29cf985a74b5a2e not found: ID does not exist" containerID="7c75a2a7b3cbd18bf4f2033f1530c2c912ac305b96750c3bb29cf985a74b5a2e" Nov 23 08:05:27 crc kubenswrapper[4681]: I1123 08:05:27.957955 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c75a2a7b3cbd18bf4f2033f1530c2c912ac305b96750c3bb29cf985a74b5a2e"} err="failed to get container status \"7c75a2a7b3cbd18bf4f2033f1530c2c912ac305b96750c3bb29cf985a74b5a2e\": rpc error: code = NotFound desc = could not find container \"7c75a2a7b3cbd18bf4f2033f1530c2c912ac305b96750c3bb29cf985a74b5a2e\": container with ID starting with 7c75a2a7b3cbd18bf4f2033f1530c2c912ac305b96750c3bb29cf985a74b5a2e not found: ID does not exist" Nov 23 08:05:29 crc kubenswrapper[4681]: I1123 08:05:29.004257 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ngv6j" Nov 23 08:05:29 crc kubenswrapper[4681]: I1123 08:05:29.004651 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ngv6j" Nov 23 08:05:29 crc kubenswrapper[4681]: I1123 08:05:29.261805 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f4e9746-0811-4377-ad98-031dd7f7319f" path="/var/lib/kubelet/pods/0f4e9746-0811-4377-ad98-031dd7f7319f/volumes" Nov 23 08:05:30 crc kubenswrapper[4681]: I1123 08:05:30.041779 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ngv6j" podUID="13257e0a-efbd-479a-b131-70366b403387" containerName="registry-server" probeResult="failure" output=< Nov 23 08:05:30 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Nov 23 08:05:30 crc kubenswrapper[4681]: > Nov 23 08:05:39 crc kubenswrapper[4681]: I1123 08:05:39.043532 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ngv6j" Nov 23 08:05:39 crc kubenswrapper[4681]: I1123 08:05:39.080177 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ngv6j" Nov 23 08:05:39 crc kubenswrapper[4681]: I1123 08:05:39.271729 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ngv6j"] Nov 23 08:05:40 crc kubenswrapper[4681]: I1123 08:05:40.971250 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ngv6j" podUID="13257e0a-efbd-479a-b131-70366b403387" containerName="registry-server" containerID="cri-o://2ef1ea6c22e6a33021dce3f73bfd016d3f620a7983da7bcb7009fe0f37895dc2" gracePeriod=2 Nov 23 08:05:41 crc kubenswrapper[4681]: I1123 08:05:41.412395 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ngv6j" Nov 23 08:05:41 crc kubenswrapper[4681]: I1123 08:05:41.493674 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rj8j\" (UniqueName: \"kubernetes.io/projected/13257e0a-efbd-479a-b131-70366b403387-kube-api-access-5rj8j\") pod \"13257e0a-efbd-479a-b131-70366b403387\" (UID: \"13257e0a-efbd-479a-b131-70366b403387\") " Nov 23 08:05:41 crc kubenswrapper[4681]: I1123 08:05:41.493862 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13257e0a-efbd-479a-b131-70366b403387-utilities\") pod \"13257e0a-efbd-479a-b131-70366b403387\" (UID: \"13257e0a-efbd-479a-b131-70366b403387\") " Nov 23 08:05:41 crc kubenswrapper[4681]: I1123 08:05:41.493971 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13257e0a-efbd-479a-b131-70366b403387-catalog-content\") pod \"13257e0a-efbd-479a-b131-70366b403387\" (UID: \"13257e0a-efbd-479a-b131-70366b403387\") " Nov 23 08:05:41 crc kubenswrapper[4681]: I1123 08:05:41.494611 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13257e0a-efbd-479a-b131-70366b403387-utilities" (OuterVolumeSpecName: "utilities") pod "13257e0a-efbd-479a-b131-70366b403387" (UID: "13257e0a-efbd-479a-b131-70366b403387"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:05:41 crc kubenswrapper[4681]: I1123 08:05:41.494970 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13257e0a-efbd-479a-b131-70366b403387-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:05:41 crc kubenswrapper[4681]: I1123 08:05:41.501341 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13257e0a-efbd-479a-b131-70366b403387-kube-api-access-5rj8j" (OuterVolumeSpecName: "kube-api-access-5rj8j") pod "13257e0a-efbd-479a-b131-70366b403387" (UID: "13257e0a-efbd-479a-b131-70366b403387"). InnerVolumeSpecName "kube-api-access-5rj8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:05:41 crc kubenswrapper[4681]: I1123 08:05:41.565675 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13257e0a-efbd-479a-b131-70366b403387-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "13257e0a-efbd-479a-b131-70366b403387" (UID: "13257e0a-efbd-479a-b131-70366b403387"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:05:41 crc kubenswrapper[4681]: I1123 08:05:41.596680 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rj8j\" (UniqueName: \"kubernetes.io/projected/13257e0a-efbd-479a-b131-70366b403387-kube-api-access-5rj8j\") on node \"crc\" DevicePath \"\"" Nov 23 08:05:41 crc kubenswrapper[4681]: I1123 08:05:41.596858 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13257e0a-efbd-479a-b131-70366b403387-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:05:41 crc kubenswrapper[4681]: I1123 08:05:41.979575 4681 generic.go:334] "Generic (PLEG): container finished" podID="13257e0a-efbd-479a-b131-70366b403387" containerID="2ef1ea6c22e6a33021dce3f73bfd016d3f620a7983da7bcb7009fe0f37895dc2" exitCode=0 Nov 23 08:05:41 crc kubenswrapper[4681]: I1123 08:05:41.979613 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ngv6j" event={"ID":"13257e0a-efbd-479a-b131-70366b403387","Type":"ContainerDied","Data":"2ef1ea6c22e6a33021dce3f73bfd016d3f620a7983da7bcb7009fe0f37895dc2"} Nov 23 08:05:41 crc kubenswrapper[4681]: I1123 08:05:41.979638 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ngv6j" event={"ID":"13257e0a-efbd-479a-b131-70366b403387","Type":"ContainerDied","Data":"ea2dc29921d2726099a44df85fe282834ff580531e0062936498a36d872343b1"} Nov 23 08:05:41 crc kubenswrapper[4681]: I1123 08:05:41.979637 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ngv6j" Nov 23 08:05:41 crc kubenswrapper[4681]: I1123 08:05:41.979714 4681 scope.go:117] "RemoveContainer" containerID="2ef1ea6c22e6a33021dce3f73bfd016d3f620a7983da7bcb7009fe0f37895dc2" Nov 23 08:05:42 crc kubenswrapper[4681]: I1123 08:05:42.004653 4681 scope.go:117] "RemoveContainer" containerID="cb26fe79586e05d97563c6cf90b4faee7efc73f1b805977e9141908bc0212b70" Nov 23 08:05:42 crc kubenswrapper[4681]: I1123 08:05:42.006008 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ngv6j"] Nov 23 08:05:42 crc kubenswrapper[4681]: I1123 08:05:42.013856 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ngv6j"] Nov 23 08:05:42 crc kubenswrapper[4681]: I1123 08:05:42.023234 4681 scope.go:117] "RemoveContainer" containerID="5c8f0fdc7fa5a60a10049f454a5cb855259291c815d02416dcccc03559f1b861" Nov 23 08:05:42 crc kubenswrapper[4681]: I1123 08:05:42.053064 4681 scope.go:117] "RemoveContainer" containerID="2ef1ea6c22e6a33021dce3f73bfd016d3f620a7983da7bcb7009fe0f37895dc2" Nov 23 08:05:42 crc kubenswrapper[4681]: E1123 08:05:42.053568 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ef1ea6c22e6a33021dce3f73bfd016d3f620a7983da7bcb7009fe0f37895dc2\": container with ID starting with 2ef1ea6c22e6a33021dce3f73bfd016d3f620a7983da7bcb7009fe0f37895dc2 not found: ID does not exist" containerID="2ef1ea6c22e6a33021dce3f73bfd016d3f620a7983da7bcb7009fe0f37895dc2" Nov 23 08:05:42 crc kubenswrapper[4681]: I1123 08:05:42.053612 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ef1ea6c22e6a33021dce3f73bfd016d3f620a7983da7bcb7009fe0f37895dc2"} err="failed to get container status \"2ef1ea6c22e6a33021dce3f73bfd016d3f620a7983da7bcb7009fe0f37895dc2\": 
rpc error: code = NotFound desc = could not find container \"2ef1ea6c22e6a33021dce3f73bfd016d3f620a7983da7bcb7009fe0f37895dc2\": container with ID starting with 2ef1ea6c22e6a33021dce3f73bfd016d3f620a7983da7bcb7009fe0f37895dc2 not found: ID does not exist" Nov 23 08:05:42 crc kubenswrapper[4681]: I1123 08:05:42.053636 4681 scope.go:117] "RemoveContainer" containerID="cb26fe79586e05d97563c6cf90b4faee7efc73f1b805977e9141908bc0212b70" Nov 23 08:05:42 crc kubenswrapper[4681]: E1123 08:05:42.054016 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb26fe79586e05d97563c6cf90b4faee7efc73f1b805977e9141908bc0212b70\": container with ID starting with cb26fe79586e05d97563c6cf90b4faee7efc73f1b805977e9141908bc0212b70 not found: ID does not exist" containerID="cb26fe79586e05d97563c6cf90b4faee7efc73f1b805977e9141908bc0212b70" Nov 23 08:05:42 crc kubenswrapper[4681]: I1123 08:05:42.054052 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb26fe79586e05d97563c6cf90b4faee7efc73f1b805977e9141908bc0212b70"} err="failed to get container status \"cb26fe79586e05d97563c6cf90b4faee7efc73f1b805977e9141908bc0212b70\": rpc error: code = NotFound desc = could not find container \"cb26fe79586e05d97563c6cf90b4faee7efc73f1b805977e9141908bc0212b70\": container with ID starting with cb26fe79586e05d97563c6cf90b4faee7efc73f1b805977e9141908bc0212b70 not found: ID does not exist" Nov 23 08:05:42 crc kubenswrapper[4681]: I1123 08:05:42.054076 4681 scope.go:117] "RemoveContainer" containerID="5c8f0fdc7fa5a60a10049f454a5cb855259291c815d02416dcccc03559f1b861" Nov 23 08:05:42 crc kubenswrapper[4681]: E1123 08:05:42.054422 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c8f0fdc7fa5a60a10049f454a5cb855259291c815d02416dcccc03559f1b861\": container with ID starting with 5c8f0fdc7fa5a60a10049f454a5cb855259291c815d02416dcccc03559f1b861 not found: ID does not exist" containerID="5c8f0fdc7fa5a60a10049f454a5cb855259291c815d02416dcccc03559f1b861" Nov 23 08:05:42 crc kubenswrapper[4681]: I1123 08:05:42.054445 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c8f0fdc7fa5a60a10049f454a5cb855259291c815d02416dcccc03559f1b861"} err="failed to get container status \"5c8f0fdc7fa5a60a10049f454a5cb855259291c815d02416dcccc03559f1b861\": rpc error: code = NotFound desc = could not find container \"5c8f0fdc7fa5a60a10049f454a5cb855259291c815d02416dcccc03559f1b861\": container with ID starting with 5c8f0fdc7fa5a60a10049f454a5cb855259291c815d02416dcccc03559f1b861 not found: ID does not exist" Nov 23 08:05:43 crc kubenswrapper[4681]: I1123 08:05:43.260308 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13257e0a-efbd-479a-b131-70366b403387" path="/var/lib/kubelet/pods/13257e0a-efbd-479a-b131-70366b403387/volumes" Nov 23 08:06:12 crc kubenswrapper[4681]: I1123 08:06:12.295406 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:06:12 crc kubenswrapper[4681]: I1123 08:06:12.295786 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" 
podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:06:42 crc kubenswrapper[4681]: I1123 08:06:42.296215 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:06:42 crc kubenswrapper[4681]: I1123 08:06:42.297097 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:06:58 crc kubenswrapper[4681]: I1123 08:06:58.120263 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sktw2"] Nov 23 08:06:58 crc kubenswrapper[4681]: E1123 08:06:58.121137 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f4e9746-0811-4377-ad98-031dd7f7319f" containerName="extract-utilities" Nov 23 08:06:58 crc kubenswrapper[4681]: I1123 08:06:58.121151 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f4e9746-0811-4377-ad98-031dd7f7319f" containerName="extract-utilities" Nov 23 08:06:58 crc kubenswrapper[4681]: E1123 08:06:58.121160 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13257e0a-efbd-479a-b131-70366b403387" containerName="registry-server" Nov 23 08:06:58 crc kubenswrapper[4681]: I1123 08:06:58.121165 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="13257e0a-efbd-479a-b131-70366b403387" containerName="registry-server" Nov 23 08:06:58 crc kubenswrapper[4681]: E1123 08:06:58.121200 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13257e0a-efbd-479a-b131-70366b403387" containerName="extract-content" Nov 23 08:06:58 crc kubenswrapper[4681]: I1123 08:06:58.121205 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="13257e0a-efbd-479a-b131-70366b403387" containerName="extract-content" Nov 23 08:06:58 crc kubenswrapper[4681]: E1123 08:06:58.121215 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f4e9746-0811-4377-ad98-031dd7f7319f" containerName="registry-server" Nov 23 08:06:58 crc kubenswrapper[4681]: I1123 08:06:58.121220 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f4e9746-0811-4377-ad98-031dd7f7319f" containerName="registry-server" Nov 23 08:06:58 crc kubenswrapper[4681]: E1123 08:06:58.121229 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f4e9746-0811-4377-ad98-031dd7f7319f" containerName="extract-content" Nov 23 08:06:58 crc kubenswrapper[4681]: I1123 08:06:58.121234 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f4e9746-0811-4377-ad98-031dd7f7319f" containerName="extract-content" Nov 23 08:06:58 crc kubenswrapper[4681]: E1123 08:06:58.121251 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13257e0a-efbd-479a-b131-70366b403387" containerName="extract-utilities" Nov 23 08:06:58 crc kubenswrapper[4681]: I1123 08:06:58.121256 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="13257e0a-efbd-479a-b131-70366b403387" containerName="extract-utilities" Nov 23 08:06:58 crc kubenswrapper[4681]: 
I1123 08:06:58.121652 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f4e9746-0811-4377-ad98-031dd7f7319f" containerName="registry-server" Nov 23 08:06:58 crc kubenswrapper[4681]: I1123 08:06:58.121723 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="13257e0a-efbd-479a-b131-70366b403387" containerName="registry-server" Nov 23 08:06:58 crc kubenswrapper[4681]: I1123 08:06:58.124074 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sktw2" Nov 23 08:06:58 crc kubenswrapper[4681]: I1123 08:06:58.140106 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sktw2"] Nov 23 08:06:58 crc kubenswrapper[4681]: I1123 08:06:58.146730 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31d1ce15-700f-47fb-b2c3-674874e8c615-catalog-content\") pod \"community-operators-sktw2\" (UID: \"31d1ce15-700f-47fb-b2c3-674874e8c615\") " pod="openshift-marketplace/community-operators-sktw2" Nov 23 08:06:58 crc kubenswrapper[4681]: I1123 08:06:58.146788 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82b4k\" (UniqueName: \"kubernetes.io/projected/31d1ce15-700f-47fb-b2c3-674874e8c615-kube-api-access-82b4k\") pod \"community-operators-sktw2\" (UID: \"31d1ce15-700f-47fb-b2c3-674874e8c615\") " pod="openshift-marketplace/community-operators-sktw2" Nov 23 08:06:58 crc kubenswrapper[4681]: I1123 08:06:58.146845 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31d1ce15-700f-47fb-b2c3-674874e8c615-utilities\") pod \"community-operators-sktw2\" (UID: \"31d1ce15-700f-47fb-b2c3-674874e8c615\") " pod="openshift-marketplace/community-operators-sktw2" Nov 23 08:06:58 crc kubenswrapper[4681]: I1123 08:06:58.249002 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31d1ce15-700f-47fb-b2c3-674874e8c615-catalog-content\") pod \"community-operators-sktw2\" (UID: \"31d1ce15-700f-47fb-b2c3-674874e8c615\") " pod="openshift-marketplace/community-operators-sktw2" Nov 23 08:06:58 crc kubenswrapper[4681]: I1123 08:06:58.249061 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82b4k\" (UniqueName: \"kubernetes.io/projected/31d1ce15-700f-47fb-b2c3-674874e8c615-kube-api-access-82b4k\") pod \"community-operators-sktw2\" (UID: \"31d1ce15-700f-47fb-b2c3-674874e8c615\") " pod="openshift-marketplace/community-operators-sktw2" Nov 23 08:06:58 crc kubenswrapper[4681]: I1123 08:06:58.249085 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31d1ce15-700f-47fb-b2c3-674874e8c615-utilities\") pod \"community-operators-sktw2\" (UID: \"31d1ce15-700f-47fb-b2c3-674874e8c615\") " pod="openshift-marketplace/community-operators-sktw2" Nov 23 08:06:58 crc kubenswrapper[4681]: I1123 08:06:58.249562 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31d1ce15-700f-47fb-b2c3-674874e8c615-utilities\") pod \"community-operators-sktw2\" (UID: \"31d1ce15-700f-47fb-b2c3-674874e8c615\") " pod="openshift-marketplace/community-operators-sktw2" Nov 23 
Nov 23 08:06:58 crc kubenswrapper[4681]: I1123 08:06:58.249676 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31d1ce15-700f-47fb-b2c3-674874e8c615-catalog-content\") pod \"community-operators-sktw2\" (UID: \"31d1ce15-700f-47fb-b2c3-674874e8c615\") " pod="openshift-marketplace/community-operators-sktw2"
Nov 23 08:06:58 crc kubenswrapper[4681]: I1123 08:06:58.278513 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82b4k\" (UniqueName: \"kubernetes.io/projected/31d1ce15-700f-47fb-b2c3-674874e8c615-kube-api-access-82b4k\") pod \"community-operators-sktw2\" (UID: \"31d1ce15-700f-47fb-b2c3-674874e8c615\") " pod="openshift-marketplace/community-operators-sktw2"
Nov 23 08:06:58 crc kubenswrapper[4681]: I1123 08:06:58.438825 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sktw2"
Nov 23 08:06:58 crc kubenswrapper[4681]: I1123 08:06:58.951113 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sktw2"]
Nov 23 08:06:59 crc kubenswrapper[4681]: I1123 08:06:59.559404 4681 generic.go:334] "Generic (PLEG): container finished" podID="31d1ce15-700f-47fb-b2c3-674874e8c615" containerID="59045017b8b065af63b1bff9e2d499d560ea6bd73b291bb550754be40d533a82" exitCode=0
Nov 23 08:06:59 crc kubenswrapper[4681]: I1123 08:06:59.560106 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sktw2" event={"ID":"31d1ce15-700f-47fb-b2c3-674874e8c615","Type":"ContainerDied","Data":"59045017b8b065af63b1bff9e2d499d560ea6bd73b291bb550754be40d533a82"}
Nov 23 08:06:59 crc kubenswrapper[4681]: I1123 08:06:59.560145 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sktw2" event={"ID":"31d1ce15-700f-47fb-b2c3-674874e8c615","Type":"ContainerStarted","Data":"dc9508e66fceeeea54533aa179b0d4274f0ff572b89844a155ec7fdcde878016"}
Nov 23 08:07:01 crc kubenswrapper[4681]: I1123 08:07:01.580773 4681 generic.go:334] "Generic (PLEG): container finished" podID="31d1ce15-700f-47fb-b2c3-674874e8c615" containerID="a9a5da3fb4b0e8d84f49db60bed8007f22781adc6ed50c68e1ba38a856422b40" exitCode=0
Nov 23 08:07:01 crc kubenswrapper[4681]: I1123 08:07:01.580816 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sktw2" event={"ID":"31d1ce15-700f-47fb-b2c3-674874e8c615","Type":"ContainerDied","Data":"a9a5da3fb4b0e8d84f49db60bed8007f22781adc6ed50c68e1ba38a856422b40"}
Nov 23 08:07:02 crc kubenswrapper[4681]: I1123 08:07:02.592076 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sktw2" event={"ID":"31d1ce15-700f-47fb-b2c3-674874e8c615","Type":"ContainerStarted","Data":"26f76a55cd76ea7e07171ddb362b8cbf4e8faaa288ab321e933c29f8013005e2"}
Nov 23 08:07:02 crc kubenswrapper[4681]: I1123 08:07:02.617375 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sktw2" podStartSLOduration=2.059103326 podStartE2EDuration="4.617353996s" podCreationTimestamp="2025-11-23 08:06:58 +0000 UTC" firstStartedPulling="2025-11-23 08:06:59.562476419 +0000 UTC m=+4956.631985657" lastFinishedPulling="2025-11-23 08:07:02.12072709 +0000 UTC m=+4959.190236327" observedRunningTime="2025-11-23 08:07:02.609516614 +0000 UTC m=+4959.679025850" watchObservedRunningTime="2025-11-23 08:07:02.617353996 +0000 UTC m=+4959.686863234"
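[annotation] The latency-tracker numbers above are self-consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling - firstStartedPulling). A quick check of the arithmetic (my own code, not kubelet's):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		// Layout matching the timestamps printed in the log.
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-11-23 08:06:58 +0000 UTC")
	firstPull := parse("2025-11-23 08:06:59.562476419 +0000 UTC")
	lastPull := parse("2025-11-23 08:07:02.12072709 +0000 UTC")
	observed := parse("2025-11-23 08:07:02.617353996 +0000 UTC")

	e2e := observed.Sub(created)         // 4.617353996s, as logged
	slo := e2e - lastPull.Sub(firstPull) // ~2.059103326s, as logged
	fmt.Println(e2e, slo)
}
```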
Nov 23 08:07:08 crc kubenswrapper[4681]: I1123 08:07:08.439958 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sktw2"
Nov 23 08:07:08 crc kubenswrapper[4681]: I1123 08:07:08.440683 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sktw2"
Nov 23 08:07:08 crc kubenswrapper[4681]: I1123 08:07:08.487014 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sktw2"
Nov 23 08:07:08 crc kubenswrapper[4681]: I1123 08:07:08.703418 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sktw2"
Nov 23 08:07:08 crc kubenswrapper[4681]: I1123 08:07:08.753345 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sktw2"]
Nov 23 08:07:10 crc kubenswrapper[4681]: I1123 08:07:10.683629 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sktw2" podUID="31d1ce15-700f-47fb-b2c3-674874e8c615" containerName="registry-server" containerID="cri-o://26f76a55cd76ea7e07171ddb362b8cbf4e8faaa288ab321e933c29f8013005e2" gracePeriod=2
Nov 23 08:07:11 crc kubenswrapper[4681]: I1123 08:07:11.177612 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sktw2"
Nov 23 08:07:11 crc kubenswrapper[4681]: I1123 08:07:11.232221 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82b4k\" (UniqueName: \"kubernetes.io/projected/31d1ce15-700f-47fb-b2c3-674874e8c615-kube-api-access-82b4k\") pod \"31d1ce15-700f-47fb-b2c3-674874e8c615\" (UID: \"31d1ce15-700f-47fb-b2c3-674874e8c615\") "
Nov 23 08:07:11 crc kubenswrapper[4681]: I1123 08:07:11.232558 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31d1ce15-700f-47fb-b2c3-674874e8c615-utilities\") pod \"31d1ce15-700f-47fb-b2c3-674874e8c615\" (UID: \"31d1ce15-700f-47fb-b2c3-674874e8c615\") "
Nov 23 08:07:11 crc kubenswrapper[4681]: I1123 08:07:11.232710 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31d1ce15-700f-47fb-b2c3-674874e8c615-catalog-content\") pod \"31d1ce15-700f-47fb-b2c3-674874e8c615\" (UID: \"31d1ce15-700f-47fb-b2c3-674874e8c615\") "
Nov 23 08:07:11 crc kubenswrapper[4681]: I1123 08:07:11.233881 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31d1ce15-700f-47fb-b2c3-674874e8c615-utilities" (OuterVolumeSpecName: "utilities") pod "31d1ce15-700f-47fb-b2c3-674874e8c615" (UID: "31d1ce15-700f-47fb-b2c3-674874e8c615"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 08:07:11 crc kubenswrapper[4681]: I1123 08:07:11.246799 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d1ce15-700f-47fb-b2c3-674874e8c615-kube-api-access-82b4k" (OuterVolumeSpecName: "kube-api-access-82b4k") pod "31d1ce15-700f-47fb-b2c3-674874e8c615" (UID: "31d1ce15-700f-47fb-b2c3-674874e8c615"). InnerVolumeSpecName "kube-api-access-82b4k". PluginName "kubernetes.io/projected", VolumeGidValue ""
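[annotation] "Killing container with a grace period ... gracePeriod=2" above corresponds to the usual stop sequence: SIGTERM, wait up to the grace period, then SIGKILL. A minimal sketch of that pattern for an arbitrary process (a hypothetical helper, not CRI-O code):

```go
package main

import (
	"os"
	"syscall"
	"time"
)

// stopWithGrace asks proc to exit and force-kills it if it is still
// running once grace has elapsed. Error handling trimmed for brevity.
func stopWithGrace(proc *os.Process, grace time.Duration) {
	_ = proc.Signal(syscall.SIGTERM)
	done := make(chan struct{})
	go func() { _, _ = proc.Wait(); close(done) }()
	select {
	case <-done: // exited within the grace period
	case <-time.After(grace):
		_ = proc.Kill() // SIGKILL after the grace period expires
	}
}

func main() {}
```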
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:07:11 crc kubenswrapper[4681]: I1123 08:07:11.275321 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31d1ce15-700f-47fb-b2c3-674874e8c615-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31d1ce15-700f-47fb-b2c3-674874e8c615" (UID: "31d1ce15-700f-47fb-b2c3-674874e8c615"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:07:11 crc kubenswrapper[4681]: I1123 08:07:11.335759 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31d1ce15-700f-47fb-b2c3-674874e8c615-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:07:11 crc kubenswrapper[4681]: I1123 08:07:11.335931 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31d1ce15-700f-47fb-b2c3-674874e8c615-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:07:11 crc kubenswrapper[4681]: I1123 08:07:11.336020 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82b4k\" (UniqueName: \"kubernetes.io/projected/31d1ce15-700f-47fb-b2c3-674874e8c615-kube-api-access-82b4k\") on node \"crc\" DevicePath \"\"" Nov 23 08:07:11 crc kubenswrapper[4681]: I1123 08:07:11.698290 4681 generic.go:334] "Generic (PLEG): container finished" podID="31d1ce15-700f-47fb-b2c3-674874e8c615" containerID="26f76a55cd76ea7e07171ddb362b8cbf4e8faaa288ab321e933c29f8013005e2" exitCode=0 Nov 23 08:07:11 crc kubenswrapper[4681]: I1123 08:07:11.698339 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sktw2" event={"ID":"31d1ce15-700f-47fb-b2c3-674874e8c615","Type":"ContainerDied","Data":"26f76a55cd76ea7e07171ddb362b8cbf4e8faaa288ab321e933c29f8013005e2"} Nov 23 08:07:11 crc kubenswrapper[4681]: I1123 08:07:11.698390 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sktw2" Nov 23 08:07:11 crc kubenswrapper[4681]: I1123 08:07:11.698411 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sktw2" event={"ID":"31d1ce15-700f-47fb-b2c3-674874e8c615","Type":"ContainerDied","Data":"dc9508e66fceeeea54533aa179b0d4274f0ff572b89844a155ec7fdcde878016"} Nov 23 08:07:11 crc kubenswrapper[4681]: I1123 08:07:11.698438 4681 scope.go:117] "RemoveContainer" containerID="26f76a55cd76ea7e07171ddb362b8cbf4e8faaa288ab321e933c29f8013005e2" Nov 23 08:07:11 crc kubenswrapper[4681]: I1123 08:07:11.726653 4681 scope.go:117] "RemoveContainer" containerID="a9a5da3fb4b0e8d84f49db60bed8007f22781adc6ed50c68e1ba38a856422b40" Nov 23 08:07:11 crc kubenswrapper[4681]: I1123 08:07:11.764793 4681 scope.go:117] "RemoveContainer" containerID="59045017b8b065af63b1bff9e2d499d560ea6bd73b291bb550754be40d533a82" Nov 23 08:07:11 crc kubenswrapper[4681]: I1123 08:07:11.764958 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sktw2"] Nov 23 08:07:11 crc kubenswrapper[4681]: I1123 08:07:11.776235 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sktw2"] Nov 23 08:07:11 crc kubenswrapper[4681]: I1123 08:07:11.796707 4681 scope.go:117] "RemoveContainer" containerID="26f76a55cd76ea7e07171ddb362b8cbf4e8faaa288ab321e933c29f8013005e2" Nov 23 08:07:11 crc kubenswrapper[4681]: E1123 08:07:11.797169 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26f76a55cd76ea7e07171ddb362b8cbf4e8faaa288ab321e933c29f8013005e2\": container with ID starting with 26f76a55cd76ea7e07171ddb362b8cbf4e8faaa288ab321e933c29f8013005e2 not found: ID does not exist" containerID="26f76a55cd76ea7e07171ddb362b8cbf4e8faaa288ab321e933c29f8013005e2" Nov 23 08:07:11 crc kubenswrapper[4681]: I1123 08:07:11.797211 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26f76a55cd76ea7e07171ddb362b8cbf4e8faaa288ab321e933c29f8013005e2"} err="failed to get container status \"26f76a55cd76ea7e07171ddb362b8cbf4e8faaa288ab321e933c29f8013005e2\": rpc error: code = NotFound desc = could not find container \"26f76a55cd76ea7e07171ddb362b8cbf4e8faaa288ab321e933c29f8013005e2\": container with ID starting with 26f76a55cd76ea7e07171ddb362b8cbf4e8faaa288ab321e933c29f8013005e2 not found: ID does not exist" Nov 23 08:07:11 crc kubenswrapper[4681]: I1123 08:07:11.797238 4681 scope.go:117] "RemoveContainer" containerID="a9a5da3fb4b0e8d84f49db60bed8007f22781adc6ed50c68e1ba38a856422b40" Nov 23 08:07:11 crc kubenswrapper[4681]: E1123 08:07:11.797549 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9a5da3fb4b0e8d84f49db60bed8007f22781adc6ed50c68e1ba38a856422b40\": container with ID starting with a9a5da3fb4b0e8d84f49db60bed8007f22781adc6ed50c68e1ba38a856422b40 not found: ID does not exist" containerID="a9a5da3fb4b0e8d84f49db60bed8007f22781adc6ed50c68e1ba38a856422b40" Nov 23 08:07:11 crc kubenswrapper[4681]: I1123 08:07:11.797605 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9a5da3fb4b0e8d84f49db60bed8007f22781adc6ed50c68e1ba38a856422b40"} err="failed to get container status \"a9a5da3fb4b0e8d84f49db60bed8007f22781adc6ed50c68e1ba38a856422b40\": rpc error: code = NotFound desc = could not find 
container \"a9a5da3fb4b0e8d84f49db60bed8007f22781adc6ed50c68e1ba38a856422b40\": container with ID starting with a9a5da3fb4b0e8d84f49db60bed8007f22781adc6ed50c68e1ba38a856422b40 not found: ID does not exist" Nov 23 08:07:11 crc kubenswrapper[4681]: I1123 08:07:11.797628 4681 scope.go:117] "RemoveContainer" containerID="59045017b8b065af63b1bff9e2d499d560ea6bd73b291bb550754be40d533a82" Nov 23 08:07:11 crc kubenswrapper[4681]: E1123 08:07:11.797953 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59045017b8b065af63b1bff9e2d499d560ea6bd73b291bb550754be40d533a82\": container with ID starting with 59045017b8b065af63b1bff9e2d499d560ea6bd73b291bb550754be40d533a82 not found: ID does not exist" containerID="59045017b8b065af63b1bff9e2d499d560ea6bd73b291bb550754be40d533a82" Nov 23 08:07:11 crc kubenswrapper[4681]: I1123 08:07:11.797975 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59045017b8b065af63b1bff9e2d499d560ea6bd73b291bb550754be40d533a82"} err="failed to get container status \"59045017b8b065af63b1bff9e2d499d560ea6bd73b291bb550754be40d533a82\": rpc error: code = NotFound desc = could not find container \"59045017b8b065af63b1bff9e2d499d560ea6bd73b291bb550754be40d533a82\": container with ID starting with 59045017b8b065af63b1bff9e2d499d560ea6bd73b291bb550754be40d533a82 not found: ID does not exist" Nov 23 08:07:12 crc kubenswrapper[4681]: I1123 08:07:12.295528 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:07:12 crc kubenswrapper[4681]: I1123 08:07:12.295581 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:07:12 crc kubenswrapper[4681]: I1123 08:07:12.295618 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" Nov 23 08:07:12 crc kubenswrapper[4681]: I1123 08:07:12.296033 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6d7563356ec35cc7f255fa32e1554c261814b2cb897becc82645050ca40aae2f"} pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 08:07:12 crc kubenswrapper[4681]: I1123 08:07:12.296087 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" containerID="cri-o://6d7563356ec35cc7f255fa32e1554c261814b2cb897becc82645050ca40aae2f" gracePeriod=600 Nov 23 08:07:12 crc kubenswrapper[4681]: E1123 08:07:12.417619 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Nov 23 08:07:12 crc kubenswrapper[4681]: I1123 08:07:12.711573 4681 generic.go:334] "Generic (PLEG): container finished" podID="539dc58c-e752-43c8-bdef-af87528b76f3" containerID="6d7563356ec35cc7f255fa32e1554c261814b2cb897becc82645050ca40aae2f" exitCode=0
Nov 23 08:07:12 crc kubenswrapper[4681]: I1123 08:07:12.711618 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerDied","Data":"6d7563356ec35cc7f255fa32e1554c261814b2cb897becc82645050ca40aae2f"}
Nov 23 08:07:12 crc kubenswrapper[4681]: I1123 08:07:12.711659 4681 scope.go:117] "RemoveContainer" containerID="a868803893c24a99ca133b07873d005f27c84ed164c57f36e111486533a2a1a7"
Nov 23 08:07:12 crc kubenswrapper[4681]: I1123 08:07:12.712852 4681 scope.go:117] "RemoveContainer" containerID="6d7563356ec35cc7f255fa32e1554c261814b2cb897becc82645050ca40aae2f"
Nov 23 08:07:12 crc kubenswrapper[4681]: E1123 08:07:12.713695 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3"
Nov 23 08:07:13 crc kubenswrapper[4681]: I1123 08:07:13.264980 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d1ce15-700f-47fb-b2c3-674874e8c615" path="/var/lib/kubelet/pods/31d1ce15-700f-47fb-b2c3-674874e8c615/volumes"
Nov 23 08:07:27 crc kubenswrapper[4681]: I1123 08:07:27.252548 4681 scope.go:117] "RemoveContainer" containerID="6d7563356ec35cc7f255fa32e1554c261814b2cb897becc82645050ca40aae2f"
Nov 23 08:07:27 crc kubenswrapper[4681]: E1123 08:07:27.253725 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3"
Nov 23 08:07:41 crc kubenswrapper[4681]: I1123 08:07:41.252259 4681 scope.go:117] "RemoveContainer" containerID="6d7563356ec35cc7f255fa32e1554c261814b2cb897becc82645050ca40aae2f"
Nov 23 08:07:41 crc kubenswrapper[4681]: E1123 08:07:41.253100 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3"
Nov 23 08:07:53 crc kubenswrapper[4681]: I1123 08:07:53.256352 4681 scope.go:117] "RemoveContainer" containerID="6d7563356ec35cc7f255fa32e1554c261814b2cb897becc82645050ca40aae2f"
Nov 23 08:07:53 crc kubenswrapper[4681]: E1123 08:07:53.257928 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3"
Nov 23 08:08:06 crc kubenswrapper[4681]: I1123 08:08:06.251756 4681 scope.go:117] "RemoveContainer" containerID="6d7563356ec35cc7f255fa32e1554c261814b2cb897becc82645050ca40aae2f"
Nov 23 08:08:06 crc kubenswrapper[4681]: E1123 08:08:06.252537 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3"
Nov 23 08:08:21 crc kubenswrapper[4681]: I1123 08:08:21.252403 4681 scope.go:117] "RemoveContainer" containerID="6d7563356ec35cc7f255fa32e1554c261814b2cb897becc82645050ca40aae2f"
Nov 23 08:08:21 crc kubenswrapper[4681]: E1123 08:08:21.253755 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3"
Nov 23 08:08:32 crc kubenswrapper[4681]: I1123 08:08:32.252305 4681 scope.go:117] "RemoveContainer" containerID="6d7563356ec35cc7f255fa32e1554c261814b2cb897becc82645050ca40aae2f"
Nov 23 08:08:32 crc kubenswrapper[4681]: E1123 08:08:32.253088 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3"
Nov 23 08:08:46 crc kubenswrapper[4681]: I1123 08:08:46.252170 4681 scope.go:117] "RemoveContainer" containerID="6d7563356ec35cc7f255fa32e1554c261814b2cb897becc82645050ca40aae2f"
Nov 23 08:08:46 crc kubenswrapper[4681]: E1123 08:08:46.253128 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3"
Nov 23 08:09:00 crc kubenswrapper[4681]: I1123 08:09:00.252439 4681 scope.go:117] "RemoveContainer" containerID="6d7563356ec35cc7f255fa32e1554c261814b2cb897becc82645050ca40aae2f"
Nov 23 08:09:00 crc kubenswrapper[4681]: E1123 08:09:00.253503 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3"
Nov 23 08:09:15 crc kubenswrapper[4681]: I1123 08:09:15.251537 4681 scope.go:117] "RemoveContainer" containerID="6d7563356ec35cc7f255fa32e1554c261814b2cb897becc82645050ca40aae2f"
Nov 23 08:09:15 crc kubenswrapper[4681]: E1123 08:09:15.252348 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3"
Nov 23 08:09:27 crc kubenswrapper[4681]: I1123 08:09:27.252325 4681 scope.go:117] "RemoveContainer" containerID="6d7563356ec35cc7f255fa32e1554c261814b2cb897becc82645050ca40aae2f"
Nov 23 08:09:27 crc kubenswrapper[4681]: E1123 08:09:27.253259 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3"
Nov 23 08:09:42 crc kubenswrapper[4681]: I1123 08:09:42.251611 4681 scope.go:117] "RemoveContainer" containerID="6d7563356ec35cc7f255fa32e1554c261814b2cb897becc82645050ca40aae2f"
Nov 23 08:09:42 crc kubenswrapper[4681]: E1123 08:09:42.252182 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3"
Nov 23 08:09:55 crc kubenswrapper[4681]: I1123 08:09:55.253028 4681 scope.go:117] "RemoveContainer" containerID="6d7563356ec35cc7f255fa32e1554c261814b2cb897becc82645050ca40aae2f"
Nov 23 08:09:55 crc kubenswrapper[4681]: E1123 08:09:55.254111 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3"
Nov 23 08:10:06 crc kubenswrapper[4681]: I1123 08:10:06.253228 4681 scope.go:117] "RemoveContainer" containerID="6d7563356ec35cc7f255fa32e1554c261814b2cb897becc82645050ca40aae2f"
Nov 23 08:10:06 crc kubenswrapper[4681]: E1123 08:10:06.254398 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3"
Nov 23 08:10:20 crc kubenswrapper[4681]: I1123 08:10:20.252077 4681 scope.go:117] "RemoveContainer" containerID="6d7563356ec35cc7f255fa32e1554c261814b2cb897becc82645050ca40aae2f"
Nov 23 08:10:20 crc kubenswrapper[4681]: E1123 08:10:20.252872 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3"
Nov 23 08:10:32 crc kubenswrapper[4681]: I1123 08:10:32.252327 4681 scope.go:117] "RemoveContainer" containerID="6d7563356ec35cc7f255fa32e1554c261814b2cb897becc82645050ca40aae2f"
Nov 23 08:10:32 crc kubenswrapper[4681]: E1123 08:10:32.253131 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3"
Nov 23 08:10:47 crc kubenswrapper[4681]: I1123 08:10:47.252319 4681 scope.go:117] "RemoveContainer" containerID="6d7563356ec35cc7f255fa32e1554c261814b2cb897becc82645050ca40aae2f"
Nov 23 08:10:47 crc kubenswrapper[4681]: E1123 08:10:47.255805 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3"
Nov 23 08:10:58 crc kubenswrapper[4681]: I1123 08:10:58.252214 4681 scope.go:117] "RemoveContainer" containerID="6d7563356ec35cc7f255fa32e1554c261814b2cb897becc82645050ca40aae2f"
Nov 23 08:10:58 crc kubenswrapper[4681]: E1123 08:10:58.253148 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3"
Nov 23 08:11:11 crc kubenswrapper[4681]: I1123 08:11:11.251864 4681 scope.go:117] "RemoveContainer" containerID="6d7563356ec35cc7f255fa32e1554c261814b2cb897becc82645050ca40aae2f"
Nov 23 08:11:11 crc kubenswrapper[4681]: E1123 08:11:11.253087 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3"
Nov 23 08:11:23 crc kubenswrapper[4681]: I1123 08:11:23.256948 4681 scope.go:117] "RemoveContainer" containerID="6d7563356ec35cc7f255fa32e1554c261814b2cb897becc82645050ca40aae2f"
containerID="6d7563356ec35cc7f255fa32e1554c261814b2cb897becc82645050ca40aae2f" Nov 23 08:11:23 crc kubenswrapper[4681]: E1123 08:11:23.257562 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:11:34 crc kubenswrapper[4681]: I1123 08:11:34.252281 4681 scope.go:117] "RemoveContainer" containerID="6d7563356ec35cc7f255fa32e1554c261814b2cb897becc82645050ca40aae2f" Nov 23 08:11:34 crc kubenswrapper[4681]: E1123 08:11:34.252999 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:11:47 crc kubenswrapper[4681]: I1123 08:11:47.251999 4681 scope.go:117] "RemoveContainer" containerID="6d7563356ec35cc7f255fa32e1554c261814b2cb897becc82645050ca40aae2f" Nov 23 08:11:47 crc kubenswrapper[4681]: E1123 08:11:47.253097 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:12:01 crc kubenswrapper[4681]: I1123 08:12:01.252081 4681 scope.go:117] "RemoveContainer" containerID="6d7563356ec35cc7f255fa32e1554c261814b2cb897becc82645050ca40aae2f" Nov 23 08:12:01 crc kubenswrapper[4681]: E1123 08:12:01.253571 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:12:14 crc kubenswrapper[4681]: I1123 08:12:14.251621 4681 scope.go:117] "RemoveContainer" containerID="6d7563356ec35cc7f255fa32e1554c261814b2cb897becc82645050ca40aae2f" Nov 23 08:12:15 crc kubenswrapper[4681]: I1123 08:12:15.240629 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerStarted","Data":"f3b67049999c07ad50acb700f89dfe77789502e8b62e4fa6dd0204b918283a04"} Nov 23 08:13:54 crc kubenswrapper[4681]: I1123 08:13:54.841302 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7j5lg"] Nov 23 08:13:54 crc kubenswrapper[4681]: E1123 08:13:54.842215 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31d1ce15-700f-47fb-b2c3-674874e8c615" containerName="registry-server" Nov 23 08:13:54 crc kubenswrapper[4681]: I1123 08:13:54.842228 4681 
state_mem.go:107] "Deleted CPUSet assignment" podUID="31d1ce15-700f-47fb-b2c3-674874e8c615" containerName="registry-server" Nov 23 08:13:54 crc kubenswrapper[4681]: E1123 08:13:54.842269 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31d1ce15-700f-47fb-b2c3-674874e8c615" containerName="extract-content" Nov 23 08:13:54 crc kubenswrapper[4681]: I1123 08:13:54.842275 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="31d1ce15-700f-47fb-b2c3-674874e8c615" containerName="extract-content" Nov 23 08:13:54 crc kubenswrapper[4681]: E1123 08:13:54.842289 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31d1ce15-700f-47fb-b2c3-674874e8c615" containerName="extract-utilities" Nov 23 08:13:54 crc kubenswrapper[4681]: I1123 08:13:54.842294 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="31d1ce15-700f-47fb-b2c3-674874e8c615" containerName="extract-utilities" Nov 23 08:13:54 crc kubenswrapper[4681]: I1123 08:13:54.842517 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="31d1ce15-700f-47fb-b2c3-674874e8c615" containerName="registry-server" Nov 23 08:13:54 crc kubenswrapper[4681]: I1123 08:13:54.843831 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7j5lg" Nov 23 08:13:54 crc kubenswrapper[4681]: I1123 08:13:54.855319 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7j5lg"] Nov 23 08:13:54 crc kubenswrapper[4681]: I1123 08:13:54.926200 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8svb\" (UniqueName: \"kubernetes.io/projected/b29c51aa-2f96-4bc0-911a-30c8f2261cc5-kube-api-access-f8svb\") pod \"redhat-marketplace-7j5lg\" (UID: \"b29c51aa-2f96-4bc0-911a-30c8f2261cc5\") " pod="openshift-marketplace/redhat-marketplace-7j5lg" Nov 23 08:13:54 crc kubenswrapper[4681]: I1123 08:13:54.926338 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b29c51aa-2f96-4bc0-911a-30c8f2261cc5-utilities\") pod \"redhat-marketplace-7j5lg\" (UID: \"b29c51aa-2f96-4bc0-911a-30c8f2261cc5\") " pod="openshift-marketplace/redhat-marketplace-7j5lg" Nov 23 08:13:54 crc kubenswrapper[4681]: I1123 08:13:54.926448 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b29c51aa-2f96-4bc0-911a-30c8f2261cc5-catalog-content\") pod \"redhat-marketplace-7j5lg\" (UID: \"b29c51aa-2f96-4bc0-911a-30c8f2261cc5\") " pod="openshift-marketplace/redhat-marketplace-7j5lg" Nov 23 08:13:55 crc kubenswrapper[4681]: I1123 08:13:55.028642 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8svb\" (UniqueName: \"kubernetes.io/projected/b29c51aa-2f96-4bc0-911a-30c8f2261cc5-kube-api-access-f8svb\") pod \"redhat-marketplace-7j5lg\" (UID: \"b29c51aa-2f96-4bc0-911a-30c8f2261cc5\") " pod="openshift-marketplace/redhat-marketplace-7j5lg" Nov 23 08:13:55 crc kubenswrapper[4681]: I1123 08:13:55.028943 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b29c51aa-2f96-4bc0-911a-30c8f2261cc5-utilities\") pod \"redhat-marketplace-7j5lg\" (UID: \"b29c51aa-2f96-4bc0-911a-30c8f2261cc5\") " pod="openshift-marketplace/redhat-marketplace-7j5lg" Nov 23 08:13:55 crc 
kubenswrapper[4681]: I1123 08:13:55.029091 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b29c51aa-2f96-4bc0-911a-30c8f2261cc5-catalog-content\") pod \"redhat-marketplace-7j5lg\" (UID: \"b29c51aa-2f96-4bc0-911a-30c8f2261cc5\") " pod="openshift-marketplace/redhat-marketplace-7j5lg" Nov 23 08:13:55 crc kubenswrapper[4681]: I1123 08:13:55.029298 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b29c51aa-2f96-4bc0-911a-30c8f2261cc5-utilities\") pod \"redhat-marketplace-7j5lg\" (UID: \"b29c51aa-2f96-4bc0-911a-30c8f2261cc5\") " pod="openshift-marketplace/redhat-marketplace-7j5lg" Nov 23 08:13:55 crc kubenswrapper[4681]: I1123 08:13:55.029666 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b29c51aa-2f96-4bc0-911a-30c8f2261cc5-catalog-content\") pod \"redhat-marketplace-7j5lg\" (UID: \"b29c51aa-2f96-4bc0-911a-30c8f2261cc5\") " pod="openshift-marketplace/redhat-marketplace-7j5lg" Nov 23 08:13:55 crc kubenswrapper[4681]: I1123 08:13:55.047853 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8svb\" (UniqueName: \"kubernetes.io/projected/b29c51aa-2f96-4bc0-911a-30c8f2261cc5-kube-api-access-f8svb\") pod \"redhat-marketplace-7j5lg\" (UID: \"b29c51aa-2f96-4bc0-911a-30c8f2261cc5\") " pod="openshift-marketplace/redhat-marketplace-7j5lg" Nov 23 08:13:55 crc kubenswrapper[4681]: I1123 08:13:55.159732 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7j5lg" Nov 23 08:13:55 crc kubenswrapper[4681]: W1123 08:13:55.631674 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb29c51aa_2f96_4bc0_911a_30c8f2261cc5.slice/crio-fe8c798e50aaba6b476dde00af93e31ff6e34c9ded3ef94eff8e21c0285bc353 WatchSource:0}: Error finding container fe8c798e50aaba6b476dde00af93e31ff6e34c9ded3ef94eff8e21c0285bc353: Status 404 returned error can't find the container with id fe8c798e50aaba6b476dde00af93e31ff6e34c9ded3ef94eff8e21c0285bc353 Nov 23 08:13:55 crc kubenswrapper[4681]: I1123 08:13:55.640951 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7j5lg"] Nov 23 08:13:56 crc kubenswrapper[4681]: I1123 08:13:56.097099 4681 generic.go:334] "Generic (PLEG): container finished" podID="b29c51aa-2f96-4bc0-911a-30c8f2261cc5" containerID="e9dab3a30907f6aadcb0c7d47d25cb7961e4d5aae0584462afeff2c15c847e97" exitCode=0 Nov 23 08:13:56 crc kubenswrapper[4681]: I1123 08:13:56.097179 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7j5lg" event={"ID":"b29c51aa-2f96-4bc0-911a-30c8f2261cc5","Type":"ContainerDied","Data":"e9dab3a30907f6aadcb0c7d47d25cb7961e4d5aae0584462afeff2c15c847e97"} Nov 23 08:13:56 crc kubenswrapper[4681]: I1123 08:13:56.097226 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7j5lg" event={"ID":"b29c51aa-2f96-4bc0-911a-30c8f2261cc5","Type":"ContainerStarted","Data":"fe8c798e50aaba6b476dde00af93e31ff6e34c9ded3ef94eff8e21c0285bc353"} Nov 23 08:13:56 crc kubenswrapper[4681]: I1123 08:13:56.100710 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 08:13:57 crc kubenswrapper[4681]: I1123 
Nov 23 08:13:57 crc kubenswrapper[4681]: I1123 08:13:57.109233 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7j5lg" event={"ID":"b29c51aa-2f96-4bc0-911a-30c8f2261cc5","Type":"ContainerStarted","Data":"1790b794c712d37e20c2d3f14e22d686ed643af26e373fbf23230fa87c4fed38"}
Nov 23 08:13:58 crc kubenswrapper[4681]: I1123 08:13:58.123798 4681 generic.go:334] "Generic (PLEG): container finished" podID="b29c51aa-2f96-4bc0-911a-30c8f2261cc5" containerID="1790b794c712d37e20c2d3f14e22d686ed643af26e373fbf23230fa87c4fed38" exitCode=0
Nov 23 08:13:58 crc kubenswrapper[4681]: I1123 08:13:58.123918 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7j5lg" event={"ID":"b29c51aa-2f96-4bc0-911a-30c8f2261cc5","Type":"ContainerDied","Data":"1790b794c712d37e20c2d3f14e22d686ed643af26e373fbf23230fa87c4fed38"}
Nov 23 08:13:59 crc kubenswrapper[4681]: I1123 08:13:59.135212 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7j5lg" event={"ID":"b29c51aa-2f96-4bc0-911a-30c8f2261cc5","Type":"ContainerStarted","Data":"cfbdd50315388f9120a8e893660cba75f056897cdb430d4c144145981e1eadbb"}
Nov 23 08:13:59 crc kubenswrapper[4681]: I1123 08:13:59.157842 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7j5lg" podStartSLOduration=2.651566298 podStartE2EDuration="5.157816876s" podCreationTimestamp="2025-11-23 08:13:54 +0000 UTC" firstStartedPulling="2025-11-23 08:13:56.10040877 +0000 UTC m=+5373.169918007" lastFinishedPulling="2025-11-23 08:13:58.606659347 +0000 UTC m=+5375.676168585" observedRunningTime="2025-11-23 08:13:59.150957767 +0000 UTC m=+5376.220467004" watchObservedRunningTime="2025-11-23 08:13:59.157816876 +0000 UTC m=+5376.227326113"
Nov 23 08:14:05 crc kubenswrapper[4681]: I1123 08:14:05.160588 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7j5lg"
Nov 23 08:14:05 crc kubenswrapper[4681]: I1123 08:14:05.161278 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7j5lg"
Nov 23 08:14:05 crc kubenswrapper[4681]: I1123 08:14:05.201916 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7j5lg"
Nov 23 08:14:05 crc kubenswrapper[4681]: I1123 08:14:05.268265 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7j5lg"
Nov 23 08:14:05 crc kubenswrapper[4681]: I1123 08:14:05.435722 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7j5lg"]
Nov 23 08:14:07 crc kubenswrapper[4681]: I1123 08:14:07.210162 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7j5lg" podUID="b29c51aa-2f96-4bc0-911a-30c8f2261cc5" containerName="registry-server" containerID="cri-o://cfbdd50315388f9120a8e893660cba75f056897cdb430d4c144145981e1eadbb" gracePeriod=2
Nov 23 08:14:07 crc kubenswrapper[4681]: I1123 08:14:07.661077 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7j5lg"
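[annotation] The probe transitions above (startup "unhealthy" then "started", readiness "" then "ready") follow the standard gating rule: readiness results are not trusted until the startup probe has succeeded once. A toy model of that rule, my illustration rather than kubelet's prober code:

```go
package main

import "fmt"

type podProbes struct {
	started bool // latched once the startup probe passes
	ready   bool
}

// observe applies one probe result and returns effective readiness.
func (p *podProbes) observe(probe string, ok bool) bool {
	switch probe {
	case "startup":
		if ok {
			p.started = true
		}
	case "readiness":
		p.ready = ok
	}
	return p.started && p.ready // never ready before startup completes
}

func main() {
	p := &podProbes{}
	fmt.Println(p.observe("startup", false))  // false: still unhealthy
	fmt.Println(p.observe("startup", true))   // false: started, not yet ready
	fmt.Println(p.observe("readiness", true)) // true: ready
}
```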
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7j5lg" Nov 23 08:14:07 crc kubenswrapper[4681]: I1123 08:14:07.710789 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b29c51aa-2f96-4bc0-911a-30c8f2261cc5-catalog-content\") pod \"b29c51aa-2f96-4bc0-911a-30c8f2261cc5\" (UID: \"b29c51aa-2f96-4bc0-911a-30c8f2261cc5\") " Nov 23 08:14:07 crc kubenswrapper[4681]: I1123 08:14:07.711068 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8svb\" (UniqueName: \"kubernetes.io/projected/b29c51aa-2f96-4bc0-911a-30c8f2261cc5-kube-api-access-f8svb\") pod \"b29c51aa-2f96-4bc0-911a-30c8f2261cc5\" (UID: \"b29c51aa-2f96-4bc0-911a-30c8f2261cc5\") " Nov 23 08:14:07 crc kubenswrapper[4681]: I1123 08:14:07.711109 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b29c51aa-2f96-4bc0-911a-30c8f2261cc5-utilities\") pod \"b29c51aa-2f96-4bc0-911a-30c8f2261cc5\" (UID: \"b29c51aa-2f96-4bc0-911a-30c8f2261cc5\") " Nov 23 08:14:07 crc kubenswrapper[4681]: I1123 08:14:07.713692 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b29c51aa-2f96-4bc0-911a-30c8f2261cc5-utilities" (OuterVolumeSpecName: "utilities") pod "b29c51aa-2f96-4bc0-911a-30c8f2261cc5" (UID: "b29c51aa-2f96-4bc0-911a-30c8f2261cc5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:14:07 crc kubenswrapper[4681]: I1123 08:14:07.717195 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b29c51aa-2f96-4bc0-911a-30c8f2261cc5-kube-api-access-f8svb" (OuterVolumeSpecName: "kube-api-access-f8svb") pod "b29c51aa-2f96-4bc0-911a-30c8f2261cc5" (UID: "b29c51aa-2f96-4bc0-911a-30c8f2261cc5"). InnerVolumeSpecName "kube-api-access-f8svb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:14:07 crc kubenswrapper[4681]: I1123 08:14:07.727371 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b29c51aa-2f96-4bc0-911a-30c8f2261cc5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b29c51aa-2f96-4bc0-911a-30c8f2261cc5" (UID: "b29c51aa-2f96-4bc0-911a-30c8f2261cc5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:14:07 crc kubenswrapper[4681]: I1123 08:14:07.815037 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8svb\" (UniqueName: \"kubernetes.io/projected/b29c51aa-2f96-4bc0-911a-30c8f2261cc5-kube-api-access-f8svb\") on node \"crc\" DevicePath \"\"" Nov 23 08:14:07 crc kubenswrapper[4681]: I1123 08:14:07.815075 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b29c51aa-2f96-4bc0-911a-30c8f2261cc5-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:14:07 crc kubenswrapper[4681]: I1123 08:14:07.815086 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b29c51aa-2f96-4bc0-911a-30c8f2261cc5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:14:08 crc kubenswrapper[4681]: I1123 08:14:08.222520 4681 generic.go:334] "Generic (PLEG): container finished" podID="b29c51aa-2f96-4bc0-911a-30c8f2261cc5" containerID="cfbdd50315388f9120a8e893660cba75f056897cdb430d4c144145981e1eadbb" exitCode=0 Nov 23 08:14:08 crc kubenswrapper[4681]: I1123 08:14:08.222600 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7j5lg" Nov 23 08:14:08 crc kubenswrapper[4681]: I1123 08:14:08.222619 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7j5lg" event={"ID":"b29c51aa-2f96-4bc0-911a-30c8f2261cc5","Type":"ContainerDied","Data":"cfbdd50315388f9120a8e893660cba75f056897cdb430d4c144145981e1eadbb"} Nov 23 08:14:08 crc kubenswrapper[4681]: I1123 08:14:08.223683 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7j5lg" event={"ID":"b29c51aa-2f96-4bc0-911a-30c8f2261cc5","Type":"ContainerDied","Data":"fe8c798e50aaba6b476dde00af93e31ff6e34c9ded3ef94eff8e21c0285bc353"} Nov 23 08:14:08 crc kubenswrapper[4681]: I1123 08:14:08.223771 4681 scope.go:117] "RemoveContainer" containerID="cfbdd50315388f9120a8e893660cba75f056897cdb430d4c144145981e1eadbb" Nov 23 08:14:08 crc kubenswrapper[4681]: I1123 08:14:08.247778 4681 scope.go:117] "RemoveContainer" containerID="1790b794c712d37e20c2d3f14e22d686ed643af26e373fbf23230fa87c4fed38" Nov 23 08:14:08 crc kubenswrapper[4681]: I1123 08:14:08.252853 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7j5lg"] Nov 23 08:14:08 crc kubenswrapper[4681]: I1123 08:14:08.259483 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7j5lg"] Nov 23 08:14:08 crc kubenswrapper[4681]: I1123 08:14:08.269797 4681 scope.go:117] "RemoveContainer" containerID="e9dab3a30907f6aadcb0c7d47d25cb7961e4d5aae0584462afeff2c15c847e97" Nov 23 08:14:08 crc kubenswrapper[4681]: I1123 08:14:08.306794 4681 scope.go:117] "RemoveContainer" containerID="cfbdd50315388f9120a8e893660cba75f056897cdb430d4c144145981e1eadbb" Nov 23 08:14:08 crc kubenswrapper[4681]: E1123 08:14:08.307231 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfbdd50315388f9120a8e893660cba75f056897cdb430d4c144145981e1eadbb\": container with ID starting with cfbdd50315388f9120a8e893660cba75f056897cdb430d4c144145981e1eadbb not found: ID does not exist" containerID="cfbdd50315388f9120a8e893660cba75f056897cdb430d4c144145981e1eadbb" Nov 23 08:14:08 crc kubenswrapper[4681]: I1123 08:14:08.307266 4681 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfbdd50315388f9120a8e893660cba75f056897cdb430d4c144145981e1eadbb"} err="failed to get container status \"cfbdd50315388f9120a8e893660cba75f056897cdb430d4c144145981e1eadbb\": rpc error: code = NotFound desc = could not find container \"cfbdd50315388f9120a8e893660cba75f056897cdb430d4c144145981e1eadbb\": container with ID starting with cfbdd50315388f9120a8e893660cba75f056897cdb430d4c144145981e1eadbb not found: ID does not exist" Nov 23 08:14:08 crc kubenswrapper[4681]: I1123 08:14:08.307289 4681 scope.go:117] "RemoveContainer" containerID="1790b794c712d37e20c2d3f14e22d686ed643af26e373fbf23230fa87c4fed38" Nov 23 08:14:08 crc kubenswrapper[4681]: E1123 08:14:08.307649 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1790b794c712d37e20c2d3f14e22d686ed643af26e373fbf23230fa87c4fed38\": container with ID starting with 1790b794c712d37e20c2d3f14e22d686ed643af26e373fbf23230fa87c4fed38 not found: ID does not exist" containerID="1790b794c712d37e20c2d3f14e22d686ed643af26e373fbf23230fa87c4fed38" Nov 23 08:14:08 crc kubenswrapper[4681]: I1123 08:14:08.307673 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1790b794c712d37e20c2d3f14e22d686ed643af26e373fbf23230fa87c4fed38"} err="failed to get container status \"1790b794c712d37e20c2d3f14e22d686ed643af26e373fbf23230fa87c4fed38\": rpc error: code = NotFound desc = could not find container \"1790b794c712d37e20c2d3f14e22d686ed643af26e373fbf23230fa87c4fed38\": container with ID starting with 1790b794c712d37e20c2d3f14e22d686ed643af26e373fbf23230fa87c4fed38 not found: ID does not exist" Nov 23 08:14:08 crc kubenswrapper[4681]: I1123 08:14:08.307687 4681 scope.go:117] "RemoveContainer" containerID="e9dab3a30907f6aadcb0c7d47d25cb7961e4d5aae0584462afeff2c15c847e97" Nov 23 08:14:08 crc kubenswrapper[4681]: E1123 08:14:08.307917 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9dab3a30907f6aadcb0c7d47d25cb7961e4d5aae0584462afeff2c15c847e97\": container with ID starting with e9dab3a30907f6aadcb0c7d47d25cb7961e4d5aae0584462afeff2c15c847e97 not found: ID does not exist" containerID="e9dab3a30907f6aadcb0c7d47d25cb7961e4d5aae0584462afeff2c15c847e97" Nov 23 08:14:08 crc kubenswrapper[4681]: I1123 08:14:08.307937 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9dab3a30907f6aadcb0c7d47d25cb7961e4d5aae0584462afeff2c15c847e97"} err="failed to get container status \"e9dab3a30907f6aadcb0c7d47d25cb7961e4d5aae0584462afeff2c15c847e97\": rpc error: code = NotFound desc = could not find container \"e9dab3a30907f6aadcb0c7d47d25cb7961e4d5aae0584462afeff2c15c847e97\": container with ID starting with e9dab3a30907f6aadcb0c7d47d25cb7961e4d5aae0584462afeff2c15c847e97 not found: ID does not exist" Nov 23 08:14:09 crc kubenswrapper[4681]: I1123 08:14:09.260662 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b29c51aa-2f96-4bc0-911a-30c8f2261cc5" path="/var/lib/kubelet/pods/b29c51aa-2f96-4bc0-911a-30c8f2261cc5/volumes" Nov 23 08:14:42 crc kubenswrapper[4681]: I1123 08:14:42.295816 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
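[annotation] The recurring "connect: connection refused" on 127.0.0.1:8798/health means nothing was listening on that port: the daemon process was down, so the liveness probe failed at the TCP layer before any HTTP exchange. For comparison, a minimal listener that would satisfy such an HTTP GET probe (illustrative only; the real machine-config-daemon handler is not shown in this log):

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK) // any status >= 200 and < 400 counts as probe success
	})
	log.Fatal(http.ListenAndServe("127.0.0.1:8798", nil))
}
```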
Nov 23 08:14:42 crc kubenswrapper[4681]: I1123 08:14:42.296356 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 08:15:00 crc kubenswrapper[4681]: I1123 08:15:00.136666 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398095-22h92"]
Nov 23 08:15:00 crc kubenswrapper[4681]: E1123 08:15:00.137674 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b29c51aa-2f96-4bc0-911a-30c8f2261cc5" containerName="extract-content"
Nov 23 08:15:00 crc kubenswrapper[4681]: I1123 08:15:00.137689 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="b29c51aa-2f96-4bc0-911a-30c8f2261cc5" containerName="extract-content"
Nov 23 08:15:00 crc kubenswrapper[4681]: E1123 08:15:00.137720 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b29c51aa-2f96-4bc0-911a-30c8f2261cc5" containerName="extract-utilities"
Nov 23 08:15:00 crc kubenswrapper[4681]: I1123 08:15:00.137742 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="b29c51aa-2f96-4bc0-911a-30c8f2261cc5" containerName="extract-utilities"
Nov 23 08:15:00 crc kubenswrapper[4681]: E1123 08:15:00.137750 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b29c51aa-2f96-4bc0-911a-30c8f2261cc5" containerName="registry-server"
Nov 23 08:15:00 crc kubenswrapper[4681]: I1123 08:15:00.137755 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="b29c51aa-2f96-4bc0-911a-30c8f2261cc5" containerName="registry-server"
Nov 23 08:15:00 crc kubenswrapper[4681]: I1123 08:15:00.137954 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="b29c51aa-2f96-4bc0-911a-30c8f2261cc5" containerName="registry-server"
Nov 23 08:15:00 crc kubenswrapper[4681]: I1123 08:15:00.138578 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-22h92"
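[annotation] collect-profiles-29398095-22h92 is a Job spawned by the OLM collect-profiles CronJob; by the usual Kubernetes CronJob naming convention, the numeric suffix is the scheduled time in minutes since the Unix epoch. A quick check that it matches the 08:15:00 timestamps here:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const scheduledMinutes = 29398095 // suffix from the Job name
	t := time.Unix(scheduledMinutes*60, 0).UTC()
	fmt.Println(t) // 2025-11-23 08:15:00 +0000 UTC
}
```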
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-22h92" Nov 23 08:15:00 crc kubenswrapper[4681]: I1123 08:15:00.141490 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 08:15:00 crc kubenswrapper[4681]: I1123 08:15:00.143170 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 08:15:00 crc kubenswrapper[4681]: I1123 08:15:00.157184 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398095-22h92"] Nov 23 08:15:00 crc kubenswrapper[4681]: I1123 08:15:00.312029 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9-secret-volume\") pod \"collect-profiles-29398095-22h92\" (UID: \"8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-22h92" Nov 23 08:15:00 crc kubenswrapper[4681]: I1123 08:15:00.312419 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plkl6\" (UniqueName: \"kubernetes.io/projected/8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9-kube-api-access-plkl6\") pod \"collect-profiles-29398095-22h92\" (UID: \"8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-22h92" Nov 23 08:15:00 crc kubenswrapper[4681]: I1123 08:15:00.312566 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9-config-volume\") pod \"collect-profiles-29398095-22h92\" (UID: \"8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-22h92" Nov 23 08:15:00 crc kubenswrapper[4681]: I1123 08:15:00.414587 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plkl6\" (UniqueName: \"kubernetes.io/projected/8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9-kube-api-access-plkl6\") pod \"collect-profiles-29398095-22h92\" (UID: \"8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-22h92" Nov 23 08:15:00 crc kubenswrapper[4681]: I1123 08:15:00.414684 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9-config-volume\") pod \"collect-profiles-29398095-22h92\" (UID: \"8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-22h92" Nov 23 08:15:00 crc kubenswrapper[4681]: I1123 08:15:00.414712 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9-secret-volume\") pod \"collect-profiles-29398095-22h92\" (UID: \"8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-22h92" Nov 23 08:15:00 crc kubenswrapper[4681]: I1123 08:15:00.416129 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9-config-volume\") pod 
\"collect-profiles-29398095-22h92\" (UID: \"8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-22h92" Nov 23 08:15:00 crc kubenswrapper[4681]: I1123 08:15:00.421877 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9-secret-volume\") pod \"collect-profiles-29398095-22h92\" (UID: \"8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-22h92" Nov 23 08:15:00 crc kubenswrapper[4681]: I1123 08:15:00.429568 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plkl6\" (UniqueName: \"kubernetes.io/projected/8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9-kube-api-access-plkl6\") pod \"collect-profiles-29398095-22h92\" (UID: \"8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-22h92" Nov 23 08:15:00 crc kubenswrapper[4681]: I1123 08:15:00.454852 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-22h92" Nov 23 08:15:00 crc kubenswrapper[4681]: I1123 08:15:00.852153 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398095-22h92"] Nov 23 08:15:01 crc kubenswrapper[4681]: I1123 08:15:01.636157 4681 generic.go:334] "Generic (PLEG): container finished" podID="8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9" containerID="3a39aaffcacd41f77be272896e12dc21e06439ed6613f2b2903580ca0b67ff24" exitCode=0 Nov 23 08:15:01 crc kubenswrapper[4681]: I1123 08:15:01.636491 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-22h92" event={"ID":"8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9","Type":"ContainerDied","Data":"3a39aaffcacd41f77be272896e12dc21e06439ed6613f2b2903580ca0b67ff24"} Nov 23 08:15:01 crc kubenswrapper[4681]: I1123 08:15:01.636597 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-22h92" event={"ID":"8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9","Type":"ContainerStarted","Data":"d99ec7928ddc86c626f906db66fb8222824b4da337e7d806f0b85ae4c93adf9f"} Nov 23 08:15:02 crc kubenswrapper[4681]: I1123 08:15:02.961083 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-22h92" Nov 23 08:15:03 crc kubenswrapper[4681]: I1123 08:15:03.063384 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9-config-volume\") pod \"8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9\" (UID: \"8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9\") " Nov 23 08:15:03 crc kubenswrapper[4681]: I1123 08:15:03.063594 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plkl6\" (UniqueName: \"kubernetes.io/projected/8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9-kube-api-access-plkl6\") pod \"8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9\" (UID: \"8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9\") " Nov 23 08:15:03 crc kubenswrapper[4681]: I1123 08:15:03.063828 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9-secret-volume\") pod \"8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9\" (UID: \"8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9\") " Nov 23 08:15:03 crc kubenswrapper[4681]: I1123 08:15:03.064174 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9-config-volume" (OuterVolumeSpecName: "config-volume") pod "8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9" (UID: "8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:15:03 crc kubenswrapper[4681]: I1123 08:15:03.064871 4681 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 08:15:03 crc kubenswrapper[4681]: I1123 08:15:03.070501 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9" (UID: "8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:15:03 crc kubenswrapper[4681]: I1123 08:15:03.070689 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9-kube-api-access-plkl6" (OuterVolumeSpecName: "kube-api-access-plkl6") pod "8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9" (UID: "8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9"). InnerVolumeSpecName "kube-api-access-plkl6". 
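
[Editor's note] The reconciler_common.go entries bracket a desired-state/actual-state loop: the three volumes named in the pod spec are verified, mounted, and, once the pod is deleted, unmounted and reported detached. A toy reconciler in that spirit — the volume names come from the log; the loop itself is a sketch, not the kubelet volume manager:

```go
package main

import "fmt"

// reconcile mounts volumes that are desired but absent and unmounts volumes
// that are mounted but no longer desired, mirroring the MountVolume /
// UnmountVolume pairs in the entries above.
func reconcile(desired, mounted map[string]bool) {
	for v := range desired {
		if !mounted[v] {
			fmt.Printf("MountVolume.SetUp for %q\n", v)
			mounted[v] = true
		}
	}
	for v := range mounted {
		if !desired[v] {
			fmt.Printf("UnmountVolume.TearDown for %q\n", v)
			delete(mounted, v)
		}
	}
}

func main() {
	mounted := map[string]bool{}
	// Pod admitted: its spec declares three volumes (cf. collect-profiles).
	desired := map[string]bool{
		"secret-volume": true, "config-volume": true, "kube-api-access-plkl6": true,
	}
	reconcile(desired, mounted)
	// Pod deleted: the desired set empties and everything is torn down.
	reconcile(map[string]bool{}, mounted)
}
```
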
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:15:03 crc kubenswrapper[4681]: I1123 08:15:03.167343 4681 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 08:15:03 crc kubenswrapper[4681]: I1123 08:15:03.167375 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plkl6\" (UniqueName: \"kubernetes.io/projected/8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9-kube-api-access-plkl6\") on node \"crc\" DevicePath \"\"" Nov 23 08:15:03 crc kubenswrapper[4681]: I1123 08:15:03.659764 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-22h92" event={"ID":"8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9","Type":"ContainerDied","Data":"d99ec7928ddc86c626f906db66fb8222824b4da337e7d806f0b85ae4c93adf9f"} Nov 23 08:15:03 crc kubenswrapper[4681]: I1123 08:15:03.659807 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d99ec7928ddc86c626f906db66fb8222824b4da337e7d806f0b85ae4c93adf9f" Nov 23 08:15:03 crc kubenswrapper[4681]: I1123 08:15:03.659823 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-22h92" Nov 23 08:15:04 crc kubenswrapper[4681]: I1123 08:15:04.025158 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398050-pqx4s"] Nov 23 08:15:04 crc kubenswrapper[4681]: I1123 08:15:04.037293 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398050-pqx4s"] Nov 23 08:15:05 crc kubenswrapper[4681]: I1123 08:15:05.263444 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="712f3249-3396-4198-85ad-a74af10b9c24" path="/var/lib/kubelet/pods/712f3249-3396-4198-85ad-a74af10b9c24/volumes" Nov 23 08:15:12 crc kubenswrapper[4681]: I1123 08:15:12.295682 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:15:12 crc kubenswrapper[4681]: I1123 08:15:12.296242 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:15:42 crc kubenswrapper[4681]: I1123 08:15:42.295963 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:15:42 crc kubenswrapper[4681]: I1123 08:15:42.296565 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:15:42 crc kubenswrapper[4681]: 
I1123 08:15:42.296611 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" Nov 23 08:15:42 crc kubenswrapper[4681]: I1123 08:15:42.297089 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f3b67049999c07ad50acb700f89dfe77789502e8b62e4fa6dd0204b918283a04"} pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 08:15:42 crc kubenswrapper[4681]: I1123 08:15:42.297143 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" containerID="cri-o://f3b67049999c07ad50acb700f89dfe77789502e8b62e4fa6dd0204b918283a04" gracePeriod=600 Nov 23 08:15:42 crc kubenswrapper[4681]: I1123 08:15:42.995655 4681 generic.go:334] "Generic (PLEG): container finished" podID="539dc58c-e752-43c8-bdef-af87528b76f3" containerID="f3b67049999c07ad50acb700f89dfe77789502e8b62e4fa6dd0204b918283a04" exitCode=0 Nov 23 08:15:42 crc kubenswrapper[4681]: I1123 08:15:42.996130 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerDied","Data":"f3b67049999c07ad50acb700f89dfe77789502e8b62e4fa6dd0204b918283a04"} Nov 23 08:15:42 crc kubenswrapper[4681]: I1123 08:15:42.996172 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerStarted","Data":"627017b2e50bb6c85944805c7f0eb614f68d81f157510d798194642ebd7c85b5"} Nov 23 08:15:42 crc kubenswrapper[4681]: I1123 08:15:42.996190 4681 scope.go:117] "RemoveContainer" containerID="6d7563356ec35cc7f255fa32e1554c261814b2cb897becc82645050ca40aae2f" Nov 23 08:15:47 crc kubenswrapper[4681]: I1123 08:15:47.994106 4681 scope.go:117] "RemoveContainer" containerID="1400425a50a57d3e4717335fe26b4dff258a4d9dd7a31eef5ba7e90660b4ab89" Nov 23 08:16:16 crc kubenswrapper[4681]: I1123 08:16:16.615667 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-69vfx"] Nov 23 08:16:16 crc kubenswrapper[4681]: E1123 08:16:16.616401 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9" containerName="collect-profiles" Nov 23 08:16:16 crc kubenswrapper[4681]: I1123 08:16:16.616414 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9" containerName="collect-profiles" Nov 23 08:16:16 crc kubenswrapper[4681]: I1123 08:16:16.616624 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9" containerName="collect-profiles" Nov 23 08:16:16 crc kubenswrapper[4681]: I1123 08:16:16.617863 4681 util.go:30] "No sandbox for pod can be found. 
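
[Editor's note] "Killing container with a grace period" above is the usual two-phase stop: SIGTERM first, SIGKILL if the deadline passes (gracePeriod=600 is ten minutes; the daemon here exits cleanly with exitCode=0 before that). A sketch of the same ordering with os/exec on a plain Unix process — CRI-O applies it to the container's init process, this is not runtime code:

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace sends SIGTERM and escalates to SIGKILL when the grace
// period runs out, the ordering the kubelet requests from the runtime.
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) {
	_ = cmd.Process.Signal(syscall.SIGTERM)
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		fmt.Println("exited within grace period:", err)
	case <-time.After(grace):
		_ = cmd.Process.Kill() // SIGKILL
		fmt.Println("grace period expired, killed")
	}
}

func main() {
	cmd := exec.Command("sleep", "600")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	stopWithGrace(cmd, 2*time.Second)
}
```
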
Need to start a new one" pod="openshift-marketplace/certified-operators-69vfx" Nov 23 08:16:16 crc kubenswrapper[4681]: I1123 08:16:16.619514 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72793cc8-15ca-4981-9577-442d58d15f0f-catalog-content\") pod \"certified-operators-69vfx\" (UID: \"72793cc8-15ca-4981-9577-442d58d15f0f\") " pod="openshift-marketplace/certified-operators-69vfx" Nov 23 08:16:16 crc kubenswrapper[4681]: I1123 08:16:16.619587 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72793cc8-15ca-4981-9577-442d58d15f0f-utilities\") pod \"certified-operators-69vfx\" (UID: \"72793cc8-15ca-4981-9577-442d58d15f0f\") " pod="openshift-marketplace/certified-operators-69vfx" Nov 23 08:16:16 crc kubenswrapper[4681]: I1123 08:16:16.619754 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zcsv\" (UniqueName: \"kubernetes.io/projected/72793cc8-15ca-4981-9577-442d58d15f0f-kube-api-access-5zcsv\") pod \"certified-operators-69vfx\" (UID: \"72793cc8-15ca-4981-9577-442d58d15f0f\") " pod="openshift-marketplace/certified-operators-69vfx" Nov 23 08:16:16 crc kubenswrapper[4681]: I1123 08:16:16.635545 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-69vfx"] Nov 23 08:16:16 crc kubenswrapper[4681]: I1123 08:16:16.721528 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zcsv\" (UniqueName: \"kubernetes.io/projected/72793cc8-15ca-4981-9577-442d58d15f0f-kube-api-access-5zcsv\") pod \"certified-operators-69vfx\" (UID: \"72793cc8-15ca-4981-9577-442d58d15f0f\") " pod="openshift-marketplace/certified-operators-69vfx" Nov 23 08:16:16 crc kubenswrapper[4681]: I1123 08:16:16.721831 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72793cc8-15ca-4981-9577-442d58d15f0f-catalog-content\") pod \"certified-operators-69vfx\" (UID: \"72793cc8-15ca-4981-9577-442d58d15f0f\") " pod="openshift-marketplace/certified-operators-69vfx" Nov 23 08:16:16 crc kubenswrapper[4681]: I1123 08:16:16.721915 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72793cc8-15ca-4981-9577-442d58d15f0f-utilities\") pod \"certified-operators-69vfx\" (UID: \"72793cc8-15ca-4981-9577-442d58d15f0f\") " pod="openshift-marketplace/certified-operators-69vfx" Nov 23 08:16:16 crc kubenswrapper[4681]: I1123 08:16:16.722249 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72793cc8-15ca-4981-9577-442d58d15f0f-catalog-content\") pod \"certified-operators-69vfx\" (UID: \"72793cc8-15ca-4981-9577-442d58d15f0f\") " pod="openshift-marketplace/certified-operators-69vfx" Nov 23 08:16:16 crc kubenswrapper[4681]: I1123 08:16:16.722348 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72793cc8-15ca-4981-9577-442d58d15f0f-utilities\") pod \"certified-operators-69vfx\" (UID: \"72793cc8-15ca-4981-9577-442d58d15f0f\") " pod="openshift-marketplace/certified-operators-69vfx" Nov 23 08:16:16 crc kubenswrapper[4681]: I1123 08:16:16.739566 4681 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5zcsv\" (UniqueName: \"kubernetes.io/projected/72793cc8-15ca-4981-9577-442d58d15f0f-kube-api-access-5zcsv\") pod \"certified-operators-69vfx\" (UID: \"72793cc8-15ca-4981-9577-442d58d15f0f\") " pod="openshift-marketplace/certified-operators-69vfx" Nov 23 08:16:16 crc kubenswrapper[4681]: I1123 08:16:16.933206 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-69vfx" Nov 23 08:16:17 crc kubenswrapper[4681]: I1123 08:16:17.457948 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-69vfx"] Nov 23 08:16:18 crc kubenswrapper[4681]: I1123 08:16:18.249430 4681 generic.go:334] "Generic (PLEG): container finished" podID="72793cc8-15ca-4981-9577-442d58d15f0f" containerID="2d1b41369fabd227f1769e2f5e3d9fcdddde964ded31291f40924d0197f9f7a4" exitCode=0 Nov 23 08:16:18 crc kubenswrapper[4681]: I1123 08:16:18.249526 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-69vfx" event={"ID":"72793cc8-15ca-4981-9577-442d58d15f0f","Type":"ContainerDied","Data":"2d1b41369fabd227f1769e2f5e3d9fcdddde964ded31291f40924d0197f9f7a4"} Nov 23 08:16:18 crc kubenswrapper[4681]: I1123 08:16:18.250537 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-69vfx" event={"ID":"72793cc8-15ca-4981-9577-442d58d15f0f","Type":"ContainerStarted","Data":"0a5c07091e21bb4de579b44aefbd8f0db0ce42bba4b8b9a8b861990dedfc4180"} Nov 23 08:16:19 crc kubenswrapper[4681]: I1123 08:16:19.259885 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-69vfx" event={"ID":"72793cc8-15ca-4981-9577-442d58d15f0f","Type":"ContainerStarted","Data":"b1c731fff4d801ba93710e3312b1522cdb03b8733fe7e385614b43f4c652b569"} Nov 23 08:16:20 crc kubenswrapper[4681]: I1123 08:16:20.268707 4681 generic.go:334] "Generic (PLEG): container finished" podID="72793cc8-15ca-4981-9577-442d58d15f0f" containerID="b1c731fff4d801ba93710e3312b1522cdb03b8733fe7e385614b43f4c652b569" exitCode=0 Nov 23 08:16:20 crc kubenswrapper[4681]: I1123 08:16:20.268805 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-69vfx" event={"ID":"72793cc8-15ca-4981-9577-442d58d15f0f","Type":"ContainerDied","Data":"b1c731fff4d801ba93710e3312b1522cdb03b8733fe7e385614b43f4c652b569"} Nov 23 08:16:21 crc kubenswrapper[4681]: I1123 08:16:21.277620 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-69vfx" event={"ID":"72793cc8-15ca-4981-9577-442d58d15f0f","Type":"ContainerStarted","Data":"b1eff6946fada731538469630066d0163b29ac2f5277844dbb24763c48ae1cc4"} Nov 23 08:16:21 crc kubenswrapper[4681]: I1123 08:16:21.296174 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-69vfx" podStartSLOduration=2.707946255 podStartE2EDuration="5.296160052s" podCreationTimestamp="2025-11-23 08:16:16 +0000 UTC" firstStartedPulling="2025-11-23 08:16:18.251595044 +0000 UTC m=+5515.321104281" lastFinishedPulling="2025-11-23 08:16:20.839808841 +0000 UTC m=+5517.909318078" observedRunningTime="2025-11-23 08:16:21.289401443 +0000 UTC m=+5518.358910680" watchObservedRunningTime="2025-11-23 08:16:21.296160052 +0000 UTC m=+5518.365669288" Nov 23 08:16:23 crc kubenswrapper[4681]: I1123 08:16:23.999018 4681 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-9zxds"] Nov 23 08:16:24 crc kubenswrapper[4681]: I1123 08:16:24.001054 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9zxds" Nov 23 08:16:24 crc kubenswrapper[4681]: I1123 08:16:24.010496 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9zxds"] Nov 23 08:16:24 crc kubenswrapper[4681]: I1123 08:16:24.060192 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21-catalog-content\") pod \"redhat-operators-9zxds\" (UID: \"ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21\") " pod="openshift-marketplace/redhat-operators-9zxds" Nov 23 08:16:24 crc kubenswrapper[4681]: I1123 08:16:24.060472 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmvc4\" (UniqueName: \"kubernetes.io/projected/ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21-kube-api-access-qmvc4\") pod \"redhat-operators-9zxds\" (UID: \"ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21\") " pod="openshift-marketplace/redhat-operators-9zxds" Nov 23 08:16:24 crc kubenswrapper[4681]: I1123 08:16:24.060726 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21-utilities\") pod \"redhat-operators-9zxds\" (UID: \"ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21\") " pod="openshift-marketplace/redhat-operators-9zxds" Nov 23 08:16:24 crc kubenswrapper[4681]: I1123 08:16:24.161699 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21-catalog-content\") pod \"redhat-operators-9zxds\" (UID: \"ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21\") " pod="openshift-marketplace/redhat-operators-9zxds" Nov 23 08:16:24 crc kubenswrapper[4681]: I1123 08:16:24.161746 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmvc4\" (UniqueName: \"kubernetes.io/projected/ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21-kube-api-access-qmvc4\") pod \"redhat-operators-9zxds\" (UID: \"ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21\") " pod="openshift-marketplace/redhat-operators-9zxds" Nov 23 08:16:24 crc kubenswrapper[4681]: I1123 08:16:24.161789 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21-utilities\") pod \"redhat-operators-9zxds\" (UID: \"ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21\") " pod="openshift-marketplace/redhat-operators-9zxds" Nov 23 08:16:24 crc kubenswrapper[4681]: I1123 08:16:24.162187 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21-catalog-content\") pod \"redhat-operators-9zxds\" (UID: \"ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21\") " pod="openshift-marketplace/redhat-operators-9zxds" Nov 23 08:16:24 crc kubenswrapper[4681]: I1123 08:16:24.162203 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21-utilities\") pod \"redhat-operators-9zxds\" (UID: \"ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21\") " 
pod="openshift-marketplace/redhat-operators-9zxds" Nov 23 08:16:24 crc kubenswrapper[4681]: I1123 08:16:24.177880 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmvc4\" (UniqueName: \"kubernetes.io/projected/ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21-kube-api-access-qmvc4\") pod \"redhat-operators-9zxds\" (UID: \"ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21\") " pod="openshift-marketplace/redhat-operators-9zxds" Nov 23 08:16:24 crc kubenswrapper[4681]: I1123 08:16:24.318137 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9zxds" Nov 23 08:16:24 crc kubenswrapper[4681]: I1123 08:16:24.760078 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9zxds"] Nov 23 08:16:24 crc kubenswrapper[4681]: W1123 08:16:24.774275 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded3b1e41_d0b2_42bf_87dc_5d6a96b78d21.slice/crio-345c5a729e50a3d2b810728d67c137075d227f028592ef5f64c59933709240d0 WatchSource:0}: Error finding container 345c5a729e50a3d2b810728d67c137075d227f028592ef5f64c59933709240d0: Status 404 returned error can't find the container with id 345c5a729e50a3d2b810728d67c137075d227f028592ef5f64c59933709240d0 Nov 23 08:16:25 crc kubenswrapper[4681]: I1123 08:16:25.314437 4681 generic.go:334] "Generic (PLEG): container finished" podID="ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21" containerID="b359a0761aca8c6ee568b4effa983852deb6638856288a8b67038236771a04f0" exitCode=0 Nov 23 08:16:25 crc kubenswrapper[4681]: I1123 08:16:25.314591 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9zxds" event={"ID":"ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21","Type":"ContainerDied","Data":"b359a0761aca8c6ee568b4effa983852deb6638856288a8b67038236771a04f0"} Nov 23 08:16:25 crc kubenswrapper[4681]: I1123 08:16:25.314799 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9zxds" event={"ID":"ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21","Type":"ContainerStarted","Data":"345c5a729e50a3d2b810728d67c137075d227f028592ef5f64c59933709240d0"} Nov 23 08:16:26 crc kubenswrapper[4681]: I1123 08:16:26.323342 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9zxds" event={"ID":"ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21","Type":"ContainerStarted","Data":"5be6245cfdcc1a3a1eadd007c775a9d3ba562b18019cea2064e51e3345dce355"} Nov 23 08:16:26 crc kubenswrapper[4681]: I1123 08:16:26.934311 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-69vfx" Nov 23 08:16:26 crc kubenswrapper[4681]: I1123 08:16:26.934739 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-69vfx" Nov 23 08:16:26 crc kubenswrapper[4681]: I1123 08:16:26.976609 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-69vfx" Nov 23 08:16:27 crc kubenswrapper[4681]: I1123 08:16:27.368707 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-69vfx" Nov 23 08:16:28 crc kubenswrapper[4681]: I1123 08:16:28.345672 4681 generic.go:334] "Generic (PLEG): container finished" podID="ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21" 
containerID="5be6245cfdcc1a3a1eadd007c775a9d3ba562b18019cea2064e51e3345dce355" exitCode=0 Nov 23 08:16:28 crc kubenswrapper[4681]: I1123 08:16:28.345848 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9zxds" event={"ID":"ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21","Type":"ContainerDied","Data":"5be6245cfdcc1a3a1eadd007c775a9d3ba562b18019cea2064e51e3345dce355"} Nov 23 08:16:29 crc kubenswrapper[4681]: I1123 08:16:29.358348 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9zxds" event={"ID":"ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21","Type":"ContainerStarted","Data":"f2b2c779f6b7f699148af3c9f33ea7e1cef4612d31d5eeda20b0c1654a0aec75"} Nov 23 08:16:29 crc kubenswrapper[4681]: I1123 08:16:29.378036 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9zxds" podStartSLOduration=2.835465615 podStartE2EDuration="6.378017152s" podCreationTimestamp="2025-11-23 08:16:23 +0000 UTC" firstStartedPulling="2025-11-23 08:16:25.31642297 +0000 UTC m=+5522.385932207" lastFinishedPulling="2025-11-23 08:16:28.858974507 +0000 UTC m=+5525.928483744" observedRunningTime="2025-11-23 08:16:29.373397194 +0000 UTC m=+5526.442906431" watchObservedRunningTime="2025-11-23 08:16:29.378017152 +0000 UTC m=+5526.447526389" Nov 23 08:16:29 crc kubenswrapper[4681]: I1123 08:16:29.394948 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-69vfx"] Nov 23 08:16:29 crc kubenswrapper[4681]: I1123 08:16:29.395404 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-69vfx" podUID="72793cc8-15ca-4981-9577-442d58d15f0f" containerName="registry-server" containerID="cri-o://b1eff6946fada731538469630066d0163b29ac2f5277844dbb24763c48ae1cc4" gracePeriod=2 Nov 23 08:16:29 crc kubenswrapper[4681]: I1123 08:16:29.988030 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-69vfx" Nov 23 08:16:30 crc kubenswrapper[4681]: I1123 08:16:30.106043 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72793cc8-15ca-4981-9577-442d58d15f0f-catalog-content\") pod \"72793cc8-15ca-4981-9577-442d58d15f0f\" (UID: \"72793cc8-15ca-4981-9577-442d58d15f0f\") " Nov 23 08:16:30 crc kubenswrapper[4681]: I1123 08:16:30.106122 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zcsv\" (UniqueName: \"kubernetes.io/projected/72793cc8-15ca-4981-9577-442d58d15f0f-kube-api-access-5zcsv\") pod \"72793cc8-15ca-4981-9577-442d58d15f0f\" (UID: \"72793cc8-15ca-4981-9577-442d58d15f0f\") " Nov 23 08:16:30 crc kubenswrapper[4681]: I1123 08:16:30.106232 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72793cc8-15ca-4981-9577-442d58d15f0f-utilities\") pod \"72793cc8-15ca-4981-9577-442d58d15f0f\" (UID: \"72793cc8-15ca-4981-9577-442d58d15f0f\") " Nov 23 08:16:30 crc kubenswrapper[4681]: I1123 08:16:30.107823 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72793cc8-15ca-4981-9577-442d58d15f0f-utilities" (OuterVolumeSpecName: "utilities") pod "72793cc8-15ca-4981-9577-442d58d15f0f" (UID: "72793cc8-15ca-4981-9577-442d58d15f0f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:16:30 crc kubenswrapper[4681]: I1123 08:16:30.117266 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72793cc8-15ca-4981-9577-442d58d15f0f-kube-api-access-5zcsv" (OuterVolumeSpecName: "kube-api-access-5zcsv") pod "72793cc8-15ca-4981-9577-442d58d15f0f" (UID: "72793cc8-15ca-4981-9577-442d58d15f0f"). InnerVolumeSpecName "kube-api-access-5zcsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:16:30 crc kubenswrapper[4681]: I1123 08:16:30.161060 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72793cc8-15ca-4981-9577-442d58d15f0f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "72793cc8-15ca-4981-9577-442d58d15f0f" (UID: "72793cc8-15ca-4981-9577-442d58d15f0f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:16:30 crc kubenswrapper[4681]: I1123 08:16:30.209715 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72793cc8-15ca-4981-9577-442d58d15f0f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:16:30 crc kubenswrapper[4681]: I1123 08:16:30.209750 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zcsv\" (UniqueName: \"kubernetes.io/projected/72793cc8-15ca-4981-9577-442d58d15f0f-kube-api-access-5zcsv\") on node \"crc\" DevicePath \"\"" Nov 23 08:16:30 crc kubenswrapper[4681]: I1123 08:16:30.209769 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72793cc8-15ca-4981-9577-442d58d15f0f-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:16:30 crc kubenswrapper[4681]: I1123 08:16:30.370910 4681 generic.go:334] "Generic (PLEG): container finished" podID="72793cc8-15ca-4981-9577-442d58d15f0f" containerID="b1eff6946fada731538469630066d0163b29ac2f5277844dbb24763c48ae1cc4" exitCode=0 Nov 23 08:16:30 crc kubenswrapper[4681]: I1123 08:16:30.370958 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-69vfx" event={"ID":"72793cc8-15ca-4981-9577-442d58d15f0f","Type":"ContainerDied","Data":"b1eff6946fada731538469630066d0163b29ac2f5277844dbb24763c48ae1cc4"} Nov 23 08:16:30 crc kubenswrapper[4681]: I1123 08:16:30.370988 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-69vfx" event={"ID":"72793cc8-15ca-4981-9577-442d58d15f0f","Type":"ContainerDied","Data":"0a5c07091e21bb4de579b44aefbd8f0db0ce42bba4b8b9a8b861990dedfc4180"} Nov 23 08:16:30 crc kubenswrapper[4681]: I1123 08:16:30.371008 4681 scope.go:117] "RemoveContainer" containerID="b1eff6946fada731538469630066d0163b29ac2f5277844dbb24763c48ae1cc4" Nov 23 08:16:30 crc kubenswrapper[4681]: I1123 08:16:30.371164 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-69vfx" Nov 23 08:16:30 crc kubenswrapper[4681]: I1123 08:16:30.401499 4681 scope.go:117] "RemoveContainer" containerID="b1c731fff4d801ba93710e3312b1522cdb03b8733fe7e385614b43f4c652b569" Nov 23 08:16:30 crc kubenswrapper[4681]: I1123 08:16:30.407930 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-69vfx"] Nov 23 08:16:30 crc kubenswrapper[4681]: I1123 08:16:30.415399 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-69vfx"] Nov 23 08:16:30 crc kubenswrapper[4681]: I1123 08:16:30.425259 4681 scope.go:117] "RemoveContainer" containerID="2d1b41369fabd227f1769e2f5e3d9fcdddde964ded31291f40924d0197f9f7a4" Nov 23 08:16:30 crc kubenswrapper[4681]: I1123 08:16:30.457848 4681 scope.go:117] "RemoveContainer" containerID="b1eff6946fada731538469630066d0163b29ac2f5277844dbb24763c48ae1cc4" Nov 23 08:16:30 crc kubenswrapper[4681]: E1123 08:16:30.458199 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1eff6946fada731538469630066d0163b29ac2f5277844dbb24763c48ae1cc4\": container with ID starting with b1eff6946fada731538469630066d0163b29ac2f5277844dbb24763c48ae1cc4 not found: ID does not exist" containerID="b1eff6946fada731538469630066d0163b29ac2f5277844dbb24763c48ae1cc4" Nov 23 08:16:30 crc kubenswrapper[4681]: I1123 08:16:30.458249 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1eff6946fada731538469630066d0163b29ac2f5277844dbb24763c48ae1cc4"} err="failed to get container status \"b1eff6946fada731538469630066d0163b29ac2f5277844dbb24763c48ae1cc4\": rpc error: code = NotFound desc = could not find container \"b1eff6946fada731538469630066d0163b29ac2f5277844dbb24763c48ae1cc4\": container with ID starting with b1eff6946fada731538469630066d0163b29ac2f5277844dbb24763c48ae1cc4 not found: ID does not exist" Nov 23 08:16:30 crc kubenswrapper[4681]: I1123 08:16:30.458283 4681 scope.go:117] "RemoveContainer" containerID="b1c731fff4d801ba93710e3312b1522cdb03b8733fe7e385614b43f4c652b569" Nov 23 08:16:30 crc kubenswrapper[4681]: E1123 08:16:30.458601 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1c731fff4d801ba93710e3312b1522cdb03b8733fe7e385614b43f4c652b569\": container with ID starting with b1c731fff4d801ba93710e3312b1522cdb03b8733fe7e385614b43f4c652b569 not found: ID does not exist" containerID="b1c731fff4d801ba93710e3312b1522cdb03b8733fe7e385614b43f4c652b569" Nov 23 08:16:30 crc kubenswrapper[4681]: I1123 08:16:30.458640 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1c731fff4d801ba93710e3312b1522cdb03b8733fe7e385614b43f4c652b569"} err="failed to get container status \"b1c731fff4d801ba93710e3312b1522cdb03b8733fe7e385614b43f4c652b569\": rpc error: code = NotFound desc = could not find container \"b1c731fff4d801ba93710e3312b1522cdb03b8733fe7e385614b43f4c652b569\": container with ID starting with b1c731fff4d801ba93710e3312b1522cdb03b8733fe7e385614b43f4c652b569 not found: ID does not exist" Nov 23 08:16:30 crc kubenswrapper[4681]: I1123 08:16:30.458667 4681 scope.go:117] "RemoveContainer" containerID="2d1b41369fabd227f1769e2f5e3d9fcdddde964ded31291f40924d0197f9f7a4" Nov 23 08:16:30 crc kubenswrapper[4681]: E1123 08:16:30.458997 4681 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2d1b41369fabd227f1769e2f5e3d9fcdddde964ded31291f40924d0197f9f7a4\": container with ID starting with 2d1b41369fabd227f1769e2f5e3d9fcdddde964ded31291f40924d0197f9f7a4 not found: ID does not exist" containerID="2d1b41369fabd227f1769e2f5e3d9fcdddde964ded31291f40924d0197f9f7a4" Nov 23 08:16:30 crc kubenswrapper[4681]: I1123 08:16:30.459029 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d1b41369fabd227f1769e2f5e3d9fcdddde964ded31291f40924d0197f9f7a4"} err="failed to get container status \"2d1b41369fabd227f1769e2f5e3d9fcdddde964ded31291f40924d0197f9f7a4\": rpc error: code = NotFound desc = could not find container \"2d1b41369fabd227f1769e2f5e3d9fcdddde964ded31291f40924d0197f9f7a4\": container with ID starting with 2d1b41369fabd227f1769e2f5e3d9fcdddde964ded31291f40924d0197f9f7a4 not found: ID does not exist" Nov 23 08:16:31 crc kubenswrapper[4681]: I1123 08:16:31.261496 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72793cc8-15ca-4981-9577-442d58d15f0f" path="/var/lib/kubelet/pods/72793cc8-15ca-4981-9577-442d58d15f0f/volumes" Nov 23 08:16:34 crc kubenswrapper[4681]: I1123 08:16:34.319868 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9zxds" Nov 23 08:16:34 crc kubenswrapper[4681]: I1123 08:16:34.320631 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9zxds" Nov 23 08:16:35 crc kubenswrapper[4681]: I1123 08:16:35.359367 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9zxds" podUID="ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21" containerName="registry-server" probeResult="failure" output=< Nov 23 08:16:35 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Nov 23 08:16:35 crc kubenswrapper[4681]: > Nov 23 08:16:37 crc kubenswrapper[4681]: E1123 08:16:37.269936 4681 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 192.168.26.82:36094->192.168.26.82:41655: read tcp 192.168.26.82:36094->192.168.26.82:41655: read: connection reset by peer Nov 23 08:16:37 crc kubenswrapper[4681]: E1123 08:16:37.270108 4681 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.26.82:36094->192.168.26.82:41655: write tcp 192.168.26.82:36094->192.168.26.82:41655: write: broken pipe Nov 23 08:16:44 crc kubenswrapper[4681]: I1123 08:16:44.360134 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9zxds" Nov 23 08:16:44 crc kubenswrapper[4681]: I1123 08:16:44.408232 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9zxds" Nov 23 08:16:44 crc kubenswrapper[4681]: I1123 08:16:44.596340 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9zxds"] Nov 23 08:16:45 crc kubenswrapper[4681]: I1123 08:16:45.525007 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9zxds" podUID="ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21" containerName="registry-server" containerID="cri-o://f2b2c779f6b7f699148af3c9f33ea7e1cef4612d31d5eeda20b0c1654a0aec75" gracePeriod=2 Nov 23 08:16:45 crc kubenswrapper[4681]: I1123 08:16:45.934955 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9zxds" Nov 23 08:16:45 crc kubenswrapper[4681]: I1123 08:16:45.969818 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21-utilities\") pod \"ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21\" (UID: \"ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21\") " Nov 23 08:16:45 crc kubenswrapper[4681]: I1123 08:16:45.969961 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21-catalog-content\") pod \"ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21\" (UID: \"ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21\") " Nov 23 08:16:45 crc kubenswrapper[4681]: I1123 08:16:45.970216 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmvc4\" (UniqueName: \"kubernetes.io/projected/ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21-kube-api-access-qmvc4\") pod \"ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21\" (UID: \"ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21\") " Nov 23 08:16:45 crc kubenswrapper[4681]: I1123 08:16:45.971341 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21-utilities" (OuterVolumeSpecName: "utilities") pod "ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21" (UID: "ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:16:45 crc kubenswrapper[4681]: I1123 08:16:45.978727 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21-kube-api-access-qmvc4" (OuterVolumeSpecName: "kube-api-access-qmvc4") pod "ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21" (UID: "ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21"). InnerVolumeSpecName "kube-api-access-qmvc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:16:46 crc kubenswrapper[4681]: I1123 08:16:46.054548 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21" (UID: "ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:16:46 crc kubenswrapper[4681]: I1123 08:16:46.073034 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:16:46 crc kubenswrapper[4681]: I1123 08:16:46.073059 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:16:46 crc kubenswrapper[4681]: I1123 08:16:46.073072 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmvc4\" (UniqueName: \"kubernetes.io/projected/ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21-kube-api-access-qmvc4\") on node \"crc\" DevicePath \"\"" Nov 23 08:16:46 crc kubenswrapper[4681]: I1123 08:16:46.536857 4681 generic.go:334] "Generic (PLEG): container finished" podID="ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21" containerID="f2b2c779f6b7f699148af3c9f33ea7e1cef4612d31d5eeda20b0c1654a0aec75" exitCode=0 Nov 23 08:16:46 crc kubenswrapper[4681]: I1123 08:16:46.536906 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9zxds" event={"ID":"ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21","Type":"ContainerDied","Data":"f2b2c779f6b7f699148af3c9f33ea7e1cef4612d31d5eeda20b0c1654a0aec75"} Nov 23 08:16:46 crc kubenswrapper[4681]: I1123 08:16:46.536983 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9zxds" event={"ID":"ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21","Type":"ContainerDied","Data":"345c5a729e50a3d2b810728d67c137075d227f028592ef5f64c59933709240d0"} Nov 23 08:16:46 crc kubenswrapper[4681]: I1123 08:16:46.537006 4681 scope.go:117] "RemoveContainer" containerID="f2b2c779f6b7f699148af3c9f33ea7e1cef4612d31d5eeda20b0c1654a0aec75" Nov 23 08:16:46 crc kubenswrapper[4681]: I1123 08:16:46.537660 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9zxds" Nov 23 08:16:46 crc kubenswrapper[4681]: I1123 08:16:46.567226 4681 scope.go:117] "RemoveContainer" containerID="5be6245cfdcc1a3a1eadd007c775a9d3ba562b18019cea2064e51e3345dce355" Nov 23 08:16:46 crc kubenswrapper[4681]: I1123 08:16:46.567943 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9zxds"] Nov 23 08:16:46 crc kubenswrapper[4681]: I1123 08:16:46.575030 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9zxds"] Nov 23 08:16:46 crc kubenswrapper[4681]: I1123 08:16:46.587430 4681 scope.go:117] "RemoveContainer" containerID="b359a0761aca8c6ee568b4effa983852deb6638856288a8b67038236771a04f0" Nov 23 08:16:46 crc kubenswrapper[4681]: I1123 08:16:46.621705 4681 scope.go:117] "RemoveContainer" containerID="f2b2c779f6b7f699148af3c9f33ea7e1cef4612d31d5eeda20b0c1654a0aec75" Nov 23 08:16:46 crc kubenswrapper[4681]: E1123 08:16:46.621999 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2b2c779f6b7f699148af3c9f33ea7e1cef4612d31d5eeda20b0c1654a0aec75\": container with ID starting with f2b2c779f6b7f699148af3c9f33ea7e1cef4612d31d5eeda20b0c1654a0aec75 not found: ID does not exist" containerID="f2b2c779f6b7f699148af3c9f33ea7e1cef4612d31d5eeda20b0c1654a0aec75" Nov 23 08:16:46 crc kubenswrapper[4681]: I1123 08:16:46.622030 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2b2c779f6b7f699148af3c9f33ea7e1cef4612d31d5eeda20b0c1654a0aec75"} err="failed to get container status \"f2b2c779f6b7f699148af3c9f33ea7e1cef4612d31d5eeda20b0c1654a0aec75\": rpc error: code = NotFound desc = could not find container \"f2b2c779f6b7f699148af3c9f33ea7e1cef4612d31d5eeda20b0c1654a0aec75\": container with ID starting with f2b2c779f6b7f699148af3c9f33ea7e1cef4612d31d5eeda20b0c1654a0aec75 not found: ID does not exist" Nov 23 08:16:46 crc kubenswrapper[4681]: I1123 08:16:46.622049 4681 scope.go:117] "RemoveContainer" containerID="5be6245cfdcc1a3a1eadd007c775a9d3ba562b18019cea2064e51e3345dce355" Nov 23 08:16:46 crc kubenswrapper[4681]: E1123 08:16:46.622261 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5be6245cfdcc1a3a1eadd007c775a9d3ba562b18019cea2064e51e3345dce355\": container with ID starting with 5be6245cfdcc1a3a1eadd007c775a9d3ba562b18019cea2064e51e3345dce355 not found: ID does not exist" containerID="5be6245cfdcc1a3a1eadd007c775a9d3ba562b18019cea2064e51e3345dce355" Nov 23 08:16:46 crc kubenswrapper[4681]: I1123 08:16:46.622283 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5be6245cfdcc1a3a1eadd007c775a9d3ba562b18019cea2064e51e3345dce355"} err="failed to get container status \"5be6245cfdcc1a3a1eadd007c775a9d3ba562b18019cea2064e51e3345dce355\": rpc error: code = NotFound desc = could not find container \"5be6245cfdcc1a3a1eadd007c775a9d3ba562b18019cea2064e51e3345dce355\": container with ID starting with 5be6245cfdcc1a3a1eadd007c775a9d3ba562b18019cea2064e51e3345dce355 not found: ID does not exist" Nov 23 08:16:46 crc kubenswrapper[4681]: I1123 08:16:46.622295 4681 scope.go:117] "RemoveContainer" containerID="b359a0761aca8c6ee568b4effa983852deb6638856288a8b67038236771a04f0" Nov 23 08:16:46 crc kubenswrapper[4681]: E1123 08:16:46.622474 4681 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"b359a0761aca8c6ee568b4effa983852deb6638856288a8b67038236771a04f0\": container with ID starting with b359a0761aca8c6ee568b4effa983852deb6638856288a8b67038236771a04f0 not found: ID does not exist" containerID="b359a0761aca8c6ee568b4effa983852deb6638856288a8b67038236771a04f0" Nov 23 08:16:46 crc kubenswrapper[4681]: I1123 08:16:46.622494 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b359a0761aca8c6ee568b4effa983852deb6638856288a8b67038236771a04f0"} err="failed to get container status \"b359a0761aca8c6ee568b4effa983852deb6638856288a8b67038236771a04f0\": rpc error: code = NotFound desc = could not find container \"b359a0761aca8c6ee568b4effa983852deb6638856288a8b67038236771a04f0\": container with ID starting with b359a0761aca8c6ee568b4effa983852deb6638856288a8b67038236771a04f0 not found: ID does not exist" Nov 23 08:16:47 crc kubenswrapper[4681]: I1123 08:16:47.260552 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21" path="/var/lib/kubelet/pods/ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21/volumes" Nov 23 08:17:42 crc kubenswrapper[4681]: I1123 08:17:42.295543 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:17:42 crc kubenswrapper[4681]: I1123 08:17:42.296118 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:18:12 crc kubenswrapper[4681]: I1123 08:18:12.295602 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:18:12 crc kubenswrapper[4681]: I1123 08:18:12.296181 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:18:42 crc kubenswrapper[4681]: I1123 08:18:42.295332 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:18:42 crc kubenswrapper[4681]: I1123 08:18:42.295987 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:18:42 crc kubenswrapper[4681]: I1123 08:18:42.296040 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" Nov 23 08:18:42 crc kubenswrapper[4681]: I1123 08:18:42.296944 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"627017b2e50bb6c85944805c7f0eb614f68d81f157510d798194642ebd7c85b5"} pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 08:18:42 crc kubenswrapper[4681]: I1123 08:18:42.296999 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" containerID="cri-o://627017b2e50bb6c85944805c7f0eb614f68d81f157510d798194642ebd7c85b5" gracePeriod=600 Nov 23 08:18:42 crc kubenswrapper[4681]: E1123 08:18:42.431582 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:18:42 crc kubenswrapper[4681]: I1123 08:18:42.539557 4681 generic.go:334] "Generic (PLEG): container finished" podID="539dc58c-e752-43c8-bdef-af87528b76f3" containerID="627017b2e50bb6c85944805c7f0eb614f68d81f157510d798194642ebd7c85b5" exitCode=0 Nov 23 08:18:42 crc kubenswrapper[4681]: I1123 08:18:42.539605 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerDied","Data":"627017b2e50bb6c85944805c7f0eb614f68d81f157510d798194642ebd7c85b5"} Nov 23 08:18:42 crc kubenswrapper[4681]: I1123 08:18:42.539649 4681 scope.go:117] "RemoveContainer" containerID="f3b67049999c07ad50acb700f89dfe77789502e8b62e4fa6dd0204b918283a04" Nov 23 08:18:42 crc kubenswrapper[4681]: I1123 08:18:42.541017 4681 scope.go:117] "RemoveContainer" containerID="627017b2e50bb6c85944805c7f0eb614f68d81f157510d798194642ebd7c85b5" Nov 23 08:18:42 crc kubenswrapper[4681]: E1123 08:18:42.541329 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:18:57 crc kubenswrapper[4681]: I1123 08:18:57.251775 4681 scope.go:117] "RemoveContainer" containerID="627017b2e50bb6c85944805c7f0eb614f68d81f157510d798194642ebd7c85b5" Nov 23 08:18:57 crc kubenswrapper[4681]: E1123 08:18:57.252762 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:19:12 crc 
kubenswrapper[4681]: I1123 08:19:12.252852 4681 scope.go:117] "RemoveContainer" containerID="627017b2e50bb6c85944805c7f0eb614f68d81f157510d798194642ebd7c85b5" Nov 23 08:19:12 crc kubenswrapper[4681]: E1123 08:19:12.254048 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:19:23 crc kubenswrapper[4681]: I1123 08:19:23.279851 4681 scope.go:117] "RemoveContainer" containerID="627017b2e50bb6c85944805c7f0eb614f68d81f157510d798194642ebd7c85b5" Nov 23 08:19:23 crc kubenswrapper[4681]: E1123 08:19:23.284905 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:19:35 crc kubenswrapper[4681]: I1123 08:19:35.252289 4681 scope.go:117] "RemoveContainer" containerID="627017b2e50bb6c85944805c7f0eb614f68d81f157510d798194642ebd7c85b5" Nov 23 08:19:35 crc kubenswrapper[4681]: E1123 08:19:35.252879 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:19:46 crc kubenswrapper[4681]: I1123 08:19:46.252142 4681 scope.go:117] "RemoveContainer" containerID="627017b2e50bb6c85944805c7f0eb614f68d81f157510d798194642ebd7c85b5" Nov 23 08:19:46 crc kubenswrapper[4681]: E1123 08:19:46.252952 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:19:57 crc kubenswrapper[4681]: I1123 08:19:57.251836 4681 scope.go:117] "RemoveContainer" containerID="627017b2e50bb6c85944805c7f0eb614f68d81f157510d798194642ebd7c85b5" Nov 23 08:19:57 crc kubenswrapper[4681]: E1123 08:19:57.253120 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:20:10 crc kubenswrapper[4681]: I1123 08:20:10.251639 4681 scope.go:117] "RemoveContainer" containerID="627017b2e50bb6c85944805c7f0eb614f68d81f157510d798194642ebd7c85b5" Nov 23 08:20:10 crc 
kubenswrapper[4681]: E1123 08:20:10.252252 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:20:22 crc kubenswrapper[4681]: I1123 08:20:22.252245 4681 scope.go:117] "RemoveContainer" containerID="627017b2e50bb6c85944805c7f0eb614f68d81f157510d798194642ebd7c85b5" Nov 23 08:20:22 crc kubenswrapper[4681]: E1123 08:20:22.253012 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:20:34 crc kubenswrapper[4681]: I1123 08:20:34.251864 4681 scope.go:117] "RemoveContainer" containerID="627017b2e50bb6c85944805c7f0eb614f68d81f157510d798194642ebd7c85b5" Nov 23 08:20:34 crc kubenswrapper[4681]: E1123 08:20:34.252485 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:20:45 crc kubenswrapper[4681]: I1123 08:20:45.253006 4681 scope.go:117] "RemoveContainer" containerID="627017b2e50bb6c85944805c7f0eb614f68d81f157510d798194642ebd7c85b5" Nov 23 08:20:45 crc kubenswrapper[4681]: E1123 08:20:45.254162 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:20:55 crc kubenswrapper[4681]: I1123 08:20:55.275217 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l7czm"] Nov 23 08:20:55 crc kubenswrapper[4681]: E1123 08:20:55.276332 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72793cc8-15ca-4981-9577-442d58d15f0f" containerName="extract-content" Nov 23 08:20:55 crc kubenswrapper[4681]: I1123 08:20:55.276345 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="72793cc8-15ca-4981-9577-442d58d15f0f" containerName="extract-content" Nov 23 08:20:55 crc kubenswrapper[4681]: E1123 08:20:55.276364 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72793cc8-15ca-4981-9577-442d58d15f0f" containerName="registry-server" Nov 23 08:20:55 crc kubenswrapper[4681]: I1123 08:20:55.276370 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="72793cc8-15ca-4981-9577-442d58d15f0f" containerName="registry-server" Nov 23 08:20:55 crc kubenswrapper[4681]: E1123 08:20:55.276381 4681 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21" containerName="registry-server" Nov 23 08:20:55 crc kubenswrapper[4681]: I1123 08:20:55.276387 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21" containerName="registry-server" Nov 23 08:20:55 crc kubenswrapper[4681]: E1123 08:20:55.276401 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21" containerName="extract-utilities" Nov 23 08:20:55 crc kubenswrapper[4681]: I1123 08:20:55.276407 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21" containerName="extract-utilities" Nov 23 08:20:55 crc kubenswrapper[4681]: E1123 08:20:55.276447 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21" containerName="extract-content" Nov 23 08:20:55 crc kubenswrapper[4681]: I1123 08:20:55.276453 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21" containerName="extract-content" Nov 23 08:20:55 crc kubenswrapper[4681]: E1123 08:20:55.276484 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72793cc8-15ca-4981-9577-442d58d15f0f" containerName="extract-utilities" Nov 23 08:20:55 crc kubenswrapper[4681]: I1123 08:20:55.276491 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="72793cc8-15ca-4981-9577-442d58d15f0f" containerName="extract-utilities" Nov 23 08:20:55 crc kubenswrapper[4681]: I1123 08:20:55.276738 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed3b1e41-d0b2-42bf-87dc-5d6a96b78d21" containerName="registry-server" Nov 23 08:20:55 crc kubenswrapper[4681]: I1123 08:20:55.276752 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="72793cc8-15ca-4981-9577-442d58d15f0f" containerName="registry-server" Nov 23 08:20:55 crc kubenswrapper[4681]: I1123 08:20:55.278442 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l7czm" Nov 23 08:20:55 crc kubenswrapper[4681]: I1123 08:20:55.280992 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l7czm"] Nov 23 08:20:55 crc kubenswrapper[4681]: I1123 08:20:55.400719 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67e4e26a-f252-4f35-a778-070f73261522-catalog-content\") pod \"community-operators-l7czm\" (UID: \"67e4e26a-f252-4f35-a778-070f73261522\") " pod="openshift-marketplace/community-operators-l7czm" Nov 23 08:20:55 crc kubenswrapper[4681]: I1123 08:20:55.401037 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c794k\" (UniqueName: \"kubernetes.io/projected/67e4e26a-f252-4f35-a778-070f73261522-kube-api-access-c794k\") pod \"community-operators-l7czm\" (UID: \"67e4e26a-f252-4f35-a778-070f73261522\") " pod="openshift-marketplace/community-operators-l7czm" Nov 23 08:20:55 crc kubenswrapper[4681]: I1123 08:20:55.401094 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67e4e26a-f252-4f35-a778-070f73261522-utilities\") pod \"community-operators-l7czm\" (UID: \"67e4e26a-f252-4f35-a778-070f73261522\") " pod="openshift-marketplace/community-operators-l7czm" Nov 23 08:20:55 crc kubenswrapper[4681]: I1123 08:20:55.504164 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67e4e26a-f252-4f35-a778-070f73261522-catalog-content\") pod \"community-operators-l7czm\" (UID: \"67e4e26a-f252-4f35-a778-070f73261522\") " pod="openshift-marketplace/community-operators-l7czm" Nov 23 08:20:55 crc kubenswrapper[4681]: I1123 08:20:55.504223 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c794k\" (UniqueName: \"kubernetes.io/projected/67e4e26a-f252-4f35-a778-070f73261522-kube-api-access-c794k\") pod \"community-operators-l7czm\" (UID: \"67e4e26a-f252-4f35-a778-070f73261522\") " pod="openshift-marketplace/community-operators-l7czm" Nov 23 08:20:55 crc kubenswrapper[4681]: I1123 08:20:55.504267 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67e4e26a-f252-4f35-a778-070f73261522-utilities\") pod \"community-operators-l7czm\" (UID: \"67e4e26a-f252-4f35-a778-070f73261522\") " pod="openshift-marketplace/community-operators-l7czm" Nov 23 08:20:55 crc kubenswrapper[4681]: I1123 08:20:55.504783 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67e4e26a-f252-4f35-a778-070f73261522-catalog-content\") pod \"community-operators-l7czm\" (UID: \"67e4e26a-f252-4f35-a778-070f73261522\") " pod="openshift-marketplace/community-operators-l7czm" Nov 23 08:20:55 crc kubenswrapper[4681]: I1123 08:20:55.504796 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67e4e26a-f252-4f35-a778-070f73261522-utilities\") pod \"community-operators-l7czm\" (UID: \"67e4e26a-f252-4f35-a778-070f73261522\") " pod="openshift-marketplace/community-operators-l7czm" Nov 23 08:20:55 crc kubenswrapper[4681]: I1123 08:20:55.523863 4681 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-c794k\" (UniqueName: \"kubernetes.io/projected/67e4e26a-f252-4f35-a778-070f73261522-kube-api-access-c794k\") pod \"community-operators-l7czm\" (UID: \"67e4e26a-f252-4f35-a778-070f73261522\") " pod="openshift-marketplace/community-operators-l7czm" Nov 23 08:20:55 crc kubenswrapper[4681]: I1123 08:20:55.617779 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l7czm" Nov 23 08:20:56 crc kubenswrapper[4681]: I1123 08:20:56.133625 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l7czm"] Nov 23 08:20:56 crc kubenswrapper[4681]: I1123 08:20:56.251921 4681 scope.go:117] "RemoveContainer" containerID="627017b2e50bb6c85944805c7f0eb614f68d81f157510d798194642ebd7c85b5" Nov 23 08:20:56 crc kubenswrapper[4681]: E1123 08:20:56.252346 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:20:56 crc kubenswrapper[4681]: I1123 08:20:56.606147 4681 generic.go:334] "Generic (PLEG): container finished" podID="67e4e26a-f252-4f35-a778-070f73261522" containerID="3f4f3c4818ab7901353ed7146bd24274c6ae997fa2254e2d5de224ecde92142c" exitCode=0 Nov 23 08:20:56 crc kubenswrapper[4681]: I1123 08:20:56.606199 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l7czm" event={"ID":"67e4e26a-f252-4f35-a778-070f73261522","Type":"ContainerDied","Data":"3f4f3c4818ab7901353ed7146bd24274c6ae997fa2254e2d5de224ecde92142c"} Nov 23 08:20:56 crc kubenswrapper[4681]: I1123 08:20:56.606263 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l7czm" event={"ID":"67e4e26a-f252-4f35-a778-070f73261522","Type":"ContainerStarted","Data":"98af294f4f533929fd38f223e12441c89be6eeed851625f6026d58fd615eea51"} Nov 23 08:20:56 crc kubenswrapper[4681]: I1123 08:20:56.608212 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 08:20:57 crc kubenswrapper[4681]: I1123 08:20:57.615893 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l7czm" event={"ID":"67e4e26a-f252-4f35-a778-070f73261522","Type":"ContainerStarted","Data":"3e4787dc2371c339c16090f02175ce128d9b5f3ea0ba9f86104895a774fa69f4"} Nov 23 08:20:58 crc kubenswrapper[4681]: I1123 08:20:58.625698 4681 generic.go:334] "Generic (PLEG): container finished" podID="67e4e26a-f252-4f35-a778-070f73261522" containerID="3e4787dc2371c339c16090f02175ce128d9b5f3ea0ba9f86104895a774fa69f4" exitCode=0 Nov 23 08:20:58 crc kubenswrapper[4681]: I1123 08:20:58.625812 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l7czm" event={"ID":"67e4e26a-f252-4f35-a778-070f73261522","Type":"ContainerDied","Data":"3e4787dc2371c339c16090f02175ce128d9b5f3ea0ba9f86104895a774fa69f4"} Nov 23 08:20:59 crc kubenswrapper[4681]: I1123 08:20:59.633868 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l7czm" 
event={"ID":"67e4e26a-f252-4f35-a778-070f73261522","Type":"ContainerStarted","Data":"4b4141a4d19e6d48bdaf06d4fddc7a40fcfd26aded6ecd0358daca37707f4157"} Nov 23 08:20:59 crc kubenswrapper[4681]: I1123 08:20:59.655842 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l7czm" podStartSLOduration=2.183541936 podStartE2EDuration="4.65582489s" podCreationTimestamp="2025-11-23 08:20:55 +0000 UTC" firstStartedPulling="2025-11-23 08:20:56.60789022 +0000 UTC m=+5793.677399457" lastFinishedPulling="2025-11-23 08:20:59.080173174 +0000 UTC m=+5796.149682411" observedRunningTime="2025-11-23 08:20:59.648851056 +0000 UTC m=+5796.718360293" watchObservedRunningTime="2025-11-23 08:20:59.65582489 +0000 UTC m=+5796.725334127" Nov 23 08:21:05 crc kubenswrapper[4681]: I1123 08:21:05.618636 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l7czm" Nov 23 08:21:05 crc kubenswrapper[4681]: I1123 08:21:05.619050 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l7czm" Nov 23 08:21:05 crc kubenswrapper[4681]: I1123 08:21:05.653161 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l7czm" Nov 23 08:21:05 crc kubenswrapper[4681]: I1123 08:21:05.707611 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l7czm" Nov 23 08:21:06 crc kubenswrapper[4681]: I1123 08:21:06.051929 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l7czm"] Nov 23 08:21:07 crc kubenswrapper[4681]: I1123 08:21:07.688783 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l7czm" podUID="67e4e26a-f252-4f35-a778-070f73261522" containerName="registry-server" containerID="cri-o://4b4141a4d19e6d48bdaf06d4fddc7a40fcfd26aded6ecd0358daca37707f4157" gracePeriod=2 Nov 23 08:21:08 crc kubenswrapper[4681]: I1123 08:21:08.066434 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l7czm" Nov 23 08:21:08 crc kubenswrapper[4681]: I1123 08:21:08.232946 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67e4e26a-f252-4f35-a778-070f73261522-utilities\") pod \"67e4e26a-f252-4f35-a778-070f73261522\" (UID: \"67e4e26a-f252-4f35-a778-070f73261522\") " Nov 23 08:21:08 crc kubenswrapper[4681]: I1123 08:21:08.233094 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c794k\" (UniqueName: \"kubernetes.io/projected/67e4e26a-f252-4f35-a778-070f73261522-kube-api-access-c794k\") pod \"67e4e26a-f252-4f35-a778-070f73261522\" (UID: \"67e4e26a-f252-4f35-a778-070f73261522\") " Nov 23 08:21:08 crc kubenswrapper[4681]: I1123 08:21:08.233126 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67e4e26a-f252-4f35-a778-070f73261522-catalog-content\") pod \"67e4e26a-f252-4f35-a778-070f73261522\" (UID: \"67e4e26a-f252-4f35-a778-070f73261522\") " Nov 23 08:21:08 crc kubenswrapper[4681]: I1123 08:21:08.233607 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67e4e26a-f252-4f35-a778-070f73261522-utilities" (OuterVolumeSpecName: "utilities") pod "67e4e26a-f252-4f35-a778-070f73261522" (UID: "67e4e26a-f252-4f35-a778-070f73261522"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:21:08 crc kubenswrapper[4681]: I1123 08:21:08.233894 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67e4e26a-f252-4f35-a778-070f73261522-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:21:08 crc kubenswrapper[4681]: I1123 08:21:08.237479 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67e4e26a-f252-4f35-a778-070f73261522-kube-api-access-c794k" (OuterVolumeSpecName: "kube-api-access-c794k") pod "67e4e26a-f252-4f35-a778-070f73261522" (UID: "67e4e26a-f252-4f35-a778-070f73261522"). InnerVolumeSpecName "kube-api-access-c794k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:21:08 crc kubenswrapper[4681]: I1123 08:21:08.269847 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67e4e26a-f252-4f35-a778-070f73261522-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67e4e26a-f252-4f35-a778-070f73261522" (UID: "67e4e26a-f252-4f35-a778-070f73261522"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:21:08 crc kubenswrapper[4681]: I1123 08:21:08.336587 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c794k\" (UniqueName: \"kubernetes.io/projected/67e4e26a-f252-4f35-a778-070f73261522-kube-api-access-c794k\") on node \"crc\" DevicePath \"\"" Nov 23 08:21:08 crc kubenswrapper[4681]: I1123 08:21:08.336639 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67e4e26a-f252-4f35-a778-070f73261522-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:21:08 crc kubenswrapper[4681]: I1123 08:21:08.697902 4681 generic.go:334] "Generic (PLEG): container finished" podID="67e4e26a-f252-4f35-a778-070f73261522" containerID="4b4141a4d19e6d48bdaf06d4fddc7a40fcfd26aded6ecd0358daca37707f4157" exitCode=0 Nov 23 08:21:08 crc kubenswrapper[4681]: I1123 08:21:08.697950 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l7czm" event={"ID":"67e4e26a-f252-4f35-a778-070f73261522","Type":"ContainerDied","Data":"4b4141a4d19e6d48bdaf06d4fddc7a40fcfd26aded6ecd0358daca37707f4157"} Nov 23 08:21:08 crc kubenswrapper[4681]: I1123 08:21:08.697975 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l7czm" Nov 23 08:21:08 crc kubenswrapper[4681]: I1123 08:21:08.697994 4681 scope.go:117] "RemoveContainer" containerID="4b4141a4d19e6d48bdaf06d4fddc7a40fcfd26aded6ecd0358daca37707f4157" Nov 23 08:21:08 crc kubenswrapper[4681]: I1123 08:21:08.697981 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l7czm" event={"ID":"67e4e26a-f252-4f35-a778-070f73261522","Type":"ContainerDied","Data":"98af294f4f533929fd38f223e12441c89be6eeed851625f6026d58fd615eea51"} Nov 23 08:21:08 crc kubenswrapper[4681]: I1123 08:21:08.715409 4681 scope.go:117] "RemoveContainer" containerID="3e4787dc2371c339c16090f02175ce128d9b5f3ea0ba9f86104895a774fa69f4" Nov 23 08:21:08 crc kubenswrapper[4681]: I1123 08:21:08.729560 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l7czm"] Nov 23 08:21:08 crc kubenswrapper[4681]: I1123 08:21:08.736842 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l7czm"] Nov 23 08:21:08 crc kubenswrapper[4681]: I1123 08:21:08.749130 4681 scope.go:117] "RemoveContainer" containerID="3f4f3c4818ab7901353ed7146bd24274c6ae997fa2254e2d5de224ecde92142c" Nov 23 08:21:08 crc kubenswrapper[4681]: I1123 08:21:08.771914 4681 scope.go:117] "RemoveContainer" containerID="4b4141a4d19e6d48bdaf06d4fddc7a40fcfd26aded6ecd0358daca37707f4157" Nov 23 08:21:08 crc kubenswrapper[4681]: E1123 08:21:08.772416 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b4141a4d19e6d48bdaf06d4fddc7a40fcfd26aded6ecd0358daca37707f4157\": container with ID starting with 4b4141a4d19e6d48bdaf06d4fddc7a40fcfd26aded6ecd0358daca37707f4157 not found: ID does not exist" containerID="4b4141a4d19e6d48bdaf06d4fddc7a40fcfd26aded6ecd0358daca37707f4157" Nov 23 08:21:08 crc kubenswrapper[4681]: I1123 08:21:08.772473 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b4141a4d19e6d48bdaf06d4fddc7a40fcfd26aded6ecd0358daca37707f4157"} err="failed to get container status 
\"4b4141a4d19e6d48bdaf06d4fddc7a40fcfd26aded6ecd0358daca37707f4157\": rpc error: code = NotFound desc = could not find container \"4b4141a4d19e6d48bdaf06d4fddc7a40fcfd26aded6ecd0358daca37707f4157\": container with ID starting with 4b4141a4d19e6d48bdaf06d4fddc7a40fcfd26aded6ecd0358daca37707f4157 not found: ID does not exist" Nov 23 08:21:08 crc kubenswrapper[4681]: I1123 08:21:08.772499 4681 scope.go:117] "RemoveContainer" containerID="3e4787dc2371c339c16090f02175ce128d9b5f3ea0ba9f86104895a774fa69f4" Nov 23 08:21:08 crc kubenswrapper[4681]: E1123 08:21:08.773005 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e4787dc2371c339c16090f02175ce128d9b5f3ea0ba9f86104895a774fa69f4\": container with ID starting with 3e4787dc2371c339c16090f02175ce128d9b5f3ea0ba9f86104895a774fa69f4 not found: ID does not exist" containerID="3e4787dc2371c339c16090f02175ce128d9b5f3ea0ba9f86104895a774fa69f4" Nov 23 08:21:08 crc kubenswrapper[4681]: I1123 08:21:08.773052 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e4787dc2371c339c16090f02175ce128d9b5f3ea0ba9f86104895a774fa69f4"} err="failed to get container status \"3e4787dc2371c339c16090f02175ce128d9b5f3ea0ba9f86104895a774fa69f4\": rpc error: code = NotFound desc = could not find container \"3e4787dc2371c339c16090f02175ce128d9b5f3ea0ba9f86104895a774fa69f4\": container with ID starting with 3e4787dc2371c339c16090f02175ce128d9b5f3ea0ba9f86104895a774fa69f4 not found: ID does not exist" Nov 23 08:21:08 crc kubenswrapper[4681]: I1123 08:21:08.773067 4681 scope.go:117] "RemoveContainer" containerID="3f4f3c4818ab7901353ed7146bd24274c6ae997fa2254e2d5de224ecde92142c" Nov 23 08:21:08 crc kubenswrapper[4681]: E1123 08:21:08.773376 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f4f3c4818ab7901353ed7146bd24274c6ae997fa2254e2d5de224ecde92142c\": container with ID starting with 3f4f3c4818ab7901353ed7146bd24274c6ae997fa2254e2d5de224ecde92142c not found: ID does not exist" containerID="3f4f3c4818ab7901353ed7146bd24274c6ae997fa2254e2d5de224ecde92142c" Nov 23 08:21:08 crc kubenswrapper[4681]: I1123 08:21:08.773398 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f4f3c4818ab7901353ed7146bd24274c6ae997fa2254e2d5de224ecde92142c"} err="failed to get container status \"3f4f3c4818ab7901353ed7146bd24274c6ae997fa2254e2d5de224ecde92142c\": rpc error: code = NotFound desc = could not find container \"3f4f3c4818ab7901353ed7146bd24274c6ae997fa2254e2d5de224ecde92142c\": container with ID starting with 3f4f3c4818ab7901353ed7146bd24274c6ae997fa2254e2d5de224ecde92142c not found: ID does not exist" Nov 23 08:21:09 crc kubenswrapper[4681]: I1123 08:21:09.252746 4681 scope.go:117] "RemoveContainer" containerID="627017b2e50bb6c85944805c7f0eb614f68d81f157510d798194642ebd7c85b5" Nov 23 08:21:09 crc kubenswrapper[4681]: E1123 08:21:09.252960 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:21:09 crc kubenswrapper[4681]: I1123 08:21:09.261839 4681 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67e4e26a-f252-4f35-a778-070f73261522" path="/var/lib/kubelet/pods/67e4e26a-f252-4f35-a778-070f73261522/volumes" Nov 23 08:21:21 crc kubenswrapper[4681]: I1123 08:21:21.253368 4681 scope.go:117] "RemoveContainer" containerID="627017b2e50bb6c85944805c7f0eb614f68d81f157510d798194642ebd7c85b5" Nov 23 08:21:21 crc kubenswrapper[4681]: E1123 08:21:21.255250 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:21:33 crc kubenswrapper[4681]: I1123 08:21:33.258101 4681 scope.go:117] "RemoveContainer" containerID="627017b2e50bb6c85944805c7f0eb614f68d81f157510d798194642ebd7c85b5" Nov 23 08:21:33 crc kubenswrapper[4681]: E1123 08:21:33.259041 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:21:47 crc kubenswrapper[4681]: I1123 08:21:47.252160 4681 scope.go:117] "RemoveContainer" containerID="627017b2e50bb6c85944805c7f0eb614f68d81f157510d798194642ebd7c85b5" Nov 23 08:21:47 crc kubenswrapper[4681]: E1123 08:21:47.253037 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:22:00 crc kubenswrapper[4681]: I1123 08:22:00.251992 4681 scope.go:117] "RemoveContainer" containerID="627017b2e50bb6c85944805c7f0eb614f68d81f157510d798194642ebd7c85b5" Nov 23 08:22:00 crc kubenswrapper[4681]: E1123 08:22:00.252877 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:22:15 crc kubenswrapper[4681]: I1123 08:22:15.254162 4681 scope.go:117] "RemoveContainer" containerID="627017b2e50bb6c85944805c7f0eb614f68d81f157510d798194642ebd7c85b5" Nov 23 08:22:15 crc kubenswrapper[4681]: E1123 08:22:15.255079 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 
08:22:27 crc kubenswrapper[4681]: I1123 08:22:27.251839 4681 scope.go:117] "RemoveContainer" containerID="627017b2e50bb6c85944805c7f0eb614f68d81f157510d798194642ebd7c85b5" Nov 23 08:22:27 crc kubenswrapper[4681]: E1123 08:22:27.252787 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:22:40 crc kubenswrapper[4681]: I1123 08:22:40.251529 4681 scope.go:117] "RemoveContainer" containerID="627017b2e50bb6c85944805c7f0eb614f68d81f157510d798194642ebd7c85b5" Nov 23 08:22:40 crc kubenswrapper[4681]: E1123 08:22:40.252506 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:22:54 crc kubenswrapper[4681]: I1123 08:22:54.251778 4681 scope.go:117] "RemoveContainer" containerID="627017b2e50bb6c85944805c7f0eb614f68d81f157510d798194642ebd7c85b5" Nov 23 08:22:54 crc kubenswrapper[4681]: E1123 08:22:54.252608 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:23:08 crc kubenswrapper[4681]: I1123 08:23:08.251892 4681 scope.go:117] "RemoveContainer" containerID="627017b2e50bb6c85944805c7f0eb614f68d81f157510d798194642ebd7c85b5" Nov 23 08:23:08 crc kubenswrapper[4681]: E1123 08:23:08.252649 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:23:22 crc kubenswrapper[4681]: I1123 08:23:22.251996 4681 scope.go:117] "RemoveContainer" containerID="627017b2e50bb6c85944805c7f0eb614f68d81f157510d798194642ebd7c85b5" Nov 23 08:23:22 crc kubenswrapper[4681]: E1123 08:23:22.252601 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:23:36 crc kubenswrapper[4681]: I1123 08:23:36.252619 4681 scope.go:117] "RemoveContainer" containerID="627017b2e50bb6c85944805c7f0eb614f68d81f157510d798194642ebd7c85b5" Nov 23 08:23:36 crc 
kubenswrapper[4681]: E1123 08:23:36.254967 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:23:51 crc kubenswrapper[4681]: I1123 08:23:51.252331 4681 scope.go:117] "RemoveContainer" containerID="627017b2e50bb6c85944805c7f0eb614f68d81f157510d798194642ebd7c85b5" Nov 23 08:23:52 crc kubenswrapper[4681]: I1123 08:23:52.028420 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerStarted","Data":"06f801f5c38a38a16a89b057559b054ed85c5e9ba9b81b998a31f582df7f4bda"} Nov 23 08:24:27 crc kubenswrapper[4681]: I1123 08:24:27.663609 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-87fbv"] Nov 23 08:24:27 crc kubenswrapper[4681]: E1123 08:24:27.664402 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e4e26a-f252-4f35-a778-070f73261522" containerName="extract-content" Nov 23 08:24:27 crc kubenswrapper[4681]: I1123 08:24:27.664415 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e4e26a-f252-4f35-a778-070f73261522" containerName="extract-content" Nov 23 08:24:27 crc kubenswrapper[4681]: E1123 08:24:27.664445 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e4e26a-f252-4f35-a778-070f73261522" containerName="registry-server" Nov 23 08:24:27 crc kubenswrapper[4681]: I1123 08:24:27.664450 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e4e26a-f252-4f35-a778-070f73261522" containerName="registry-server" Nov 23 08:24:27 crc kubenswrapper[4681]: E1123 08:24:27.664473 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e4e26a-f252-4f35-a778-070f73261522" containerName="extract-utilities" Nov 23 08:24:27 crc kubenswrapper[4681]: I1123 08:24:27.664479 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e4e26a-f252-4f35-a778-070f73261522" containerName="extract-utilities" Nov 23 08:24:27 crc kubenswrapper[4681]: I1123 08:24:27.664687 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="67e4e26a-f252-4f35-a778-070f73261522" containerName="registry-server" Nov 23 08:24:27 crc kubenswrapper[4681]: I1123 08:24:27.665908 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-87fbv" Nov 23 08:24:27 crc kubenswrapper[4681]: I1123 08:24:27.676425 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-87fbv"] Nov 23 08:24:27 crc kubenswrapper[4681]: I1123 08:24:27.737635 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6d34452-1579-447e-9e1d-1c1c2b5d1e58-catalog-content\") pod \"redhat-marketplace-87fbv\" (UID: \"f6d34452-1579-447e-9e1d-1c1c2b5d1e58\") " pod="openshift-marketplace/redhat-marketplace-87fbv" Nov 23 08:24:27 crc kubenswrapper[4681]: I1123 08:24:27.737720 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgrpg\" (UniqueName: \"kubernetes.io/projected/f6d34452-1579-447e-9e1d-1c1c2b5d1e58-kube-api-access-hgrpg\") pod \"redhat-marketplace-87fbv\" (UID: \"f6d34452-1579-447e-9e1d-1c1c2b5d1e58\") " pod="openshift-marketplace/redhat-marketplace-87fbv" Nov 23 08:24:27 crc kubenswrapper[4681]: I1123 08:24:27.737791 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6d34452-1579-447e-9e1d-1c1c2b5d1e58-utilities\") pod \"redhat-marketplace-87fbv\" (UID: \"f6d34452-1579-447e-9e1d-1c1c2b5d1e58\") " pod="openshift-marketplace/redhat-marketplace-87fbv" Nov 23 08:24:27 crc kubenswrapper[4681]: I1123 08:24:27.838632 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6d34452-1579-447e-9e1d-1c1c2b5d1e58-catalog-content\") pod \"redhat-marketplace-87fbv\" (UID: \"f6d34452-1579-447e-9e1d-1c1c2b5d1e58\") " pod="openshift-marketplace/redhat-marketplace-87fbv" Nov 23 08:24:27 crc kubenswrapper[4681]: I1123 08:24:27.838724 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgrpg\" (UniqueName: \"kubernetes.io/projected/f6d34452-1579-447e-9e1d-1c1c2b5d1e58-kube-api-access-hgrpg\") pod \"redhat-marketplace-87fbv\" (UID: \"f6d34452-1579-447e-9e1d-1c1c2b5d1e58\") " pod="openshift-marketplace/redhat-marketplace-87fbv" Nov 23 08:24:27 crc kubenswrapper[4681]: I1123 08:24:27.838786 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6d34452-1579-447e-9e1d-1c1c2b5d1e58-utilities\") pod \"redhat-marketplace-87fbv\" (UID: \"f6d34452-1579-447e-9e1d-1c1c2b5d1e58\") " pod="openshift-marketplace/redhat-marketplace-87fbv" Nov 23 08:24:27 crc kubenswrapper[4681]: I1123 08:24:27.839088 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6d34452-1579-447e-9e1d-1c1c2b5d1e58-catalog-content\") pod \"redhat-marketplace-87fbv\" (UID: \"f6d34452-1579-447e-9e1d-1c1c2b5d1e58\") " pod="openshift-marketplace/redhat-marketplace-87fbv" Nov 23 08:24:27 crc kubenswrapper[4681]: I1123 08:24:27.839149 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6d34452-1579-447e-9e1d-1c1c2b5d1e58-utilities\") pod \"redhat-marketplace-87fbv\" (UID: \"f6d34452-1579-447e-9e1d-1c1c2b5d1e58\") " pod="openshift-marketplace/redhat-marketplace-87fbv" Nov 23 08:24:27 crc kubenswrapper[4681]: I1123 08:24:27.857112 4681 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-hgrpg\" (UniqueName: \"kubernetes.io/projected/f6d34452-1579-447e-9e1d-1c1c2b5d1e58-kube-api-access-hgrpg\") pod \"redhat-marketplace-87fbv\" (UID: \"f6d34452-1579-447e-9e1d-1c1c2b5d1e58\") " pod="openshift-marketplace/redhat-marketplace-87fbv" Nov 23 08:24:27 crc kubenswrapper[4681]: I1123 08:24:27.982525 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-87fbv" Nov 23 08:24:28 crc kubenswrapper[4681]: I1123 08:24:28.425177 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-87fbv"] Nov 23 08:24:29 crc kubenswrapper[4681]: I1123 08:24:29.299083 4681 generic.go:334] "Generic (PLEG): container finished" podID="f6d34452-1579-447e-9e1d-1c1c2b5d1e58" containerID="7b67dae1a0f9fc22d1c86259238650f7a70efe9812cf0c01395575a7ea3b3504" exitCode=0 Nov 23 08:24:29 crc kubenswrapper[4681]: I1123 08:24:29.299144 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-87fbv" event={"ID":"f6d34452-1579-447e-9e1d-1c1c2b5d1e58","Type":"ContainerDied","Data":"7b67dae1a0f9fc22d1c86259238650f7a70efe9812cf0c01395575a7ea3b3504"} Nov 23 08:24:29 crc kubenswrapper[4681]: I1123 08:24:29.299220 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-87fbv" event={"ID":"f6d34452-1579-447e-9e1d-1c1c2b5d1e58","Type":"ContainerStarted","Data":"1bd2e637045ac32d1fbaac7d2784b311fdbe772b86403bb180304cb357943b25"} Nov 23 08:24:30 crc kubenswrapper[4681]: I1123 08:24:30.310297 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-87fbv" event={"ID":"f6d34452-1579-447e-9e1d-1c1c2b5d1e58","Type":"ContainerStarted","Data":"8ca1a4c5aa68a0df0d2fcbbdbdb6bc103759ff13b192b16d49a821790c35a13a"} Nov 23 08:24:31 crc kubenswrapper[4681]: I1123 08:24:31.329831 4681 generic.go:334] "Generic (PLEG): container finished" podID="f6d34452-1579-447e-9e1d-1c1c2b5d1e58" containerID="8ca1a4c5aa68a0df0d2fcbbdbdb6bc103759ff13b192b16d49a821790c35a13a" exitCode=0 Nov 23 08:24:31 crc kubenswrapper[4681]: I1123 08:24:31.330836 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-87fbv" event={"ID":"f6d34452-1579-447e-9e1d-1c1c2b5d1e58","Type":"ContainerDied","Data":"8ca1a4c5aa68a0df0d2fcbbdbdb6bc103759ff13b192b16d49a821790c35a13a"} Nov 23 08:24:32 crc kubenswrapper[4681]: I1123 08:24:32.341557 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-87fbv" event={"ID":"f6d34452-1579-447e-9e1d-1c1c2b5d1e58","Type":"ContainerStarted","Data":"c7a9d31309654c089bd0c799fd683b44a5acfc2446ad41b2708bcc113a1fc401"} Nov 23 08:24:32 crc kubenswrapper[4681]: I1123 08:24:32.364948 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-87fbv" podStartSLOduration=2.7898998280000002 podStartE2EDuration="5.364919028s" podCreationTimestamp="2025-11-23 08:24:27 +0000 UTC" firstStartedPulling="2025-11-23 08:24:29.30229213 +0000 UTC m=+6006.371801367" lastFinishedPulling="2025-11-23 08:24:31.877311331 +0000 UTC m=+6008.946820567" observedRunningTime="2025-11-23 08:24:32.357872464 +0000 UTC m=+6009.427381701" watchObservedRunningTime="2025-11-23 08:24:32.364919028 +0000 UTC m=+6009.434428265" Nov 23 08:24:37 crc kubenswrapper[4681]: I1123 08:24:37.983369 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-87fbv" Nov 23 08:24:37 crc kubenswrapper[4681]: I1123 08:24:37.983879 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-87fbv" Nov 23 08:24:38 crc kubenswrapper[4681]: I1123 08:24:38.021661 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-87fbv" Nov 23 08:24:38 crc kubenswrapper[4681]: I1123 08:24:38.419598 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-87fbv" Nov 23 08:24:38 crc kubenswrapper[4681]: I1123 08:24:38.457848 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-87fbv"] Nov 23 08:24:40 crc kubenswrapper[4681]: I1123 08:24:40.401035 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-87fbv" podUID="f6d34452-1579-447e-9e1d-1c1c2b5d1e58" containerName="registry-server" containerID="cri-o://c7a9d31309654c089bd0c799fd683b44a5acfc2446ad41b2708bcc113a1fc401" gracePeriod=2 Nov 23 08:24:40 crc kubenswrapper[4681]: I1123 08:24:40.798890 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-87fbv" Nov 23 08:24:40 crc kubenswrapper[4681]: I1123 08:24:40.980020 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgrpg\" (UniqueName: \"kubernetes.io/projected/f6d34452-1579-447e-9e1d-1c1c2b5d1e58-kube-api-access-hgrpg\") pod \"f6d34452-1579-447e-9e1d-1c1c2b5d1e58\" (UID: \"f6d34452-1579-447e-9e1d-1c1c2b5d1e58\") " Nov 23 08:24:40 crc kubenswrapper[4681]: I1123 08:24:40.980255 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6d34452-1579-447e-9e1d-1c1c2b5d1e58-utilities\") pod \"f6d34452-1579-447e-9e1d-1c1c2b5d1e58\" (UID: \"f6d34452-1579-447e-9e1d-1c1c2b5d1e58\") " Nov 23 08:24:40 crc kubenswrapper[4681]: I1123 08:24:40.980291 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6d34452-1579-447e-9e1d-1c1c2b5d1e58-catalog-content\") pod \"f6d34452-1579-447e-9e1d-1c1c2b5d1e58\" (UID: \"f6d34452-1579-447e-9e1d-1c1c2b5d1e58\") " Nov 23 08:24:40 crc kubenswrapper[4681]: I1123 08:24:40.981327 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6d34452-1579-447e-9e1d-1c1c2b5d1e58-utilities" (OuterVolumeSpecName: "utilities") pod "f6d34452-1579-447e-9e1d-1c1c2b5d1e58" (UID: "f6d34452-1579-447e-9e1d-1c1c2b5d1e58"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:24:40 crc kubenswrapper[4681]: I1123 08:24:40.985511 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6d34452-1579-447e-9e1d-1c1c2b5d1e58-kube-api-access-hgrpg" (OuterVolumeSpecName: "kube-api-access-hgrpg") pod "f6d34452-1579-447e-9e1d-1c1c2b5d1e58" (UID: "f6d34452-1579-447e-9e1d-1c1c2b5d1e58"). InnerVolumeSpecName "kube-api-access-hgrpg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:24:40 crc kubenswrapper[4681]: I1123 08:24:40.995108 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6d34452-1579-447e-9e1d-1c1c2b5d1e58-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6d34452-1579-447e-9e1d-1c1c2b5d1e58" (UID: "f6d34452-1579-447e-9e1d-1c1c2b5d1e58"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:24:41 crc kubenswrapper[4681]: I1123 08:24:41.081693 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6d34452-1579-447e-9e1d-1c1c2b5d1e58-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:24:41 crc kubenswrapper[4681]: I1123 08:24:41.081725 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6d34452-1579-447e-9e1d-1c1c2b5d1e58-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:24:41 crc kubenswrapper[4681]: I1123 08:24:41.081737 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgrpg\" (UniqueName: \"kubernetes.io/projected/f6d34452-1579-447e-9e1d-1c1c2b5d1e58-kube-api-access-hgrpg\") on node \"crc\" DevicePath \"\"" Nov 23 08:24:41 crc kubenswrapper[4681]: I1123 08:24:41.410196 4681 generic.go:334] "Generic (PLEG): container finished" podID="f6d34452-1579-447e-9e1d-1c1c2b5d1e58" containerID="c7a9d31309654c089bd0c799fd683b44a5acfc2446ad41b2708bcc113a1fc401" exitCode=0 Nov 23 08:24:41 crc kubenswrapper[4681]: I1123 08:24:41.410239 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-87fbv" event={"ID":"f6d34452-1579-447e-9e1d-1c1c2b5d1e58","Type":"ContainerDied","Data":"c7a9d31309654c089bd0c799fd683b44a5acfc2446ad41b2708bcc113a1fc401"} Nov 23 08:24:41 crc kubenswrapper[4681]: I1123 08:24:41.410275 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-87fbv" event={"ID":"f6d34452-1579-447e-9e1d-1c1c2b5d1e58","Type":"ContainerDied","Data":"1bd2e637045ac32d1fbaac7d2784b311fdbe772b86403bb180304cb357943b25"} Nov 23 08:24:41 crc kubenswrapper[4681]: I1123 08:24:41.410293 4681 scope.go:117] "RemoveContainer" containerID="c7a9d31309654c089bd0c799fd683b44a5acfc2446ad41b2708bcc113a1fc401" Nov 23 08:24:41 crc kubenswrapper[4681]: I1123 08:24:41.410287 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-87fbv" Nov 23 08:24:41 crc kubenswrapper[4681]: I1123 08:24:41.427663 4681 scope.go:117] "RemoveContainer" containerID="8ca1a4c5aa68a0df0d2fcbbdbdb6bc103759ff13b192b16d49a821790c35a13a" Nov 23 08:24:41 crc kubenswrapper[4681]: I1123 08:24:41.431608 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-87fbv"] Nov 23 08:24:41 crc kubenswrapper[4681]: I1123 08:24:41.437207 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-87fbv"] Nov 23 08:24:41 crc kubenswrapper[4681]: I1123 08:24:41.453553 4681 scope.go:117] "RemoveContainer" containerID="7b67dae1a0f9fc22d1c86259238650f7a70efe9812cf0c01395575a7ea3b3504" Nov 23 08:24:41 crc kubenswrapper[4681]: I1123 08:24:41.479670 4681 scope.go:117] "RemoveContainer" containerID="c7a9d31309654c089bd0c799fd683b44a5acfc2446ad41b2708bcc113a1fc401" Nov 23 08:24:41 crc kubenswrapper[4681]: E1123 08:24:41.480429 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7a9d31309654c089bd0c799fd683b44a5acfc2446ad41b2708bcc113a1fc401\": container with ID starting with c7a9d31309654c089bd0c799fd683b44a5acfc2446ad41b2708bcc113a1fc401 not found: ID does not exist" containerID="c7a9d31309654c089bd0c799fd683b44a5acfc2446ad41b2708bcc113a1fc401" Nov 23 08:24:41 crc kubenswrapper[4681]: I1123 08:24:41.480526 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7a9d31309654c089bd0c799fd683b44a5acfc2446ad41b2708bcc113a1fc401"} err="failed to get container status \"c7a9d31309654c089bd0c799fd683b44a5acfc2446ad41b2708bcc113a1fc401\": rpc error: code = NotFound desc = could not find container \"c7a9d31309654c089bd0c799fd683b44a5acfc2446ad41b2708bcc113a1fc401\": container with ID starting with c7a9d31309654c089bd0c799fd683b44a5acfc2446ad41b2708bcc113a1fc401 not found: ID does not exist" Nov 23 08:24:41 crc kubenswrapper[4681]: I1123 08:24:41.480561 4681 scope.go:117] "RemoveContainer" containerID="8ca1a4c5aa68a0df0d2fcbbdbdb6bc103759ff13b192b16d49a821790c35a13a" Nov 23 08:24:41 crc kubenswrapper[4681]: E1123 08:24:41.481023 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ca1a4c5aa68a0df0d2fcbbdbdb6bc103759ff13b192b16d49a821790c35a13a\": container with ID starting with 8ca1a4c5aa68a0df0d2fcbbdbdb6bc103759ff13b192b16d49a821790c35a13a not found: ID does not exist" containerID="8ca1a4c5aa68a0df0d2fcbbdbdb6bc103759ff13b192b16d49a821790c35a13a" Nov 23 08:24:41 crc kubenswrapper[4681]: I1123 08:24:41.481059 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ca1a4c5aa68a0df0d2fcbbdbdb6bc103759ff13b192b16d49a821790c35a13a"} err="failed to get container status \"8ca1a4c5aa68a0df0d2fcbbdbdb6bc103759ff13b192b16d49a821790c35a13a\": rpc error: code = NotFound desc = could not find container \"8ca1a4c5aa68a0df0d2fcbbdbdb6bc103759ff13b192b16d49a821790c35a13a\": container with ID starting with 8ca1a4c5aa68a0df0d2fcbbdbdb6bc103759ff13b192b16d49a821790c35a13a not found: ID does not exist" Nov 23 08:24:41 crc kubenswrapper[4681]: I1123 08:24:41.481085 4681 scope.go:117] "RemoveContainer" containerID="7b67dae1a0f9fc22d1c86259238650f7a70efe9812cf0c01395575a7ea3b3504" Nov 23 08:24:41 crc kubenswrapper[4681]: E1123 08:24:41.481378 4681 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"7b67dae1a0f9fc22d1c86259238650f7a70efe9812cf0c01395575a7ea3b3504\": container with ID starting with 7b67dae1a0f9fc22d1c86259238650f7a70efe9812cf0c01395575a7ea3b3504 not found: ID does not exist" containerID="7b67dae1a0f9fc22d1c86259238650f7a70efe9812cf0c01395575a7ea3b3504" Nov 23 08:24:41 crc kubenswrapper[4681]: I1123 08:24:41.481484 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b67dae1a0f9fc22d1c86259238650f7a70efe9812cf0c01395575a7ea3b3504"} err="failed to get container status \"7b67dae1a0f9fc22d1c86259238650f7a70efe9812cf0c01395575a7ea3b3504\": rpc error: code = NotFound desc = could not find container \"7b67dae1a0f9fc22d1c86259238650f7a70efe9812cf0c01395575a7ea3b3504\": container with ID starting with 7b67dae1a0f9fc22d1c86259238650f7a70efe9812cf0c01395575a7ea3b3504 not found: ID does not exist" Nov 23 08:24:43 crc kubenswrapper[4681]: I1123 08:24:43.262106 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6d34452-1579-447e-9e1d-1c1c2b5d1e58" path="/var/lib/kubelet/pods/f6d34452-1579-447e-9e1d-1c1c2b5d1e58/volumes" Nov 23 08:26:12 crc kubenswrapper[4681]: I1123 08:26:12.295299 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:26:12 crc kubenswrapper[4681]: I1123 08:26:12.295938 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:26:24 crc kubenswrapper[4681]: I1123 08:26:24.096326 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5lslm"] Nov 23 08:26:24 crc kubenswrapper[4681]: E1123 08:26:24.097411 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6d34452-1579-447e-9e1d-1c1c2b5d1e58" containerName="registry-server" Nov 23 08:26:24 crc kubenswrapper[4681]: I1123 08:26:24.097428 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6d34452-1579-447e-9e1d-1c1c2b5d1e58" containerName="registry-server" Nov 23 08:26:24 crc kubenswrapper[4681]: E1123 08:26:24.097451 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6d34452-1579-447e-9e1d-1c1c2b5d1e58" containerName="extract-content" Nov 23 08:26:24 crc kubenswrapper[4681]: I1123 08:26:24.097470 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6d34452-1579-447e-9e1d-1c1c2b5d1e58" containerName="extract-content" Nov 23 08:26:24 crc kubenswrapper[4681]: E1123 08:26:24.097493 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6d34452-1579-447e-9e1d-1c1c2b5d1e58" containerName="extract-utilities" Nov 23 08:26:24 crc kubenswrapper[4681]: I1123 08:26:24.097499 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6d34452-1579-447e-9e1d-1c1c2b5d1e58" containerName="extract-utilities" Nov 23 08:26:24 crc kubenswrapper[4681]: I1123 08:26:24.097727 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6d34452-1579-447e-9e1d-1c1c2b5d1e58" containerName="registry-server" Nov 23 08:26:24 crc kubenswrapper[4681]: I1123 
08:26:24.099108 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5lslm" Nov 23 08:26:24 crc kubenswrapper[4681]: I1123 08:26:24.112092 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5lslm"] Nov 23 08:26:24 crc kubenswrapper[4681]: I1123 08:26:24.251199 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfx55\" (UniqueName: \"kubernetes.io/projected/bdbf20d0-f0d8-49db-a438-effe9b418a8f-kube-api-access-dfx55\") pod \"redhat-operators-5lslm\" (UID: \"bdbf20d0-f0d8-49db-a438-effe9b418a8f\") " pod="openshift-marketplace/redhat-operators-5lslm" Nov 23 08:26:24 crc kubenswrapper[4681]: I1123 08:26:24.251303 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdbf20d0-f0d8-49db-a438-effe9b418a8f-utilities\") pod \"redhat-operators-5lslm\" (UID: \"bdbf20d0-f0d8-49db-a438-effe9b418a8f\") " pod="openshift-marketplace/redhat-operators-5lslm" Nov 23 08:26:24 crc kubenswrapper[4681]: I1123 08:26:24.251346 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdbf20d0-f0d8-49db-a438-effe9b418a8f-catalog-content\") pod \"redhat-operators-5lslm\" (UID: \"bdbf20d0-f0d8-49db-a438-effe9b418a8f\") " pod="openshift-marketplace/redhat-operators-5lslm" Nov 23 08:26:24 crc kubenswrapper[4681]: I1123 08:26:24.354817 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdbf20d0-f0d8-49db-a438-effe9b418a8f-utilities\") pod \"redhat-operators-5lslm\" (UID: \"bdbf20d0-f0d8-49db-a438-effe9b418a8f\") " pod="openshift-marketplace/redhat-operators-5lslm" Nov 23 08:26:24 crc kubenswrapper[4681]: I1123 08:26:24.354963 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdbf20d0-f0d8-49db-a438-effe9b418a8f-utilities\") pod \"redhat-operators-5lslm\" (UID: \"bdbf20d0-f0d8-49db-a438-effe9b418a8f\") " pod="openshift-marketplace/redhat-operators-5lslm" Nov 23 08:26:24 crc kubenswrapper[4681]: I1123 08:26:24.355052 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdbf20d0-f0d8-49db-a438-effe9b418a8f-catalog-content\") pod \"redhat-operators-5lslm\" (UID: \"bdbf20d0-f0d8-49db-a438-effe9b418a8f\") " pod="openshift-marketplace/redhat-operators-5lslm" Nov 23 08:26:24 crc kubenswrapper[4681]: I1123 08:26:24.355319 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdbf20d0-f0d8-49db-a438-effe9b418a8f-catalog-content\") pod \"redhat-operators-5lslm\" (UID: \"bdbf20d0-f0d8-49db-a438-effe9b418a8f\") " pod="openshift-marketplace/redhat-operators-5lslm" Nov 23 08:26:24 crc kubenswrapper[4681]: I1123 08:26:24.355565 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfx55\" (UniqueName: \"kubernetes.io/projected/bdbf20d0-f0d8-49db-a438-effe9b418a8f-kube-api-access-dfx55\") pod \"redhat-operators-5lslm\" (UID: \"bdbf20d0-f0d8-49db-a438-effe9b418a8f\") " pod="openshift-marketplace/redhat-operators-5lslm" Nov 23 08:26:24 crc kubenswrapper[4681]: I1123 08:26:24.373036 4681 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfx55\" (UniqueName: \"kubernetes.io/projected/bdbf20d0-f0d8-49db-a438-effe9b418a8f-kube-api-access-dfx55\") pod \"redhat-operators-5lslm\" (UID: \"bdbf20d0-f0d8-49db-a438-effe9b418a8f\") " pod="openshift-marketplace/redhat-operators-5lslm" Nov 23 08:26:24 crc kubenswrapper[4681]: I1123 08:26:24.415630 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5lslm" Nov 23 08:26:24 crc kubenswrapper[4681]: I1123 08:26:24.879413 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5lslm"] Nov 23 08:26:25 crc kubenswrapper[4681]: I1123 08:26:25.278291 4681 generic.go:334] "Generic (PLEG): container finished" podID="bdbf20d0-f0d8-49db-a438-effe9b418a8f" containerID="a16f97eb796901c89affc24e814242007933c8c20451f635402c07e4921e894a" exitCode=0 Nov 23 08:26:25 crc kubenswrapper[4681]: I1123 08:26:25.278395 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5lslm" event={"ID":"bdbf20d0-f0d8-49db-a438-effe9b418a8f","Type":"ContainerDied","Data":"a16f97eb796901c89affc24e814242007933c8c20451f635402c07e4921e894a"} Nov 23 08:26:25 crc kubenswrapper[4681]: I1123 08:26:25.278761 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5lslm" event={"ID":"bdbf20d0-f0d8-49db-a438-effe9b418a8f","Type":"ContainerStarted","Data":"eace685fc64c87c810d22fec19f348d4f13536d7a9f62a912dabf548d185d428"} Nov 23 08:26:25 crc kubenswrapper[4681]: I1123 08:26:25.280311 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 08:26:26 crc kubenswrapper[4681]: I1123 08:26:26.291524 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5lslm" event={"ID":"bdbf20d0-f0d8-49db-a438-effe9b418a8f","Type":"ContainerStarted","Data":"66c0a30020d8debde959230621842c247daf65318cddbbb6336b4a5713ff8213"} Nov 23 08:26:28 crc kubenswrapper[4681]: I1123 08:26:28.314363 4681 generic.go:334] "Generic (PLEG): container finished" podID="bdbf20d0-f0d8-49db-a438-effe9b418a8f" containerID="66c0a30020d8debde959230621842c247daf65318cddbbb6336b4a5713ff8213" exitCode=0 Nov 23 08:26:28 crc kubenswrapper[4681]: I1123 08:26:28.314512 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5lslm" event={"ID":"bdbf20d0-f0d8-49db-a438-effe9b418a8f","Type":"ContainerDied","Data":"66c0a30020d8debde959230621842c247daf65318cddbbb6336b4a5713ff8213"} Nov 23 08:26:29 crc kubenswrapper[4681]: I1123 08:26:29.326432 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5lslm" event={"ID":"bdbf20d0-f0d8-49db-a438-effe9b418a8f","Type":"ContainerStarted","Data":"373309e241f6c5b7ee6abc2400e29ec8a39a0aa1cffb9e1021a70ebb101c02f8"} Nov 23 08:26:29 crc kubenswrapper[4681]: I1123 08:26:29.348611 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5lslm" podStartSLOduration=1.783284209 podStartE2EDuration="5.348593243s" podCreationTimestamp="2025-11-23 08:26:24 +0000 UTC" firstStartedPulling="2025-11-23 08:26:25.280094197 +0000 UTC m=+6122.349603434" lastFinishedPulling="2025-11-23 08:26:28.845403231 +0000 UTC m=+6125.914912468" observedRunningTime="2025-11-23 08:26:29.342838473 +0000 UTC m=+6126.412347710" watchObservedRunningTime="2025-11-23 
08:26:29.348593243 +0000 UTC m=+6126.418102480" Nov 23 08:26:34 crc kubenswrapper[4681]: I1123 08:26:34.416622 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5lslm" Nov 23 08:26:34 crc kubenswrapper[4681]: I1123 08:26:34.417279 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5lslm" Nov 23 08:26:35 crc kubenswrapper[4681]: I1123 08:26:35.457227 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5lslm" podUID="bdbf20d0-f0d8-49db-a438-effe9b418a8f" containerName="registry-server" probeResult="failure" output=< Nov 23 08:26:35 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Nov 23 08:26:35 crc kubenswrapper[4681]: > Nov 23 08:26:42 crc kubenswrapper[4681]: I1123 08:26:42.295408 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:26:42 crc kubenswrapper[4681]: I1123 08:26:42.296021 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:26:44 crc kubenswrapper[4681]: I1123 08:26:44.455999 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5lslm" Nov 23 08:26:44 crc kubenswrapper[4681]: I1123 08:26:44.503698 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5lslm" Nov 23 08:26:44 crc kubenswrapper[4681]: I1123 08:26:44.692709 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5lslm"] Nov 23 08:26:46 crc kubenswrapper[4681]: I1123 08:26:46.477060 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5lslm" podUID="bdbf20d0-f0d8-49db-a438-effe9b418a8f" containerName="registry-server" containerID="cri-o://373309e241f6c5b7ee6abc2400e29ec8a39a0aa1cffb9e1021a70ebb101c02f8" gracePeriod=2 Nov 23 08:26:47 crc kubenswrapper[4681]: I1123 08:26:47.008511 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5lslm" Nov 23 08:26:47 crc kubenswrapper[4681]: I1123 08:26:47.070942 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfx55\" (UniqueName: \"kubernetes.io/projected/bdbf20d0-f0d8-49db-a438-effe9b418a8f-kube-api-access-dfx55\") pod \"bdbf20d0-f0d8-49db-a438-effe9b418a8f\" (UID: \"bdbf20d0-f0d8-49db-a438-effe9b418a8f\") " Nov 23 08:26:47 crc kubenswrapper[4681]: I1123 08:26:47.071065 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdbf20d0-f0d8-49db-a438-effe9b418a8f-catalog-content\") pod \"bdbf20d0-f0d8-49db-a438-effe9b418a8f\" (UID: \"bdbf20d0-f0d8-49db-a438-effe9b418a8f\") " Nov 23 08:26:47 crc kubenswrapper[4681]: I1123 08:26:47.071179 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdbf20d0-f0d8-49db-a438-effe9b418a8f-utilities\") pod \"bdbf20d0-f0d8-49db-a438-effe9b418a8f\" (UID: \"bdbf20d0-f0d8-49db-a438-effe9b418a8f\") " Nov 23 08:26:47 crc kubenswrapper[4681]: I1123 08:26:47.073976 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdbf20d0-f0d8-49db-a438-effe9b418a8f-utilities" (OuterVolumeSpecName: "utilities") pod "bdbf20d0-f0d8-49db-a438-effe9b418a8f" (UID: "bdbf20d0-f0d8-49db-a438-effe9b418a8f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:26:47 crc kubenswrapper[4681]: I1123 08:26:47.082238 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdbf20d0-f0d8-49db-a438-effe9b418a8f-kube-api-access-dfx55" (OuterVolumeSpecName: "kube-api-access-dfx55") pod "bdbf20d0-f0d8-49db-a438-effe9b418a8f" (UID: "bdbf20d0-f0d8-49db-a438-effe9b418a8f"). InnerVolumeSpecName "kube-api-access-dfx55". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:26:47 crc kubenswrapper[4681]: I1123 08:26:47.151830 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdbf20d0-f0d8-49db-a438-effe9b418a8f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bdbf20d0-f0d8-49db-a438-effe9b418a8f" (UID: "bdbf20d0-f0d8-49db-a438-effe9b418a8f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:26:47 crc kubenswrapper[4681]: I1123 08:26:47.172674 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdbf20d0-f0d8-49db-a438-effe9b418a8f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:26:47 crc kubenswrapper[4681]: I1123 08:26:47.172705 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdbf20d0-f0d8-49db-a438-effe9b418a8f-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:26:47 crc kubenswrapper[4681]: I1123 08:26:47.172718 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfx55\" (UniqueName: \"kubernetes.io/projected/bdbf20d0-f0d8-49db-a438-effe9b418a8f-kube-api-access-dfx55\") on node \"crc\" DevicePath \"\"" Nov 23 08:26:47 crc kubenswrapper[4681]: I1123 08:26:47.491427 4681 generic.go:334] "Generic (PLEG): container finished" podID="bdbf20d0-f0d8-49db-a438-effe9b418a8f" containerID="373309e241f6c5b7ee6abc2400e29ec8a39a0aa1cffb9e1021a70ebb101c02f8" exitCode=0 Nov 23 08:26:47 crc kubenswrapper[4681]: I1123 08:26:47.491492 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5lslm" event={"ID":"bdbf20d0-f0d8-49db-a438-effe9b418a8f","Type":"ContainerDied","Data":"373309e241f6c5b7ee6abc2400e29ec8a39a0aa1cffb9e1021a70ebb101c02f8"} Nov 23 08:26:47 crc kubenswrapper[4681]: I1123 08:26:47.491526 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5lslm" event={"ID":"bdbf20d0-f0d8-49db-a438-effe9b418a8f","Type":"ContainerDied","Data":"eace685fc64c87c810d22fec19f348d4f13536d7a9f62a912dabf548d185d428"} Nov 23 08:26:47 crc kubenswrapper[4681]: I1123 08:26:47.491544 4681 scope.go:117] "RemoveContainer" containerID="373309e241f6c5b7ee6abc2400e29ec8a39a0aa1cffb9e1021a70ebb101c02f8" Nov 23 08:26:47 crc kubenswrapper[4681]: I1123 08:26:47.491698 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5lslm" Nov 23 08:26:47 crc kubenswrapper[4681]: I1123 08:26:47.515114 4681 scope.go:117] "RemoveContainer" containerID="66c0a30020d8debde959230621842c247daf65318cddbbb6336b4a5713ff8213" Nov 23 08:26:47 crc kubenswrapper[4681]: I1123 08:26:47.517629 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5lslm"] Nov 23 08:26:47 crc kubenswrapper[4681]: I1123 08:26:47.527864 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5lslm"] Nov 23 08:26:47 crc kubenswrapper[4681]: I1123 08:26:47.537040 4681 scope.go:117] "RemoveContainer" containerID="a16f97eb796901c89affc24e814242007933c8c20451f635402c07e4921e894a" Nov 23 08:26:47 crc kubenswrapper[4681]: I1123 08:26:47.587505 4681 scope.go:117] "RemoveContainer" containerID="373309e241f6c5b7ee6abc2400e29ec8a39a0aa1cffb9e1021a70ebb101c02f8" Nov 23 08:26:47 crc kubenswrapper[4681]: E1123 08:26:47.587870 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"373309e241f6c5b7ee6abc2400e29ec8a39a0aa1cffb9e1021a70ebb101c02f8\": container with ID starting with 373309e241f6c5b7ee6abc2400e29ec8a39a0aa1cffb9e1021a70ebb101c02f8 not found: ID does not exist" containerID="373309e241f6c5b7ee6abc2400e29ec8a39a0aa1cffb9e1021a70ebb101c02f8" Nov 23 08:26:47 crc kubenswrapper[4681]: I1123 08:26:47.587905 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"373309e241f6c5b7ee6abc2400e29ec8a39a0aa1cffb9e1021a70ebb101c02f8"} err="failed to get container status \"373309e241f6c5b7ee6abc2400e29ec8a39a0aa1cffb9e1021a70ebb101c02f8\": rpc error: code = NotFound desc = could not find container \"373309e241f6c5b7ee6abc2400e29ec8a39a0aa1cffb9e1021a70ebb101c02f8\": container with ID starting with 373309e241f6c5b7ee6abc2400e29ec8a39a0aa1cffb9e1021a70ebb101c02f8 not found: ID does not exist" Nov 23 08:26:47 crc kubenswrapper[4681]: I1123 08:26:47.587929 4681 scope.go:117] "RemoveContainer" containerID="66c0a30020d8debde959230621842c247daf65318cddbbb6336b4a5713ff8213" Nov 23 08:26:47 crc kubenswrapper[4681]: E1123 08:26:47.588292 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66c0a30020d8debde959230621842c247daf65318cddbbb6336b4a5713ff8213\": container with ID starting with 66c0a30020d8debde959230621842c247daf65318cddbbb6336b4a5713ff8213 not found: ID does not exist" containerID="66c0a30020d8debde959230621842c247daf65318cddbbb6336b4a5713ff8213" Nov 23 08:26:47 crc kubenswrapper[4681]: I1123 08:26:47.588315 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66c0a30020d8debde959230621842c247daf65318cddbbb6336b4a5713ff8213"} err="failed to get container status \"66c0a30020d8debde959230621842c247daf65318cddbbb6336b4a5713ff8213\": rpc error: code = NotFound desc = could not find container \"66c0a30020d8debde959230621842c247daf65318cddbbb6336b4a5713ff8213\": container with ID starting with 66c0a30020d8debde959230621842c247daf65318cddbbb6336b4a5713ff8213 not found: ID does not exist" Nov 23 08:26:47 crc kubenswrapper[4681]: I1123 08:26:47.588329 4681 scope.go:117] "RemoveContainer" containerID="a16f97eb796901c89affc24e814242007933c8c20451f635402c07e4921e894a" Nov 23 08:26:47 crc kubenswrapper[4681]: E1123 08:26:47.588672 4681 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"a16f97eb796901c89affc24e814242007933c8c20451f635402c07e4921e894a\": container with ID starting with a16f97eb796901c89affc24e814242007933c8c20451f635402c07e4921e894a not found: ID does not exist" containerID="a16f97eb796901c89affc24e814242007933c8c20451f635402c07e4921e894a" Nov 23 08:26:47 crc kubenswrapper[4681]: I1123 08:26:47.588691 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a16f97eb796901c89affc24e814242007933c8c20451f635402c07e4921e894a"} err="failed to get container status \"a16f97eb796901c89affc24e814242007933c8c20451f635402c07e4921e894a\": rpc error: code = NotFound desc = could not find container \"a16f97eb796901c89affc24e814242007933c8c20451f635402c07e4921e894a\": container with ID starting with a16f97eb796901c89affc24e814242007933c8c20451f635402c07e4921e894a not found: ID does not exist" Nov 23 08:26:49 crc kubenswrapper[4681]: I1123 08:26:49.260394 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdbf20d0-f0d8-49db-a438-effe9b418a8f" path="/var/lib/kubelet/pods/bdbf20d0-f0d8-49db-a438-effe9b418a8f/volumes" Nov 23 08:27:12 crc kubenswrapper[4681]: I1123 08:27:12.295692 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:27:12 crc kubenswrapper[4681]: I1123 08:27:12.296395 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:27:12 crc kubenswrapper[4681]: I1123 08:27:12.296443 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" Nov 23 08:27:12 crc kubenswrapper[4681]: I1123 08:27:12.297327 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"06f801f5c38a38a16a89b057559b054ed85c5e9ba9b81b998a31f582df7f4bda"} pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 08:27:12 crc kubenswrapper[4681]: I1123 08:27:12.297376 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" containerID="cri-o://06f801f5c38a38a16a89b057559b054ed85c5e9ba9b81b998a31f582df7f4bda" gracePeriod=600 Nov 23 08:27:12 crc kubenswrapper[4681]: I1123 08:27:12.686263 4681 generic.go:334] "Generic (PLEG): container finished" podID="539dc58c-e752-43c8-bdef-af87528b76f3" containerID="06f801f5c38a38a16a89b057559b054ed85c5e9ba9b81b998a31f582df7f4bda" exitCode=0 Nov 23 08:27:12 crc kubenswrapper[4681]: I1123 08:27:12.686340 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerDied","Data":"06f801f5c38a38a16a89b057559b054ed85c5e9ba9b81b998a31f582df7f4bda"} 
Nov 23 08:27:12 crc kubenswrapper[4681]: I1123 08:27:12.686686 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerStarted","Data":"34c49cc0a591c6d6df0c15f5eb83c1e233310e3a956b7aeb015f20e28800ec3f"} Nov 23 08:27:12 crc kubenswrapper[4681]: I1123 08:27:12.686714 4681 scope.go:117] "RemoveContainer" containerID="627017b2e50bb6c85944805c7f0eb614f68d81f157510d798194642ebd7c85b5" Nov 23 08:27:34 crc kubenswrapper[4681]: I1123 08:27:34.440172 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cls7g"] Nov 23 08:27:34 crc kubenswrapper[4681]: E1123 08:27:34.441471 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdbf20d0-f0d8-49db-a438-effe9b418a8f" containerName="extract-utilities" Nov 23 08:27:34 crc kubenswrapper[4681]: I1123 08:27:34.441486 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdbf20d0-f0d8-49db-a438-effe9b418a8f" containerName="extract-utilities" Nov 23 08:27:34 crc kubenswrapper[4681]: E1123 08:27:34.441517 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdbf20d0-f0d8-49db-a438-effe9b418a8f" containerName="registry-server" Nov 23 08:27:34 crc kubenswrapper[4681]: I1123 08:27:34.441523 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdbf20d0-f0d8-49db-a438-effe9b418a8f" containerName="registry-server" Nov 23 08:27:34 crc kubenswrapper[4681]: E1123 08:27:34.441536 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdbf20d0-f0d8-49db-a438-effe9b418a8f" containerName="extract-content" Nov 23 08:27:34 crc kubenswrapper[4681]: I1123 08:27:34.441542 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdbf20d0-f0d8-49db-a438-effe9b418a8f" containerName="extract-content" Nov 23 08:27:34 crc kubenswrapper[4681]: I1123 08:27:34.441777 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdbf20d0-f0d8-49db-a438-effe9b418a8f" containerName="registry-server" Nov 23 08:27:34 crc kubenswrapper[4681]: I1123 08:27:34.444230 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cls7g" Nov 23 08:27:34 crc kubenswrapper[4681]: I1123 08:27:34.452193 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cls7g"] Nov 23 08:27:34 crc kubenswrapper[4681]: I1123 08:27:34.575340 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e0ce239-b733-48d9-a95f-ea5ff900774b-utilities\") pod \"certified-operators-cls7g\" (UID: \"0e0ce239-b733-48d9-a95f-ea5ff900774b\") " pod="openshift-marketplace/certified-operators-cls7g" Nov 23 08:27:34 crc kubenswrapper[4681]: I1123 08:27:34.575548 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e0ce239-b733-48d9-a95f-ea5ff900774b-catalog-content\") pod \"certified-operators-cls7g\" (UID: \"0e0ce239-b733-48d9-a95f-ea5ff900774b\") " pod="openshift-marketplace/certified-operators-cls7g" Nov 23 08:27:34 crc kubenswrapper[4681]: I1123 08:27:34.575633 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k6lv\" (UniqueName: \"kubernetes.io/projected/0e0ce239-b733-48d9-a95f-ea5ff900774b-kube-api-access-7k6lv\") pod \"certified-operators-cls7g\" (UID: \"0e0ce239-b733-48d9-a95f-ea5ff900774b\") " pod="openshift-marketplace/certified-operators-cls7g" Nov 23 08:27:34 crc kubenswrapper[4681]: I1123 08:27:34.678380 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k6lv\" (UniqueName: \"kubernetes.io/projected/0e0ce239-b733-48d9-a95f-ea5ff900774b-kube-api-access-7k6lv\") pod \"certified-operators-cls7g\" (UID: \"0e0ce239-b733-48d9-a95f-ea5ff900774b\") " pod="openshift-marketplace/certified-operators-cls7g" Nov 23 08:27:34 crc kubenswrapper[4681]: I1123 08:27:34.678924 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e0ce239-b733-48d9-a95f-ea5ff900774b-utilities\") pod \"certified-operators-cls7g\" (UID: \"0e0ce239-b733-48d9-a95f-ea5ff900774b\") " pod="openshift-marketplace/certified-operators-cls7g" Nov 23 08:27:34 crc kubenswrapper[4681]: I1123 08:27:34.679053 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e0ce239-b733-48d9-a95f-ea5ff900774b-catalog-content\") pod \"certified-operators-cls7g\" (UID: \"0e0ce239-b733-48d9-a95f-ea5ff900774b\") " pod="openshift-marketplace/certified-operators-cls7g" Nov 23 08:27:34 crc kubenswrapper[4681]: I1123 08:27:34.679490 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e0ce239-b733-48d9-a95f-ea5ff900774b-utilities\") pod \"certified-operators-cls7g\" (UID: \"0e0ce239-b733-48d9-a95f-ea5ff900774b\") " pod="openshift-marketplace/certified-operators-cls7g" Nov 23 08:27:34 crc kubenswrapper[4681]: I1123 08:27:34.679568 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e0ce239-b733-48d9-a95f-ea5ff900774b-catalog-content\") pod \"certified-operators-cls7g\" (UID: \"0e0ce239-b733-48d9-a95f-ea5ff900774b\") " pod="openshift-marketplace/certified-operators-cls7g" Nov 23 08:27:34 crc kubenswrapper[4681]: I1123 08:27:34.716633 4681 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7k6lv\" (UniqueName: \"kubernetes.io/projected/0e0ce239-b733-48d9-a95f-ea5ff900774b-kube-api-access-7k6lv\") pod \"certified-operators-cls7g\" (UID: \"0e0ce239-b733-48d9-a95f-ea5ff900774b\") " pod="openshift-marketplace/certified-operators-cls7g" Nov 23 08:27:34 crc kubenswrapper[4681]: I1123 08:27:34.774635 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cls7g" Nov 23 08:27:35 crc kubenswrapper[4681]: I1123 08:27:35.358324 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cls7g"] Nov 23 08:27:35 crc kubenswrapper[4681]: I1123 08:27:35.889132 4681 generic.go:334] "Generic (PLEG): container finished" podID="0e0ce239-b733-48d9-a95f-ea5ff900774b" containerID="180cb60d354744aa756018edfb2562980c97cf8ca054a428d00472118f625a0f" exitCode=0 Nov 23 08:27:35 crc kubenswrapper[4681]: I1123 08:27:35.889197 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cls7g" event={"ID":"0e0ce239-b733-48d9-a95f-ea5ff900774b","Type":"ContainerDied","Data":"180cb60d354744aa756018edfb2562980c97cf8ca054a428d00472118f625a0f"} Nov 23 08:27:35 crc kubenswrapper[4681]: I1123 08:27:35.889475 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cls7g" event={"ID":"0e0ce239-b733-48d9-a95f-ea5ff900774b","Type":"ContainerStarted","Data":"af20f874c7bbc6d609f4144f86a386b90dbe2907b5163ddc406ad95f53842500"} Nov 23 08:27:36 crc kubenswrapper[4681]: I1123 08:27:36.902033 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cls7g" event={"ID":"0e0ce239-b733-48d9-a95f-ea5ff900774b","Type":"ContainerStarted","Data":"7c0dd3652f6e106483b765b8463f37438e2f40d5cba914d4831f74c610a3f96f"} Nov 23 08:27:37 crc kubenswrapper[4681]: I1123 08:27:37.911955 4681 generic.go:334] "Generic (PLEG): container finished" podID="0e0ce239-b733-48d9-a95f-ea5ff900774b" containerID="7c0dd3652f6e106483b765b8463f37438e2f40d5cba914d4831f74c610a3f96f" exitCode=0 Nov 23 08:27:37 crc kubenswrapper[4681]: I1123 08:27:37.912049 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cls7g" event={"ID":"0e0ce239-b733-48d9-a95f-ea5ff900774b","Type":"ContainerDied","Data":"7c0dd3652f6e106483b765b8463f37438e2f40d5cba914d4831f74c610a3f96f"} Nov 23 08:27:38 crc kubenswrapper[4681]: I1123 08:27:38.922560 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cls7g" event={"ID":"0e0ce239-b733-48d9-a95f-ea5ff900774b","Type":"ContainerStarted","Data":"58f9ab43720c19178dbebb2c860eab7e31527db4fb1bd30969134249a41d0a16"} Nov 23 08:27:38 crc kubenswrapper[4681]: I1123 08:27:38.945917 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cls7g" podStartSLOduration=2.444934928 podStartE2EDuration="4.945900854s" podCreationTimestamp="2025-11-23 08:27:34 +0000 UTC" firstStartedPulling="2025-11-23 08:27:35.890738958 +0000 UTC m=+6192.960248195" lastFinishedPulling="2025-11-23 08:27:38.391704885 +0000 UTC m=+6195.461214121" observedRunningTime="2025-11-23 08:27:38.93888093 +0000 UTC m=+6196.008390167" watchObservedRunningTime="2025-11-23 08:27:38.945900854 +0000 UTC m=+6196.015410091" Nov 23 08:27:44 crc kubenswrapper[4681]: I1123 08:27:44.775588 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-cls7g" Nov 23 08:27:44 crc kubenswrapper[4681]: I1123 08:27:44.775956 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cls7g" Nov 23 08:27:44 crc kubenswrapper[4681]: I1123 08:27:44.809697 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cls7g" Nov 23 08:27:45 crc kubenswrapper[4681]: I1123 08:27:45.007669 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cls7g" Nov 23 08:27:45 crc kubenswrapper[4681]: I1123 08:27:45.044308 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cls7g"] Nov 23 08:27:46 crc kubenswrapper[4681]: I1123 08:27:46.983268 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cls7g" podUID="0e0ce239-b733-48d9-a95f-ea5ff900774b" containerName="registry-server" containerID="cri-o://58f9ab43720c19178dbebb2c860eab7e31527db4fb1bd30969134249a41d0a16" gracePeriod=2 Nov 23 08:27:47 crc kubenswrapper[4681]: I1123 08:27:47.435505 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cls7g" Nov 23 08:27:47 crc kubenswrapper[4681]: I1123 08:27:47.536002 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7k6lv\" (UniqueName: \"kubernetes.io/projected/0e0ce239-b733-48d9-a95f-ea5ff900774b-kube-api-access-7k6lv\") pod \"0e0ce239-b733-48d9-a95f-ea5ff900774b\" (UID: \"0e0ce239-b733-48d9-a95f-ea5ff900774b\") " Nov 23 08:27:47 crc kubenswrapper[4681]: I1123 08:27:47.536060 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e0ce239-b733-48d9-a95f-ea5ff900774b-utilities\") pod \"0e0ce239-b733-48d9-a95f-ea5ff900774b\" (UID: \"0e0ce239-b733-48d9-a95f-ea5ff900774b\") " Nov 23 08:27:47 crc kubenswrapper[4681]: I1123 08:27:47.536214 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e0ce239-b733-48d9-a95f-ea5ff900774b-catalog-content\") pod \"0e0ce239-b733-48d9-a95f-ea5ff900774b\" (UID: \"0e0ce239-b733-48d9-a95f-ea5ff900774b\") " Nov 23 08:27:47 crc kubenswrapper[4681]: I1123 08:27:47.537092 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e0ce239-b733-48d9-a95f-ea5ff900774b-utilities" (OuterVolumeSpecName: "utilities") pod "0e0ce239-b733-48d9-a95f-ea5ff900774b" (UID: "0e0ce239-b733-48d9-a95f-ea5ff900774b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:27:47 crc kubenswrapper[4681]: I1123 08:27:47.541585 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e0ce239-b733-48d9-a95f-ea5ff900774b-kube-api-access-7k6lv" (OuterVolumeSpecName: "kube-api-access-7k6lv") pod "0e0ce239-b733-48d9-a95f-ea5ff900774b" (UID: "0e0ce239-b733-48d9-a95f-ea5ff900774b"). InnerVolumeSpecName "kube-api-access-7k6lv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:27:47 crc kubenswrapper[4681]: I1123 08:27:47.581098 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e0ce239-b733-48d9-a95f-ea5ff900774b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e0ce239-b733-48d9-a95f-ea5ff900774b" (UID: "0e0ce239-b733-48d9-a95f-ea5ff900774b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:27:47 crc kubenswrapper[4681]: I1123 08:27:47.639276 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7k6lv\" (UniqueName: \"kubernetes.io/projected/0e0ce239-b733-48d9-a95f-ea5ff900774b-kube-api-access-7k6lv\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:47 crc kubenswrapper[4681]: I1123 08:27:47.639424 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e0ce239-b733-48d9-a95f-ea5ff900774b-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:47 crc kubenswrapper[4681]: I1123 08:27:47.639515 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e0ce239-b733-48d9-a95f-ea5ff900774b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:47 crc kubenswrapper[4681]: I1123 08:27:47.994329 4681 generic.go:334] "Generic (PLEG): container finished" podID="0e0ce239-b733-48d9-a95f-ea5ff900774b" containerID="58f9ab43720c19178dbebb2c860eab7e31527db4fb1bd30969134249a41d0a16" exitCode=0 Nov 23 08:27:47 crc kubenswrapper[4681]: I1123 08:27:47.994446 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cls7g" event={"ID":"0e0ce239-b733-48d9-a95f-ea5ff900774b","Type":"ContainerDied","Data":"58f9ab43720c19178dbebb2c860eab7e31527db4fb1bd30969134249a41d0a16"} Nov 23 08:27:47 crc kubenswrapper[4681]: I1123 08:27:47.994447 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cls7g" Nov 23 08:27:47 crc kubenswrapper[4681]: I1123 08:27:47.994821 4681 scope.go:117] "RemoveContainer" containerID="58f9ab43720c19178dbebb2c860eab7e31527db4fb1bd30969134249a41d0a16" Nov 23 08:27:47 crc kubenswrapper[4681]: I1123 08:27:47.994792 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cls7g" event={"ID":"0e0ce239-b733-48d9-a95f-ea5ff900774b","Type":"ContainerDied","Data":"af20f874c7bbc6d609f4144f86a386b90dbe2907b5163ddc406ad95f53842500"} Nov 23 08:27:48 crc kubenswrapper[4681]: I1123 08:27:48.028365 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cls7g"] Nov 23 08:27:48 crc kubenswrapper[4681]: I1123 08:27:48.032350 4681 scope.go:117] "RemoveContainer" containerID="7c0dd3652f6e106483b765b8463f37438e2f40d5cba914d4831f74c610a3f96f" Nov 23 08:27:48 crc kubenswrapper[4681]: I1123 08:27:48.035728 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cls7g"] Nov 23 08:27:48 crc kubenswrapper[4681]: I1123 08:27:48.053493 4681 scope.go:117] "RemoveContainer" containerID="180cb60d354744aa756018edfb2562980c97cf8ca054a428d00472118f625a0f" Nov 23 08:27:48 crc kubenswrapper[4681]: I1123 08:27:48.086631 4681 scope.go:117] "RemoveContainer" containerID="58f9ab43720c19178dbebb2c860eab7e31527db4fb1bd30969134249a41d0a16" Nov 23 08:27:48 crc kubenswrapper[4681]: E1123 08:27:48.086954 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58f9ab43720c19178dbebb2c860eab7e31527db4fb1bd30969134249a41d0a16\": container with ID starting with 58f9ab43720c19178dbebb2c860eab7e31527db4fb1bd30969134249a41d0a16 not found: ID does not exist" containerID="58f9ab43720c19178dbebb2c860eab7e31527db4fb1bd30969134249a41d0a16" Nov 23 08:27:48 crc kubenswrapper[4681]: I1123 08:27:48.087062 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58f9ab43720c19178dbebb2c860eab7e31527db4fb1bd30969134249a41d0a16"} err="failed to get container status \"58f9ab43720c19178dbebb2c860eab7e31527db4fb1bd30969134249a41d0a16\": rpc error: code = NotFound desc = could not find container \"58f9ab43720c19178dbebb2c860eab7e31527db4fb1bd30969134249a41d0a16\": container with ID starting with 58f9ab43720c19178dbebb2c860eab7e31527db4fb1bd30969134249a41d0a16 not found: ID does not exist" Nov 23 08:27:48 crc kubenswrapper[4681]: I1123 08:27:48.087174 4681 scope.go:117] "RemoveContainer" containerID="7c0dd3652f6e106483b765b8463f37438e2f40d5cba914d4831f74c610a3f96f" Nov 23 08:27:48 crc kubenswrapper[4681]: E1123 08:27:48.087518 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c0dd3652f6e106483b765b8463f37438e2f40d5cba914d4831f74c610a3f96f\": container with ID starting with 7c0dd3652f6e106483b765b8463f37438e2f40d5cba914d4831f74c610a3f96f not found: ID does not exist" containerID="7c0dd3652f6e106483b765b8463f37438e2f40d5cba914d4831f74c610a3f96f" Nov 23 08:27:48 crc kubenswrapper[4681]: I1123 08:27:48.087550 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c0dd3652f6e106483b765b8463f37438e2f40d5cba914d4831f74c610a3f96f"} err="failed to get container status \"7c0dd3652f6e106483b765b8463f37438e2f40d5cba914d4831f74c610a3f96f\": rpc error: code = NotFound desc = could not find 
container \"7c0dd3652f6e106483b765b8463f37438e2f40d5cba914d4831f74c610a3f96f\": container with ID starting with 7c0dd3652f6e106483b765b8463f37438e2f40d5cba914d4831f74c610a3f96f not found: ID does not exist" Nov 23 08:27:48 crc kubenswrapper[4681]: I1123 08:27:48.087572 4681 scope.go:117] "RemoveContainer" containerID="180cb60d354744aa756018edfb2562980c97cf8ca054a428d00472118f625a0f" Nov 23 08:27:48 crc kubenswrapper[4681]: E1123 08:27:48.087953 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"180cb60d354744aa756018edfb2562980c97cf8ca054a428d00472118f625a0f\": container with ID starting with 180cb60d354744aa756018edfb2562980c97cf8ca054a428d00472118f625a0f not found: ID does not exist" containerID="180cb60d354744aa756018edfb2562980c97cf8ca054a428d00472118f625a0f" Nov 23 08:27:48 crc kubenswrapper[4681]: I1123 08:27:48.087980 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"180cb60d354744aa756018edfb2562980c97cf8ca054a428d00472118f625a0f"} err="failed to get container status \"180cb60d354744aa756018edfb2562980c97cf8ca054a428d00472118f625a0f\": rpc error: code = NotFound desc = could not find container \"180cb60d354744aa756018edfb2562980c97cf8ca054a428d00472118f625a0f\": container with ID starting with 180cb60d354744aa756018edfb2562980c97cf8ca054a428d00472118f625a0f not found: ID does not exist" Nov 23 08:27:49 crc kubenswrapper[4681]: I1123 08:27:49.261234 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e0ce239-b733-48d9-a95f-ea5ff900774b" path="/var/lib/kubelet/pods/0e0ce239-b733-48d9-a95f-ea5ff900774b/volumes" Nov 23 08:29:12 crc kubenswrapper[4681]: I1123 08:29:12.296057 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:29:12 crc kubenswrapper[4681]: I1123 08:29:12.296699 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:29:42 crc kubenswrapper[4681]: I1123 08:29:42.298204 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:29:42 crc kubenswrapper[4681]: I1123 08:29:42.298818 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:30:00 crc kubenswrapper[4681]: I1123 08:30:00.151297 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398110-8fcqw"] Nov 23 08:30:00 crc kubenswrapper[4681]: E1123 08:30:00.152368 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e0ce239-b733-48d9-a95f-ea5ff900774b" 
containerName="extract-utilities" Nov 23 08:30:00 crc kubenswrapper[4681]: I1123 08:30:00.152383 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e0ce239-b733-48d9-a95f-ea5ff900774b" containerName="extract-utilities" Nov 23 08:30:00 crc kubenswrapper[4681]: E1123 08:30:00.152395 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e0ce239-b733-48d9-a95f-ea5ff900774b" containerName="extract-content" Nov 23 08:30:00 crc kubenswrapper[4681]: I1123 08:30:00.152401 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e0ce239-b733-48d9-a95f-ea5ff900774b" containerName="extract-content" Nov 23 08:30:00 crc kubenswrapper[4681]: E1123 08:30:00.152443 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e0ce239-b733-48d9-a95f-ea5ff900774b" containerName="registry-server" Nov 23 08:30:00 crc kubenswrapper[4681]: I1123 08:30:00.152449 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e0ce239-b733-48d9-a95f-ea5ff900774b" containerName="registry-server" Nov 23 08:30:00 crc kubenswrapper[4681]: I1123 08:30:00.152637 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e0ce239-b733-48d9-a95f-ea5ff900774b" containerName="registry-server" Nov 23 08:30:00 crc kubenswrapper[4681]: I1123 08:30:00.153285 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-8fcqw" Nov 23 08:30:00 crc kubenswrapper[4681]: I1123 08:30:00.161263 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 08:30:00 crc kubenswrapper[4681]: I1123 08:30:00.161273 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 08:30:00 crc kubenswrapper[4681]: I1123 08:30:00.170891 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398110-8fcqw"] Nov 23 08:30:00 crc kubenswrapper[4681]: I1123 08:30:00.308572 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qqxg\" (UniqueName: \"kubernetes.io/projected/deed2733-f872-4d90-8f03-4fe213b28629-kube-api-access-9qqxg\") pod \"collect-profiles-29398110-8fcqw\" (UID: \"deed2733-f872-4d90-8f03-4fe213b28629\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-8fcqw" Nov 23 08:30:00 crc kubenswrapper[4681]: I1123 08:30:00.308615 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/deed2733-f872-4d90-8f03-4fe213b28629-config-volume\") pod \"collect-profiles-29398110-8fcqw\" (UID: \"deed2733-f872-4d90-8f03-4fe213b28629\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-8fcqw" Nov 23 08:30:00 crc kubenswrapper[4681]: I1123 08:30:00.308660 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/deed2733-f872-4d90-8f03-4fe213b28629-secret-volume\") pod \"collect-profiles-29398110-8fcqw\" (UID: \"deed2733-f872-4d90-8f03-4fe213b28629\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-8fcqw" Nov 23 08:30:00 crc kubenswrapper[4681]: I1123 08:30:00.410714 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qqxg\" (UniqueName: 
\"kubernetes.io/projected/deed2733-f872-4d90-8f03-4fe213b28629-kube-api-access-9qqxg\") pod \"collect-profiles-29398110-8fcqw\" (UID: \"deed2733-f872-4d90-8f03-4fe213b28629\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-8fcqw" Nov 23 08:30:00 crc kubenswrapper[4681]: I1123 08:30:00.411400 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/deed2733-f872-4d90-8f03-4fe213b28629-config-volume\") pod \"collect-profiles-29398110-8fcqw\" (UID: \"deed2733-f872-4d90-8f03-4fe213b28629\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-8fcqw" Nov 23 08:30:00 crc kubenswrapper[4681]: I1123 08:30:00.412129 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/deed2733-f872-4d90-8f03-4fe213b28629-config-volume\") pod \"collect-profiles-29398110-8fcqw\" (UID: \"deed2733-f872-4d90-8f03-4fe213b28629\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-8fcqw" Nov 23 08:30:00 crc kubenswrapper[4681]: I1123 08:30:00.412270 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/deed2733-f872-4d90-8f03-4fe213b28629-secret-volume\") pod \"collect-profiles-29398110-8fcqw\" (UID: \"deed2733-f872-4d90-8f03-4fe213b28629\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-8fcqw" Nov 23 08:30:00 crc kubenswrapper[4681]: I1123 08:30:00.421200 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/deed2733-f872-4d90-8f03-4fe213b28629-secret-volume\") pod \"collect-profiles-29398110-8fcqw\" (UID: \"deed2733-f872-4d90-8f03-4fe213b28629\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-8fcqw" Nov 23 08:30:00 crc kubenswrapper[4681]: I1123 08:30:00.426988 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qqxg\" (UniqueName: \"kubernetes.io/projected/deed2733-f872-4d90-8f03-4fe213b28629-kube-api-access-9qqxg\") pod \"collect-profiles-29398110-8fcqw\" (UID: \"deed2733-f872-4d90-8f03-4fe213b28629\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-8fcqw" Nov 23 08:30:00 crc kubenswrapper[4681]: I1123 08:30:00.474587 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-8fcqw" Nov 23 08:30:00 crc kubenswrapper[4681]: I1123 08:30:00.896560 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398110-8fcqw"] Nov 23 08:30:01 crc kubenswrapper[4681]: I1123 08:30:01.120856 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-8fcqw" event={"ID":"deed2733-f872-4d90-8f03-4fe213b28629","Type":"ContainerStarted","Data":"c53eacf41cacc1403010f7367aebfeecef5367f05b5daf443c646e49ad576e43"} Nov 23 08:30:01 crc kubenswrapper[4681]: I1123 08:30:01.120925 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-8fcqw" event={"ID":"deed2733-f872-4d90-8f03-4fe213b28629","Type":"ContainerStarted","Data":"e3a2b1dca3ded97970dca20b2aa8ce160bb02aac087f6be77e2f0da3cdd82169"} Nov 23 08:30:01 crc kubenswrapper[4681]: I1123 08:30:01.141169 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-8fcqw" podStartSLOduration=1.141154072 podStartE2EDuration="1.141154072s" podCreationTimestamp="2025-11-23 08:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:30:01.135985656 +0000 UTC m=+6338.205494893" watchObservedRunningTime="2025-11-23 08:30:01.141154072 +0000 UTC m=+6338.210663300" Nov 23 08:30:02 crc kubenswrapper[4681]: I1123 08:30:02.130594 4681 generic.go:334] "Generic (PLEG): container finished" podID="deed2733-f872-4d90-8f03-4fe213b28629" containerID="c53eacf41cacc1403010f7367aebfeecef5367f05b5daf443c646e49ad576e43" exitCode=0 Nov 23 08:30:02 crc kubenswrapper[4681]: I1123 08:30:02.130678 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-8fcqw" event={"ID":"deed2733-f872-4d90-8f03-4fe213b28629","Type":"ContainerDied","Data":"c53eacf41cacc1403010f7367aebfeecef5367f05b5daf443c646e49ad576e43"} Nov 23 08:30:03 crc kubenswrapper[4681]: I1123 08:30:03.471031 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-8fcqw" Nov 23 08:30:03 crc kubenswrapper[4681]: I1123 08:30:03.592108 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/deed2733-f872-4d90-8f03-4fe213b28629-config-volume\") pod \"deed2733-f872-4d90-8f03-4fe213b28629\" (UID: \"deed2733-f872-4d90-8f03-4fe213b28629\") " Nov 23 08:30:03 crc kubenswrapper[4681]: I1123 08:30:03.592165 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/deed2733-f872-4d90-8f03-4fe213b28629-secret-volume\") pod \"deed2733-f872-4d90-8f03-4fe213b28629\" (UID: \"deed2733-f872-4d90-8f03-4fe213b28629\") " Nov 23 08:30:03 crc kubenswrapper[4681]: I1123 08:30:03.592236 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qqxg\" (UniqueName: \"kubernetes.io/projected/deed2733-f872-4d90-8f03-4fe213b28629-kube-api-access-9qqxg\") pod \"deed2733-f872-4d90-8f03-4fe213b28629\" (UID: \"deed2733-f872-4d90-8f03-4fe213b28629\") " Nov 23 08:30:03 crc kubenswrapper[4681]: I1123 08:30:03.592871 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deed2733-f872-4d90-8f03-4fe213b28629-config-volume" (OuterVolumeSpecName: "config-volume") pod "deed2733-f872-4d90-8f03-4fe213b28629" (UID: "deed2733-f872-4d90-8f03-4fe213b28629"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:30:03 crc kubenswrapper[4681]: I1123 08:30:03.599417 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deed2733-f872-4d90-8f03-4fe213b28629-kube-api-access-9qqxg" (OuterVolumeSpecName: "kube-api-access-9qqxg") pod "deed2733-f872-4d90-8f03-4fe213b28629" (UID: "deed2733-f872-4d90-8f03-4fe213b28629"). InnerVolumeSpecName "kube-api-access-9qqxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:30:03 crc kubenswrapper[4681]: I1123 08:30:03.599535 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deed2733-f872-4d90-8f03-4fe213b28629-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "deed2733-f872-4d90-8f03-4fe213b28629" (UID: "deed2733-f872-4d90-8f03-4fe213b28629"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:30:03 crc kubenswrapper[4681]: I1123 08:30:03.695183 4681 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/deed2733-f872-4d90-8f03-4fe213b28629-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 08:30:03 crc kubenswrapper[4681]: I1123 08:30:03.695231 4681 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/deed2733-f872-4d90-8f03-4fe213b28629-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 08:30:03 crc kubenswrapper[4681]: I1123 08:30:03.695242 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qqxg\" (UniqueName: \"kubernetes.io/projected/deed2733-f872-4d90-8f03-4fe213b28629-kube-api-access-9qqxg\") on node \"crc\" DevicePath \"\"" Nov 23 08:30:04 crc kubenswrapper[4681]: I1123 08:30:04.152386 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-8fcqw" event={"ID":"deed2733-f872-4d90-8f03-4fe213b28629","Type":"ContainerDied","Data":"e3a2b1dca3ded97970dca20b2aa8ce160bb02aac087f6be77e2f0da3cdd82169"} Nov 23 08:30:04 crc kubenswrapper[4681]: I1123 08:30:04.152446 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-8fcqw" Nov 23 08:30:04 crc kubenswrapper[4681]: I1123 08:30:04.152428 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3a2b1dca3ded97970dca20b2aa8ce160bb02aac087f6be77e2f0da3cdd82169" Nov 23 08:30:04 crc kubenswrapper[4681]: I1123 08:30:04.537287 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398065-c786c"] Nov 23 08:30:04 crc kubenswrapper[4681]: I1123 08:30:04.543850 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398065-c786c"] Nov 23 08:30:05 crc kubenswrapper[4681]: I1123 08:30:05.260745 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90dcf73b-94ea-4db5-bae9-bc368ade1aee" path="/var/lib/kubelet/pods/90dcf73b-94ea-4db5-bae9-bc368ade1aee/volumes" Nov 23 08:30:12 crc kubenswrapper[4681]: I1123 08:30:12.295755 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:30:12 crc kubenswrapper[4681]: I1123 08:30:12.296276 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:30:12 crc kubenswrapper[4681]: I1123 08:30:12.296317 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" Nov 23 08:30:12 crc kubenswrapper[4681]: I1123 08:30:12.296728 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"34c49cc0a591c6d6df0c15f5eb83c1e233310e3a956b7aeb015f20e28800ec3f"} 
pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 08:30:12 crc kubenswrapper[4681]: I1123 08:30:12.296770 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" containerID="cri-o://34c49cc0a591c6d6df0c15f5eb83c1e233310e3a956b7aeb015f20e28800ec3f" gracePeriod=600 Nov 23 08:30:12 crc kubenswrapper[4681]: E1123 08:30:12.418857 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:30:13 crc kubenswrapper[4681]: I1123 08:30:13.227091 4681 generic.go:334] "Generic (PLEG): container finished" podID="539dc58c-e752-43c8-bdef-af87528b76f3" containerID="34c49cc0a591c6d6df0c15f5eb83c1e233310e3a956b7aeb015f20e28800ec3f" exitCode=0 Nov 23 08:30:13 crc kubenswrapper[4681]: I1123 08:30:13.227143 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerDied","Data":"34c49cc0a591c6d6df0c15f5eb83c1e233310e3a956b7aeb015f20e28800ec3f"} Nov 23 08:30:13 crc kubenswrapper[4681]: I1123 08:30:13.227188 4681 scope.go:117] "RemoveContainer" containerID="06f801f5c38a38a16a89b057559b054ed85c5e9ba9b81b998a31f582df7f4bda" Nov 23 08:30:13 crc kubenswrapper[4681]: I1123 08:30:13.227786 4681 scope.go:117] "RemoveContainer" containerID="34c49cc0a591c6d6df0c15f5eb83c1e233310e3a956b7aeb015f20e28800ec3f" Nov 23 08:30:13 crc kubenswrapper[4681]: E1123 08:30:13.228097 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:30:25 crc kubenswrapper[4681]: I1123 08:30:25.254511 4681 scope.go:117] "RemoveContainer" containerID="34c49cc0a591c6d6df0c15f5eb83c1e233310e3a956b7aeb015f20e28800ec3f" Nov 23 08:30:25 crc kubenswrapper[4681]: E1123 08:30:25.255290 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:30:38 crc kubenswrapper[4681]: I1123 08:30:38.251548 4681 scope.go:117] "RemoveContainer" containerID="34c49cc0a591c6d6df0c15f5eb83c1e233310e3a956b7aeb015f20e28800ec3f" Nov 23 08:30:38 crc kubenswrapper[4681]: E1123 08:30:38.252264 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:30:48 crc kubenswrapper[4681]: I1123 08:30:48.371683 4681 scope.go:117] "RemoveContainer" containerID="a9abf024ab1a36816512f3a105e69afb615696bae10e6fd2dd360ac2823da541" Nov 23 08:30:52 crc kubenswrapper[4681]: I1123 08:30:52.251476 4681 scope.go:117] "RemoveContainer" containerID="34c49cc0a591c6d6df0c15f5eb83c1e233310e3a956b7aeb015f20e28800ec3f" Nov 23 08:30:52 crc kubenswrapper[4681]: E1123 08:30:52.253866 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:31:05 crc kubenswrapper[4681]: I1123 08:31:05.253407 4681 scope.go:117] "RemoveContainer" containerID="34c49cc0a591c6d6df0c15f5eb83c1e233310e3a956b7aeb015f20e28800ec3f" Nov 23 08:31:05 crc kubenswrapper[4681]: E1123 08:31:05.254513 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:31:20 crc kubenswrapper[4681]: I1123 08:31:20.252354 4681 scope.go:117] "RemoveContainer" containerID="34c49cc0a591c6d6df0c15f5eb83c1e233310e3a956b7aeb015f20e28800ec3f" Nov 23 08:31:20 crc kubenswrapper[4681]: E1123 08:31:20.252976 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:31:34 crc kubenswrapper[4681]: I1123 08:31:34.252177 4681 scope.go:117] "RemoveContainer" containerID="34c49cc0a591c6d6df0c15f5eb83c1e233310e3a956b7aeb015f20e28800ec3f" Nov 23 08:31:34 crc kubenswrapper[4681]: E1123 08:31:34.253436 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:31:48 crc kubenswrapper[4681]: I1123 08:31:48.252210 4681 scope.go:117] "RemoveContainer" containerID="34c49cc0a591c6d6df0c15f5eb83c1e233310e3a956b7aeb015f20e28800ec3f" Nov 23 08:31:48 crc kubenswrapper[4681]: E1123 08:31:48.253887 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:32:03 crc kubenswrapper[4681]: I1123 08:32:03.256430 4681 scope.go:117] "RemoveContainer" containerID="34c49cc0a591c6d6df0c15f5eb83c1e233310e3a956b7aeb015f20e28800ec3f" Nov 23 08:32:03 crc kubenswrapper[4681]: E1123 08:32:03.257816 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:32:16 crc kubenswrapper[4681]: I1123 08:32:16.239229 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6zrgf"] Nov 23 08:32:16 crc kubenswrapper[4681]: E1123 08:32:16.240677 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deed2733-f872-4d90-8f03-4fe213b28629" containerName="collect-profiles" Nov 23 08:32:16 crc kubenswrapper[4681]: I1123 08:32:16.240756 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="deed2733-f872-4d90-8f03-4fe213b28629" containerName="collect-profiles" Nov 23 08:32:16 crc kubenswrapper[4681]: I1123 08:32:16.241050 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="deed2733-f872-4d90-8f03-4fe213b28629" containerName="collect-profiles" Nov 23 08:32:16 crc kubenswrapper[4681]: I1123 08:32:16.249420 4681 util.go:30] "No sandbox for pod can be found. 
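
The "RemoveContainer" / "back-off 5m0s" pairs above repeat on every sync until the back-off window expires (the container is finally restarted at 08:35:26, later in this log). A minimal sketch of the doubling delay behind the message, assuming kubelet's usual 10s initial container back-off and the 5m cap that appears verbatim in the error:

package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second        // assumed initial back-off per failed restart
	const maxDelay = 5 * time.Minute // the "back-off 5m0s" cap seen in the log
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("restart attempt %d: wait %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // further attempts stay pinned at 5m0s
		}
	}
}
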
Need to start a new one" pod="openshift-marketplace/community-operators-6zrgf" Nov 23 08:32:16 crc kubenswrapper[4681]: I1123 08:32:16.252677 4681 scope.go:117] "RemoveContainer" containerID="34c49cc0a591c6d6df0c15f5eb83c1e233310e3a956b7aeb015f20e28800ec3f" Nov 23 08:32:16 crc kubenswrapper[4681]: E1123 08:32:16.252869 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:32:16 crc kubenswrapper[4681]: I1123 08:32:16.257285 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6zrgf"] Nov 23 08:32:16 crc kubenswrapper[4681]: I1123 08:32:16.349828 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvzw2\" (UniqueName: \"kubernetes.io/projected/2968f8f1-4192-4001-8417-8805131fcb37-kube-api-access-hvzw2\") pod \"community-operators-6zrgf\" (UID: \"2968f8f1-4192-4001-8417-8805131fcb37\") " pod="openshift-marketplace/community-operators-6zrgf" Nov 23 08:32:16 crc kubenswrapper[4681]: I1123 08:32:16.351817 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2968f8f1-4192-4001-8417-8805131fcb37-catalog-content\") pod \"community-operators-6zrgf\" (UID: \"2968f8f1-4192-4001-8417-8805131fcb37\") " pod="openshift-marketplace/community-operators-6zrgf" Nov 23 08:32:16 crc kubenswrapper[4681]: I1123 08:32:16.351969 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2968f8f1-4192-4001-8417-8805131fcb37-utilities\") pod \"community-operators-6zrgf\" (UID: \"2968f8f1-4192-4001-8417-8805131fcb37\") " pod="openshift-marketplace/community-operators-6zrgf" Nov 23 08:32:16 crc kubenswrapper[4681]: I1123 08:32:16.453418 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2968f8f1-4192-4001-8417-8805131fcb37-catalog-content\") pod \"community-operators-6zrgf\" (UID: \"2968f8f1-4192-4001-8417-8805131fcb37\") " pod="openshift-marketplace/community-operators-6zrgf" Nov 23 08:32:16 crc kubenswrapper[4681]: I1123 08:32:16.453783 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2968f8f1-4192-4001-8417-8805131fcb37-utilities\") pod \"community-operators-6zrgf\" (UID: \"2968f8f1-4192-4001-8417-8805131fcb37\") " pod="openshift-marketplace/community-operators-6zrgf" Nov 23 08:32:16 crc kubenswrapper[4681]: I1123 08:32:16.453984 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvzw2\" (UniqueName: \"kubernetes.io/projected/2968f8f1-4192-4001-8417-8805131fcb37-kube-api-access-hvzw2\") pod \"community-operators-6zrgf\" (UID: \"2968f8f1-4192-4001-8417-8805131fcb37\") " pod="openshift-marketplace/community-operators-6zrgf" Nov 23 08:32:16 crc kubenswrapper[4681]: I1123 08:32:16.454765 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
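
Each volume above goes through the same three reconciler steps: VerifyControllerAttachedVolume, MountVolume started, MountVolume.SetUp succeeded. A minimal sketch of that desired-state/actual-state loop (not kubelet's real reconciler; empty-dir and projected volumes need no external attach, so the verify step succeeds immediately):

package main

import "fmt"

type volume struct{ name, plugin string }

// reconcile drives each desired volume that is not yet mounted through
// verify -> mount -> record, mirroring the reconciler_common.go entries above.
func reconcile(desired []volume, mounted map[string]bool) {
	for _, v := range desired {
		if mounted[v.name] {
			continue // already in the actual state of the world
		}
		// VerifyControllerAttachedVolume: immediate for these plugin types.
		// MountVolume.SetUp: the plugin materializes the volume under
		// /var/lib/kubelet/pods/<uid>/volumes/.
		mounted[v.name] = true
		fmt.Printf("MountVolume.SetUp succeeded for %q (%s)\n", v.name, v.plugin)
	}
}

func main() {
	desired := []volume{
		{"catalog-content", "kubernetes.io/empty-dir"},
		{"utilities", "kubernetes.io/empty-dir"},
		{"kube-api-access-hvzw2", "kubernetes.io/projected"},
	}
	reconcile(desired, map[string]bool{})
}
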
\"kubernetes.io/empty-dir/2968f8f1-4192-4001-8417-8805131fcb37-catalog-content\") pod \"community-operators-6zrgf\" (UID: \"2968f8f1-4192-4001-8417-8805131fcb37\") " pod="openshift-marketplace/community-operators-6zrgf" Nov 23 08:32:16 crc kubenswrapper[4681]: I1123 08:32:16.454850 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2968f8f1-4192-4001-8417-8805131fcb37-utilities\") pod \"community-operators-6zrgf\" (UID: \"2968f8f1-4192-4001-8417-8805131fcb37\") " pod="openshift-marketplace/community-operators-6zrgf" Nov 23 08:32:16 crc kubenswrapper[4681]: I1123 08:32:16.470067 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvzw2\" (UniqueName: \"kubernetes.io/projected/2968f8f1-4192-4001-8417-8805131fcb37-kube-api-access-hvzw2\") pod \"community-operators-6zrgf\" (UID: \"2968f8f1-4192-4001-8417-8805131fcb37\") " pod="openshift-marketplace/community-operators-6zrgf" Nov 23 08:32:16 crc kubenswrapper[4681]: I1123 08:32:16.583119 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6zrgf" Nov 23 08:32:17 crc kubenswrapper[4681]: I1123 08:32:17.062724 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6zrgf"] Nov 23 08:32:17 crc kubenswrapper[4681]: W1123 08:32:17.075043 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2968f8f1_4192_4001_8417_8805131fcb37.slice/crio-5e69c851dc64f22533ce6d25d5700487e967984e234e2e3be72f3e81fe71d0e7 WatchSource:0}: Error finding container 5e69c851dc64f22533ce6d25d5700487e967984e234e2e3be72f3e81fe71d0e7: Status 404 returned error can't find the container with id 5e69c851dc64f22533ce6d25d5700487e967984e234e2e3be72f3e81fe71d0e7 Nov 23 08:32:17 crc kubenswrapper[4681]: I1123 08:32:17.158327 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zrgf" event={"ID":"2968f8f1-4192-4001-8417-8805131fcb37","Type":"ContainerStarted","Data":"5e69c851dc64f22533ce6d25d5700487e967984e234e2e3be72f3e81fe71d0e7"} Nov 23 08:32:18 crc kubenswrapper[4681]: I1123 08:32:18.166010 4681 generic.go:334] "Generic (PLEG): container finished" podID="2968f8f1-4192-4001-8417-8805131fcb37" containerID="aef7cd8b54b46d941e02c74d7b3cce5b29d9e245ecb951e75ab2d70b239c2e04" exitCode=0 Nov 23 08:32:18 crc kubenswrapper[4681]: I1123 08:32:18.166092 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zrgf" event={"ID":"2968f8f1-4192-4001-8417-8805131fcb37","Type":"ContainerDied","Data":"aef7cd8b54b46d941e02c74d7b3cce5b29d9e245ecb951e75ab2d70b239c2e04"} Nov 23 08:32:18 crc kubenswrapper[4681]: I1123 08:32:18.167872 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 08:32:19 crc kubenswrapper[4681]: I1123 08:32:19.175634 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zrgf" event={"ID":"2968f8f1-4192-4001-8417-8805131fcb37","Type":"ContainerStarted","Data":"006edffe29ad9f00d860419c64fc2d81b8658b677ff318f3172cc35c8a66c516"} Nov 23 08:32:20 crc kubenswrapper[4681]: I1123 08:32:20.183199 4681 generic.go:334] "Generic (PLEG): container finished" podID="2968f8f1-4192-4001-8417-8805131fcb37" containerID="006edffe29ad9f00d860419c64fc2d81b8658b677ff318f3172cc35c8a66c516" exitCode=0 Nov 23 
08:32:20 crc kubenswrapper[4681]: I1123 08:32:20.183390 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zrgf" event={"ID":"2968f8f1-4192-4001-8417-8805131fcb37","Type":"ContainerDied","Data":"006edffe29ad9f00d860419c64fc2d81b8658b677ff318f3172cc35c8a66c516"} Nov 23 08:32:21 crc kubenswrapper[4681]: I1123 08:32:21.192977 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zrgf" event={"ID":"2968f8f1-4192-4001-8417-8805131fcb37","Type":"ContainerStarted","Data":"27695dd224ea48984b1bfdc4fc96c1721b728162d0c0e8eea9393b7e66cd533f"} Nov 23 08:32:21 crc kubenswrapper[4681]: I1123 08:32:21.211777 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6zrgf" podStartSLOduration=2.728454376 podStartE2EDuration="5.211761745s" podCreationTimestamp="2025-11-23 08:32:16 +0000 UTC" firstStartedPulling="2025-11-23 08:32:18.167616113 +0000 UTC m=+6475.237125350" lastFinishedPulling="2025-11-23 08:32:20.650923482 +0000 UTC m=+6477.720432719" observedRunningTime="2025-11-23 08:32:21.205007923 +0000 UTC m=+6478.274517160" watchObservedRunningTime="2025-11-23 08:32:21.211761745 +0000 UTC m=+6478.281270983" Nov 23 08:32:26 crc kubenswrapper[4681]: I1123 08:32:26.583780 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6zrgf" Nov 23 08:32:26 crc kubenswrapper[4681]: I1123 08:32:26.584339 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6zrgf" Nov 23 08:32:26 crc kubenswrapper[4681]: I1123 08:32:26.622312 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6zrgf" Nov 23 08:32:27 crc kubenswrapper[4681]: I1123 08:32:27.267339 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6zrgf" Nov 23 08:32:27 crc kubenswrapper[4681]: I1123 08:32:27.305282 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6zrgf"] Nov 23 08:32:29 crc kubenswrapper[4681]: I1123 08:32:29.246855 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6zrgf" podUID="2968f8f1-4192-4001-8417-8805131fcb37" containerName="registry-server" containerID="cri-o://27695dd224ea48984b1bfdc4fc96c1721b728162d0c0e8eea9393b7e66cd533f" gracePeriod=2 Nov 23 08:32:29 crc kubenswrapper[4681]: I1123 08:32:29.658702 4681 util.go:48] "No ready sandbox for pod can be found. 
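
The ContainerStarted/ContainerDied pairs above are PLEG events, and they can be pulled out of a journal dump like this one mechanically. A minimal Go sketch that scans stdin for the event={...} payloads kubenswrapper prints (assumes the lines are fed as-is, e.g. from journalctl):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches the event payloads logged above, e.g.
// event={"ID":"2968f8f1-...","Type":"ContainerDied","Data":"006edffe..."}
var pleg = regexp.MustCompile(`event=\{"ID":"([^"]+)","Type":"([^"]+)","Data":"([^"]+)"\}`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		if m := pleg.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("pod %s: %s %s\n", m[1], m[2], m[3])
		}
	}
}
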
Need to start a new one" pod="openshift-marketplace/community-operators-6zrgf" Nov 23 08:32:29 crc kubenswrapper[4681]: I1123 08:32:29.796642 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvzw2\" (UniqueName: \"kubernetes.io/projected/2968f8f1-4192-4001-8417-8805131fcb37-kube-api-access-hvzw2\") pod \"2968f8f1-4192-4001-8417-8805131fcb37\" (UID: \"2968f8f1-4192-4001-8417-8805131fcb37\") " Nov 23 08:32:29 crc kubenswrapper[4681]: I1123 08:32:29.796786 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2968f8f1-4192-4001-8417-8805131fcb37-catalog-content\") pod \"2968f8f1-4192-4001-8417-8805131fcb37\" (UID: \"2968f8f1-4192-4001-8417-8805131fcb37\") " Nov 23 08:32:29 crc kubenswrapper[4681]: I1123 08:32:29.796823 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2968f8f1-4192-4001-8417-8805131fcb37-utilities\") pod \"2968f8f1-4192-4001-8417-8805131fcb37\" (UID: \"2968f8f1-4192-4001-8417-8805131fcb37\") " Nov 23 08:32:29 crc kubenswrapper[4681]: I1123 08:32:29.797669 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2968f8f1-4192-4001-8417-8805131fcb37-utilities" (OuterVolumeSpecName: "utilities") pod "2968f8f1-4192-4001-8417-8805131fcb37" (UID: "2968f8f1-4192-4001-8417-8805131fcb37"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:32:29 crc kubenswrapper[4681]: I1123 08:32:29.807584 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2968f8f1-4192-4001-8417-8805131fcb37-kube-api-access-hvzw2" (OuterVolumeSpecName: "kube-api-access-hvzw2") pod "2968f8f1-4192-4001-8417-8805131fcb37" (UID: "2968f8f1-4192-4001-8417-8805131fcb37"). InnerVolumeSpecName "kube-api-access-hvzw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:32:29 crc kubenswrapper[4681]: I1123 08:32:29.835026 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2968f8f1-4192-4001-8417-8805131fcb37-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2968f8f1-4192-4001-8417-8805131fcb37" (UID: "2968f8f1-4192-4001-8417-8805131fcb37"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:32:29 crc kubenswrapper[4681]: I1123 08:32:29.898811 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvzw2\" (UniqueName: \"kubernetes.io/projected/2968f8f1-4192-4001-8417-8805131fcb37-kube-api-access-hvzw2\") on node \"crc\" DevicePath \"\"" Nov 23 08:32:29 crc kubenswrapper[4681]: I1123 08:32:29.898843 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2968f8f1-4192-4001-8417-8805131fcb37-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:32:29 crc kubenswrapper[4681]: I1123 08:32:29.898853 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2968f8f1-4192-4001-8417-8805131fcb37-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:32:30 crc kubenswrapper[4681]: I1123 08:32:30.252978 4681 scope.go:117] "RemoveContainer" containerID="34c49cc0a591c6d6df0c15f5eb83c1e233310e3a956b7aeb015f20e28800ec3f" Nov 23 08:32:30 crc kubenswrapper[4681]: E1123 08:32:30.253191 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:32:30 crc kubenswrapper[4681]: I1123 08:32:30.256211 4681 generic.go:334] "Generic (PLEG): container finished" podID="2968f8f1-4192-4001-8417-8805131fcb37" containerID="27695dd224ea48984b1bfdc4fc96c1721b728162d0c0e8eea9393b7e66cd533f" exitCode=0 Nov 23 08:32:30 crc kubenswrapper[4681]: I1123 08:32:30.256250 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zrgf" event={"ID":"2968f8f1-4192-4001-8417-8805131fcb37","Type":"ContainerDied","Data":"27695dd224ea48984b1bfdc4fc96c1721b728162d0c0e8eea9393b7e66cd533f"} Nov 23 08:32:30 crc kubenswrapper[4681]: I1123 08:32:30.256253 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6zrgf" Nov 23 08:32:30 crc kubenswrapper[4681]: I1123 08:32:30.256280 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zrgf" event={"ID":"2968f8f1-4192-4001-8417-8805131fcb37","Type":"ContainerDied","Data":"5e69c851dc64f22533ce6d25d5700487e967984e234e2e3be72f3e81fe71d0e7"} Nov 23 08:32:30 crc kubenswrapper[4681]: I1123 08:32:30.256296 4681 scope.go:117] "RemoveContainer" containerID="27695dd224ea48984b1bfdc4fc96c1721b728162d0c0e8eea9393b7e66cd533f" Nov 23 08:32:30 crc kubenswrapper[4681]: I1123 08:32:30.285250 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6zrgf"] Nov 23 08:32:30 crc kubenswrapper[4681]: I1123 08:32:30.289392 4681 scope.go:117] "RemoveContainer" containerID="006edffe29ad9f00d860419c64fc2d81b8658b677ff318f3172cc35c8a66c516" Nov 23 08:32:30 crc kubenswrapper[4681]: I1123 08:32:30.294019 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6zrgf"] Nov 23 08:32:30 crc kubenswrapper[4681]: I1123 08:32:30.307610 4681 scope.go:117] "RemoveContainer" containerID="aef7cd8b54b46d941e02c74d7b3cce5b29d9e245ecb951e75ab2d70b239c2e04" Nov 23 08:32:30 crc kubenswrapper[4681]: I1123 08:32:30.339413 4681 scope.go:117] "RemoveContainer" containerID="27695dd224ea48984b1bfdc4fc96c1721b728162d0c0e8eea9393b7e66cd533f" Nov 23 08:32:30 crc kubenswrapper[4681]: E1123 08:32:30.339779 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27695dd224ea48984b1bfdc4fc96c1721b728162d0c0e8eea9393b7e66cd533f\": container with ID starting with 27695dd224ea48984b1bfdc4fc96c1721b728162d0c0e8eea9393b7e66cd533f not found: ID does not exist" containerID="27695dd224ea48984b1bfdc4fc96c1721b728162d0c0e8eea9393b7e66cd533f" Nov 23 08:32:30 crc kubenswrapper[4681]: I1123 08:32:30.339815 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27695dd224ea48984b1bfdc4fc96c1721b728162d0c0e8eea9393b7e66cd533f"} err="failed to get container status \"27695dd224ea48984b1bfdc4fc96c1721b728162d0c0e8eea9393b7e66cd533f\": rpc error: code = NotFound desc = could not find container \"27695dd224ea48984b1bfdc4fc96c1721b728162d0c0e8eea9393b7e66cd533f\": container with ID starting with 27695dd224ea48984b1bfdc4fc96c1721b728162d0c0e8eea9393b7e66cd533f not found: ID does not exist" Nov 23 08:32:30 crc kubenswrapper[4681]: I1123 08:32:30.339841 4681 scope.go:117] "RemoveContainer" containerID="006edffe29ad9f00d860419c64fc2d81b8658b677ff318f3172cc35c8a66c516" Nov 23 08:32:30 crc kubenswrapper[4681]: E1123 08:32:30.340152 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"006edffe29ad9f00d860419c64fc2d81b8658b677ff318f3172cc35c8a66c516\": container with ID starting with 006edffe29ad9f00d860419c64fc2d81b8658b677ff318f3172cc35c8a66c516 not found: ID does not exist" containerID="006edffe29ad9f00d860419c64fc2d81b8658b677ff318f3172cc35c8a66c516" Nov 23 08:32:30 crc kubenswrapper[4681]: I1123 08:32:30.340181 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"006edffe29ad9f00d860419c64fc2d81b8658b677ff318f3172cc35c8a66c516"} err="failed to get container status \"006edffe29ad9f00d860419c64fc2d81b8658b677ff318f3172cc35c8a66c516\": rpc error: code = NotFound desc = could not find 
container \"006edffe29ad9f00d860419c64fc2d81b8658b677ff318f3172cc35c8a66c516\": container with ID starting with 006edffe29ad9f00d860419c64fc2d81b8658b677ff318f3172cc35c8a66c516 not found: ID does not exist" Nov 23 08:32:30 crc kubenswrapper[4681]: I1123 08:32:30.340206 4681 scope.go:117] "RemoveContainer" containerID="aef7cd8b54b46d941e02c74d7b3cce5b29d9e245ecb951e75ab2d70b239c2e04" Nov 23 08:32:30 crc kubenswrapper[4681]: E1123 08:32:30.340447 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aef7cd8b54b46d941e02c74d7b3cce5b29d9e245ecb951e75ab2d70b239c2e04\": container with ID starting with aef7cd8b54b46d941e02c74d7b3cce5b29d9e245ecb951e75ab2d70b239c2e04 not found: ID does not exist" containerID="aef7cd8b54b46d941e02c74d7b3cce5b29d9e245ecb951e75ab2d70b239c2e04" Nov 23 08:32:30 crc kubenswrapper[4681]: I1123 08:32:30.340569 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aef7cd8b54b46d941e02c74d7b3cce5b29d9e245ecb951e75ab2d70b239c2e04"} err="failed to get container status \"aef7cd8b54b46d941e02c74d7b3cce5b29d9e245ecb951e75ab2d70b239c2e04\": rpc error: code = NotFound desc = could not find container \"aef7cd8b54b46d941e02c74d7b3cce5b29d9e245ecb951e75ab2d70b239c2e04\": container with ID starting with aef7cd8b54b46d941e02c74d7b3cce5b29d9e245ecb951e75ab2d70b239c2e04 not found: ID does not exist" Nov 23 08:32:31 crc kubenswrapper[4681]: I1123 08:32:31.260897 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2968f8f1-4192-4001-8417-8805131fcb37" path="/var/lib/kubelet/pods/2968f8f1-4192-4001-8417-8805131fcb37/volumes" Nov 23 08:32:43 crc kubenswrapper[4681]: I1123 08:32:43.256904 4681 scope.go:117] "RemoveContainer" containerID="34c49cc0a591c6d6df0c15f5eb83c1e233310e3a956b7aeb015f20e28800ec3f" Nov 23 08:32:43 crc kubenswrapper[4681]: E1123 08:32:43.257673 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:32:58 crc kubenswrapper[4681]: I1123 08:32:58.251571 4681 scope.go:117] "RemoveContainer" containerID="34c49cc0a591c6d6df0c15f5eb83c1e233310e3a956b7aeb015f20e28800ec3f" Nov 23 08:32:58 crc kubenswrapper[4681]: E1123 08:32:58.252316 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:33:11 crc kubenswrapper[4681]: I1123 08:33:11.252114 4681 scope.go:117] "RemoveContainer" containerID="34c49cc0a591c6d6df0c15f5eb83c1e233310e3a956b7aeb015f20e28800ec3f" Nov 23 08:33:11 crc kubenswrapper[4681]: E1123 08:33:11.252731 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:33:24 crc kubenswrapper[4681]: I1123 08:33:24.251926 4681 scope.go:117] "RemoveContainer" containerID="34c49cc0a591c6d6df0c15f5eb83c1e233310e3a956b7aeb015f20e28800ec3f" Nov 23 08:33:24 crc kubenswrapper[4681]: E1123 08:33:24.252439 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:33:39 crc kubenswrapper[4681]: I1123 08:33:39.252725 4681 scope.go:117] "RemoveContainer" containerID="34c49cc0a591c6d6df0c15f5eb83c1e233310e3a956b7aeb015f20e28800ec3f" Nov 23 08:33:39 crc kubenswrapper[4681]: E1123 08:33:39.253559 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:33:54 crc kubenswrapper[4681]: I1123 08:33:54.253571 4681 scope.go:117] "RemoveContainer" containerID="34c49cc0a591c6d6df0c15f5eb83c1e233310e3a956b7aeb015f20e28800ec3f" Nov 23 08:33:54 crc kubenswrapper[4681]: E1123 08:33:54.254954 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:34:06 crc kubenswrapper[4681]: I1123 08:34:06.252226 4681 scope.go:117] "RemoveContainer" containerID="34c49cc0a591c6d6df0c15f5eb83c1e233310e3a956b7aeb015f20e28800ec3f" Nov 23 08:34:06 crc kubenswrapper[4681]: E1123 08:34:06.252868 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:34:19 crc kubenswrapper[4681]: I1123 08:34:19.251733 4681 scope.go:117] "RemoveContainer" containerID="34c49cc0a591c6d6df0c15f5eb83c1e233310e3a956b7aeb015f20e28800ec3f" Nov 23 08:34:19 crc kubenswrapper[4681]: E1123 08:34:19.252445 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" 
podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:34:31 crc kubenswrapper[4681]: I1123 08:34:31.252054 4681 scope.go:117] "RemoveContainer" containerID="34c49cc0a591c6d6df0c15f5eb83c1e233310e3a956b7aeb015f20e28800ec3f" Nov 23 08:34:31 crc kubenswrapper[4681]: E1123 08:34:31.252663 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:34:43 crc kubenswrapper[4681]: I1123 08:34:43.256044 4681 scope.go:117] "RemoveContainer" containerID="34c49cc0a591c6d6df0c15f5eb83c1e233310e3a956b7aeb015f20e28800ec3f" Nov 23 08:34:43 crc kubenswrapper[4681]: E1123 08:34:43.256571 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:34:44 crc kubenswrapper[4681]: I1123 08:34:44.780838 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-96gdt"] Nov 23 08:34:44 crc kubenswrapper[4681]: E1123 08:34:44.781325 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2968f8f1-4192-4001-8417-8805131fcb37" containerName="extract-content" Nov 23 08:34:44 crc kubenswrapper[4681]: I1123 08:34:44.781337 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="2968f8f1-4192-4001-8417-8805131fcb37" containerName="extract-content" Nov 23 08:34:44 crc kubenswrapper[4681]: E1123 08:34:44.781363 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2968f8f1-4192-4001-8417-8805131fcb37" containerName="extract-utilities" Nov 23 08:34:44 crc kubenswrapper[4681]: I1123 08:34:44.781368 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="2968f8f1-4192-4001-8417-8805131fcb37" containerName="extract-utilities" Nov 23 08:34:44 crc kubenswrapper[4681]: E1123 08:34:44.781380 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2968f8f1-4192-4001-8417-8805131fcb37" containerName="registry-server" Nov 23 08:34:44 crc kubenswrapper[4681]: I1123 08:34:44.781385 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="2968f8f1-4192-4001-8417-8805131fcb37" containerName="registry-server" Nov 23 08:34:44 crc kubenswrapper[4681]: I1123 08:34:44.781592 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="2968f8f1-4192-4001-8417-8805131fcb37" containerName="registry-server" Nov 23 08:34:44 crc kubenswrapper[4681]: I1123 08:34:44.782735 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-96gdt" Nov 23 08:34:44 crc kubenswrapper[4681]: I1123 08:34:44.792664 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-96gdt"] Nov 23 08:34:44 crc kubenswrapper[4681]: I1123 08:34:44.908742 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e-utilities\") pod \"redhat-marketplace-96gdt\" (UID: \"f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e\") " pod="openshift-marketplace/redhat-marketplace-96gdt" Nov 23 08:34:44 crc kubenswrapper[4681]: I1123 08:34:44.908852 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e-catalog-content\") pod \"redhat-marketplace-96gdt\" (UID: \"f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e\") " pod="openshift-marketplace/redhat-marketplace-96gdt" Nov 23 08:34:44 crc kubenswrapper[4681]: I1123 08:34:44.908882 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2fk2\" (UniqueName: \"kubernetes.io/projected/f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e-kube-api-access-l2fk2\") pod \"redhat-marketplace-96gdt\" (UID: \"f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e\") " pod="openshift-marketplace/redhat-marketplace-96gdt" Nov 23 08:34:45 crc kubenswrapper[4681]: I1123 08:34:45.010487 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e-utilities\") pod \"redhat-marketplace-96gdt\" (UID: \"f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e\") " pod="openshift-marketplace/redhat-marketplace-96gdt" Nov 23 08:34:45 crc kubenswrapper[4681]: I1123 08:34:45.010588 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e-catalog-content\") pod \"redhat-marketplace-96gdt\" (UID: \"f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e\") " pod="openshift-marketplace/redhat-marketplace-96gdt" Nov 23 08:34:45 crc kubenswrapper[4681]: I1123 08:34:45.010617 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2fk2\" (UniqueName: \"kubernetes.io/projected/f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e-kube-api-access-l2fk2\") pod \"redhat-marketplace-96gdt\" (UID: \"f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e\") " pod="openshift-marketplace/redhat-marketplace-96gdt" Nov 23 08:34:45 crc kubenswrapper[4681]: I1123 08:34:45.011269 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e-utilities\") pod \"redhat-marketplace-96gdt\" (UID: \"f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e\") " pod="openshift-marketplace/redhat-marketplace-96gdt" Nov 23 08:34:45 crc kubenswrapper[4681]: I1123 08:34:45.011516 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e-catalog-content\") pod \"redhat-marketplace-96gdt\" (UID: \"f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e\") " pod="openshift-marketplace/redhat-marketplace-96gdt" Nov 23 08:34:45 crc kubenswrapper[4681]: I1123 08:34:45.028253 4681 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-l2fk2\" (UniqueName: \"kubernetes.io/projected/f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e-kube-api-access-l2fk2\") pod \"redhat-marketplace-96gdt\" (UID: \"f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e\") " pod="openshift-marketplace/redhat-marketplace-96gdt" Nov 23 08:34:45 crc kubenswrapper[4681]: I1123 08:34:45.101821 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-96gdt" Nov 23 08:34:45 crc kubenswrapper[4681]: I1123 08:34:45.500570 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-96gdt"] Nov 23 08:34:46 crc kubenswrapper[4681]: I1123 08:34:46.332892 4681 generic.go:334] "Generic (PLEG): container finished" podID="f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e" containerID="5a3926b272a5ebe85fe3c14a5ca7b3972f71cf5e59a70376d3aa1ee42e3c8437" exitCode=0 Nov 23 08:34:46 crc kubenswrapper[4681]: I1123 08:34:46.332938 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-96gdt" event={"ID":"f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e","Type":"ContainerDied","Data":"5a3926b272a5ebe85fe3c14a5ca7b3972f71cf5e59a70376d3aa1ee42e3c8437"} Nov 23 08:34:46 crc kubenswrapper[4681]: I1123 08:34:46.333110 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-96gdt" event={"ID":"f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e","Type":"ContainerStarted","Data":"5d7ed462baaa4564e8714fe7543d143fb6dfb8fc3ebb8aa215a33e5b2cefdb9c"} Nov 23 08:34:47 crc kubenswrapper[4681]: I1123 08:34:47.341282 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-96gdt" event={"ID":"f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e","Type":"ContainerStarted","Data":"6138b5edf4a0df92a51af3d39ad2c2b7780118b9a57b2081e21213d2877a4112"} Nov 23 08:34:48 crc kubenswrapper[4681]: I1123 08:34:48.354418 4681 generic.go:334] "Generic (PLEG): container finished" podID="f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e" containerID="6138b5edf4a0df92a51af3d39ad2c2b7780118b9a57b2081e21213d2877a4112" exitCode=0 Nov 23 08:34:48 crc kubenswrapper[4681]: I1123 08:34:48.354767 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-96gdt" event={"ID":"f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e","Type":"ContainerDied","Data":"6138b5edf4a0df92a51af3d39ad2c2b7780118b9a57b2081e21213d2877a4112"} Nov 23 08:34:49 crc kubenswrapper[4681]: I1123 08:34:49.361779 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-96gdt" event={"ID":"f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e","Type":"ContainerStarted","Data":"c4c784a388d29bc077e5e96b6b666fd6d00613ff56ae04e830daa18aa0e903f7"} Nov 23 08:34:49 crc kubenswrapper[4681]: I1123 08:34:49.376031 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-96gdt" podStartSLOduration=2.808728893 podStartE2EDuration="5.376015678s" podCreationTimestamp="2025-11-23 08:34:44 +0000 UTC" firstStartedPulling="2025-11-23 08:34:46.334366527 +0000 UTC m=+6623.403875764" lastFinishedPulling="2025-11-23 08:34:48.901653312 +0000 UTC m=+6625.971162549" observedRunningTime="2025-11-23 08:34:49.374963605 +0000 UTC m=+6626.444472842" watchObservedRunningTime="2025-11-23 08:34:49.376015678 +0000 UTC m=+6626.445524915" Nov 23 08:34:55 crc kubenswrapper[4681]: I1123 08:34:55.102598 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-96gdt" Nov 23 08:34:55 crc kubenswrapper[4681]: I1123 08:34:55.103205 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-96gdt" Nov 23 08:34:55 crc kubenswrapper[4681]: I1123 08:34:55.147627 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-96gdt" Nov 23 08:34:55 crc kubenswrapper[4681]: I1123 08:34:55.438834 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-96gdt" Nov 23 08:34:56 crc kubenswrapper[4681]: I1123 08:34:56.566354 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-96gdt"] Nov 23 08:34:57 crc kubenswrapper[4681]: I1123 08:34:57.253001 4681 scope.go:117] "RemoveContainer" containerID="34c49cc0a591c6d6df0c15f5eb83c1e233310e3a956b7aeb015f20e28800ec3f" Nov 23 08:34:57 crc kubenswrapper[4681]: E1123 08:34:57.254112 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:34:57 crc kubenswrapper[4681]: I1123 08:34:57.416932 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-96gdt" podUID="f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e" containerName="registry-server" containerID="cri-o://c4c784a388d29bc077e5e96b6b666fd6d00613ff56ae04e830daa18aa0e903f7" gracePeriod=2 Nov 23 08:34:57 crc kubenswrapper[4681]: I1123 08:34:57.868237 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-96gdt" Nov 23 08:34:57 crc kubenswrapper[4681]: I1123 08:34:57.965252 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2fk2\" (UniqueName: \"kubernetes.io/projected/f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e-kube-api-access-l2fk2\") pod \"f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e\" (UID: \"f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e\") " Nov 23 08:34:57 crc kubenswrapper[4681]: I1123 08:34:57.965442 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e-utilities\") pod \"f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e\" (UID: \"f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e\") " Nov 23 08:34:57 crc kubenswrapper[4681]: I1123 08:34:57.965805 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e-catalog-content\") pod \"f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e\" (UID: \"f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e\") " Nov 23 08:34:57 crc kubenswrapper[4681]: I1123 08:34:57.966756 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e-utilities" (OuterVolumeSpecName: "utilities") pod "f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e" (UID: "f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:34:57 crc kubenswrapper[4681]: I1123 08:34:57.967333 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:34:57 crc kubenswrapper[4681]: I1123 08:34:57.976049 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e-kube-api-access-l2fk2" (OuterVolumeSpecName: "kube-api-access-l2fk2") pod "f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e" (UID: "f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e"). InnerVolumeSpecName "kube-api-access-l2fk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:34:57 crc kubenswrapper[4681]: I1123 08:34:57.983940 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e" (UID: "f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:34:58 crc kubenswrapper[4681]: I1123 08:34:58.068835 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:34:58 crc kubenswrapper[4681]: I1123 08:34:58.068870 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2fk2\" (UniqueName: \"kubernetes.io/projected/f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e-kube-api-access-l2fk2\") on node \"crc\" DevicePath \"\"" Nov 23 08:34:58 crc kubenswrapper[4681]: I1123 08:34:58.431415 4681 generic.go:334] "Generic (PLEG): container finished" podID="f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e" containerID="c4c784a388d29bc077e5e96b6b666fd6d00613ff56ae04e830daa18aa0e903f7" exitCode=0 Nov 23 08:34:58 crc kubenswrapper[4681]: I1123 08:34:58.431514 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-96gdt" event={"ID":"f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e","Type":"ContainerDied","Data":"c4c784a388d29bc077e5e96b6b666fd6d00613ff56ae04e830daa18aa0e903f7"} Nov 23 08:34:58 crc kubenswrapper[4681]: I1123 08:34:58.431579 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-96gdt" event={"ID":"f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e","Type":"ContainerDied","Data":"5d7ed462baaa4564e8714fe7543d143fb6dfb8fc3ebb8aa215a33e5b2cefdb9c"} Nov 23 08:34:58 crc kubenswrapper[4681]: I1123 08:34:58.431591 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-96gdt" Nov 23 08:34:58 crc kubenswrapper[4681]: I1123 08:34:58.431601 4681 scope.go:117] "RemoveContainer" containerID="c4c784a388d29bc077e5e96b6b666fd6d00613ff56ae04e830daa18aa0e903f7" Nov 23 08:34:58 crc kubenswrapper[4681]: I1123 08:34:58.455550 4681 scope.go:117] "RemoveContainer" containerID="6138b5edf4a0df92a51af3d39ad2c2b7780118b9a57b2081e21213d2877a4112" Nov 23 08:34:58 crc kubenswrapper[4681]: I1123 08:34:58.460859 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-96gdt"] Nov 23 08:34:58 crc kubenswrapper[4681]: I1123 08:34:58.471981 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-96gdt"] Nov 23 08:34:58 crc kubenswrapper[4681]: I1123 08:34:58.484570 4681 scope.go:117] "RemoveContainer" containerID="5a3926b272a5ebe85fe3c14a5ca7b3972f71cf5e59a70376d3aa1ee42e3c8437" Nov 23 08:34:58 crc kubenswrapper[4681]: I1123 08:34:58.516212 4681 scope.go:117] "RemoveContainer" containerID="c4c784a388d29bc077e5e96b6b666fd6d00613ff56ae04e830daa18aa0e903f7" Nov 23 08:34:58 crc kubenswrapper[4681]: E1123 08:34:58.516867 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4c784a388d29bc077e5e96b6b666fd6d00613ff56ae04e830daa18aa0e903f7\": container with ID starting with c4c784a388d29bc077e5e96b6b666fd6d00613ff56ae04e830daa18aa0e903f7 not found: ID does not exist" containerID="c4c784a388d29bc077e5e96b6b666fd6d00613ff56ae04e830daa18aa0e903f7" Nov 23 08:34:58 crc kubenswrapper[4681]: I1123 08:34:58.516939 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4c784a388d29bc077e5e96b6b666fd6d00613ff56ae04e830daa18aa0e903f7"} err="failed to get container status \"c4c784a388d29bc077e5e96b6b666fd6d00613ff56ae04e830daa18aa0e903f7\": rpc error: code = NotFound desc = could not find container \"c4c784a388d29bc077e5e96b6b666fd6d00613ff56ae04e830daa18aa0e903f7\": container with ID starting with c4c784a388d29bc077e5e96b6b666fd6d00613ff56ae04e830daa18aa0e903f7 not found: ID does not exist" Nov 23 08:34:58 crc kubenswrapper[4681]: I1123 08:34:58.516984 4681 scope.go:117] "RemoveContainer" containerID="6138b5edf4a0df92a51af3d39ad2c2b7780118b9a57b2081e21213d2877a4112" Nov 23 08:34:58 crc kubenswrapper[4681]: E1123 08:34:58.517435 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6138b5edf4a0df92a51af3d39ad2c2b7780118b9a57b2081e21213d2877a4112\": container with ID starting with 6138b5edf4a0df92a51af3d39ad2c2b7780118b9a57b2081e21213d2877a4112 not found: ID does not exist" containerID="6138b5edf4a0df92a51af3d39ad2c2b7780118b9a57b2081e21213d2877a4112" Nov 23 08:34:58 crc kubenswrapper[4681]: I1123 08:34:58.517501 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6138b5edf4a0df92a51af3d39ad2c2b7780118b9a57b2081e21213d2877a4112"} err="failed to get container status \"6138b5edf4a0df92a51af3d39ad2c2b7780118b9a57b2081e21213d2877a4112\": rpc error: code = NotFound desc = could not find container \"6138b5edf4a0df92a51af3d39ad2c2b7780118b9a57b2081e21213d2877a4112\": container with ID starting with 6138b5edf4a0df92a51af3d39ad2c2b7780118b9a57b2081e21213d2877a4112 not found: ID does not exist" Nov 23 08:34:58 crc kubenswrapper[4681]: I1123 08:34:58.517526 4681 scope.go:117] "RemoveContainer" 
containerID="5a3926b272a5ebe85fe3c14a5ca7b3972f71cf5e59a70376d3aa1ee42e3c8437" Nov 23 08:34:58 crc kubenswrapper[4681]: E1123 08:34:58.517942 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a3926b272a5ebe85fe3c14a5ca7b3972f71cf5e59a70376d3aa1ee42e3c8437\": container with ID starting with 5a3926b272a5ebe85fe3c14a5ca7b3972f71cf5e59a70376d3aa1ee42e3c8437 not found: ID does not exist" containerID="5a3926b272a5ebe85fe3c14a5ca7b3972f71cf5e59a70376d3aa1ee42e3c8437" Nov 23 08:34:58 crc kubenswrapper[4681]: I1123 08:34:58.517993 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a3926b272a5ebe85fe3c14a5ca7b3972f71cf5e59a70376d3aa1ee42e3c8437"} err="failed to get container status \"5a3926b272a5ebe85fe3c14a5ca7b3972f71cf5e59a70376d3aa1ee42e3c8437\": rpc error: code = NotFound desc = could not find container \"5a3926b272a5ebe85fe3c14a5ca7b3972f71cf5e59a70376d3aa1ee42e3c8437\": container with ID starting with 5a3926b272a5ebe85fe3c14a5ca7b3972f71cf5e59a70376d3aa1ee42e3c8437 not found: ID does not exist" Nov 23 08:34:59 crc kubenswrapper[4681]: I1123 08:34:59.261899 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e" path="/var/lib/kubelet/pods/f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e/volumes" Nov 23 08:35:12 crc kubenswrapper[4681]: I1123 08:35:12.251835 4681 scope.go:117] "RemoveContainer" containerID="34c49cc0a591c6d6df0c15f5eb83c1e233310e3a956b7aeb015f20e28800ec3f" Nov 23 08:35:12 crc kubenswrapper[4681]: E1123 08:35:12.252656 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:35:26 crc kubenswrapper[4681]: I1123 08:35:26.252603 4681 scope.go:117] "RemoveContainer" containerID="34c49cc0a591c6d6df0c15f5eb83c1e233310e3a956b7aeb015f20e28800ec3f" Nov 23 08:35:26 crc kubenswrapper[4681]: I1123 08:35:26.662672 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerStarted","Data":"3d5c43d1b685f76a1e40e499cd61d16928c594dd48a6e09e738eedc2d905cbf2"} Nov 23 08:36:40 crc kubenswrapper[4681]: I1123 08:36:40.443683 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jpm5l"] Nov 23 08:36:40 crc kubenswrapper[4681]: E1123 08:36:40.444880 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e" containerName="extract-utilities" Nov 23 08:36:40 crc kubenswrapper[4681]: I1123 08:36:40.444898 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e" containerName="extract-utilities" Nov 23 08:36:40 crc kubenswrapper[4681]: E1123 08:36:40.444918 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e" containerName="extract-content" Nov 23 08:36:40 crc kubenswrapper[4681]: I1123 08:36:40.444924 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e" containerName="extract-content" Nov 23 
08:36:40 crc kubenswrapper[4681]: E1123 08:36:40.444972 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e" containerName="registry-server" Nov 23 08:36:40 crc kubenswrapper[4681]: I1123 08:36:40.444979 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e" containerName="registry-server" Nov 23 08:36:40 crc kubenswrapper[4681]: I1123 08:36:40.445239 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9c9c5ee-8f33-46b0-8c62-a9b7211abc2e" containerName="registry-server" Nov 23 08:36:40 crc kubenswrapper[4681]: I1123 08:36:40.446881 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jpm5l" Nov 23 08:36:40 crc kubenswrapper[4681]: I1123 08:36:40.449015 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jpm5l"] Nov 23 08:36:40 crc kubenswrapper[4681]: I1123 08:36:40.549129 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25c79d4b-4918-44a7-a8f2-687d2d578409-catalog-content\") pod \"redhat-operators-jpm5l\" (UID: \"25c79d4b-4918-44a7-a8f2-687d2d578409\") " pod="openshift-marketplace/redhat-operators-jpm5l" Nov 23 08:36:40 crc kubenswrapper[4681]: I1123 08:36:40.549656 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25c79d4b-4918-44a7-a8f2-687d2d578409-utilities\") pod \"redhat-operators-jpm5l\" (UID: \"25c79d4b-4918-44a7-a8f2-687d2d578409\") " pod="openshift-marketplace/redhat-operators-jpm5l" Nov 23 08:36:40 crc kubenswrapper[4681]: I1123 08:36:40.549828 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvhlx\" (UniqueName: \"kubernetes.io/projected/25c79d4b-4918-44a7-a8f2-687d2d578409-kube-api-access-dvhlx\") pod \"redhat-operators-jpm5l\" (UID: \"25c79d4b-4918-44a7-a8f2-687d2d578409\") " pod="openshift-marketplace/redhat-operators-jpm5l" Nov 23 08:36:40 crc kubenswrapper[4681]: I1123 08:36:40.652313 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25c79d4b-4918-44a7-a8f2-687d2d578409-utilities\") pod \"redhat-operators-jpm5l\" (UID: \"25c79d4b-4918-44a7-a8f2-687d2d578409\") " pod="openshift-marketplace/redhat-operators-jpm5l" Nov 23 08:36:40 crc kubenswrapper[4681]: I1123 08:36:40.652421 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvhlx\" (UniqueName: \"kubernetes.io/projected/25c79d4b-4918-44a7-a8f2-687d2d578409-kube-api-access-dvhlx\") pod \"redhat-operators-jpm5l\" (UID: \"25c79d4b-4918-44a7-a8f2-687d2d578409\") " pod="openshift-marketplace/redhat-operators-jpm5l" Nov 23 08:36:40 crc kubenswrapper[4681]: I1123 08:36:40.652560 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25c79d4b-4918-44a7-a8f2-687d2d578409-catalog-content\") pod \"redhat-operators-jpm5l\" (UID: \"25c79d4b-4918-44a7-a8f2-687d2d578409\") " pod="openshift-marketplace/redhat-operators-jpm5l" Nov 23 08:36:40 crc kubenswrapper[4681]: I1123 08:36:40.652865 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/25c79d4b-4918-44a7-a8f2-687d2d578409-utilities\") pod \"redhat-operators-jpm5l\" (UID: \"25c79d4b-4918-44a7-a8f2-687d2d578409\") " pod="openshift-marketplace/redhat-operators-jpm5l" Nov 23 08:36:40 crc kubenswrapper[4681]: I1123 08:36:40.653017 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25c79d4b-4918-44a7-a8f2-687d2d578409-catalog-content\") pod \"redhat-operators-jpm5l\" (UID: \"25c79d4b-4918-44a7-a8f2-687d2d578409\") " pod="openshift-marketplace/redhat-operators-jpm5l" Nov 23 08:36:40 crc kubenswrapper[4681]: I1123 08:36:40.680563 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvhlx\" (UniqueName: \"kubernetes.io/projected/25c79d4b-4918-44a7-a8f2-687d2d578409-kube-api-access-dvhlx\") pod \"redhat-operators-jpm5l\" (UID: \"25c79d4b-4918-44a7-a8f2-687d2d578409\") " pod="openshift-marketplace/redhat-operators-jpm5l" Nov 23 08:36:40 crc kubenswrapper[4681]: I1123 08:36:40.769366 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jpm5l" Nov 23 08:36:41 crc kubenswrapper[4681]: I1123 08:36:41.225398 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jpm5l"] Nov 23 08:36:41 crc kubenswrapper[4681]: I1123 08:36:41.275854 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jpm5l" event={"ID":"25c79d4b-4918-44a7-a8f2-687d2d578409","Type":"ContainerStarted","Data":"e6763a3430e7d3c295f8521df11e51a6075cde2e9033259bdff4cfad46e74423"} Nov 23 08:36:42 crc kubenswrapper[4681]: I1123 08:36:42.290527 4681 generic.go:334] "Generic (PLEG): container finished" podID="25c79d4b-4918-44a7-a8f2-687d2d578409" containerID="c7abbbe38b51b2aee8b9ac4d5e165024f84526a7dd68cbdd6b847e18fcc3ef0f" exitCode=0 Nov 23 08:36:42 crc kubenswrapper[4681]: I1123 08:36:42.290913 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jpm5l" event={"ID":"25c79d4b-4918-44a7-a8f2-687d2d578409","Type":"ContainerDied","Data":"c7abbbe38b51b2aee8b9ac4d5e165024f84526a7dd68cbdd6b847e18fcc3ef0f"} Nov 23 08:36:43 crc kubenswrapper[4681]: I1123 08:36:43.312564 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jpm5l" event={"ID":"25c79d4b-4918-44a7-a8f2-687d2d578409","Type":"ContainerStarted","Data":"2fe73a789c62eca8a4ef188980e0ca52c2da9736c0329770c7b20092ca2daaec"} Nov 23 08:36:45 crc kubenswrapper[4681]: I1123 08:36:45.335715 4681 generic.go:334] "Generic (PLEG): container finished" podID="25c79d4b-4918-44a7-a8f2-687d2d578409" containerID="2fe73a789c62eca8a4ef188980e0ca52c2da9736c0329770c7b20092ca2daaec" exitCode=0 Nov 23 08:36:45 crc kubenswrapper[4681]: I1123 08:36:45.335784 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jpm5l" event={"ID":"25c79d4b-4918-44a7-a8f2-687d2d578409","Type":"ContainerDied","Data":"2fe73a789c62eca8a4ef188980e0ca52c2da9736c0329770c7b20092ca2daaec"} Nov 23 08:36:46 crc kubenswrapper[4681]: I1123 08:36:46.347222 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jpm5l" event={"ID":"25c79d4b-4918-44a7-a8f2-687d2d578409","Type":"ContainerStarted","Data":"cc67e3a49b1e93241768575a7716cea68b76281f9f9eedd527bef618d8a4cb5a"} Nov 23 08:36:46 crc kubenswrapper[4681]: I1123 08:36:46.369077 4681 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jpm5l" podStartSLOduration=2.719820299 podStartE2EDuration="6.369058966s" podCreationTimestamp="2025-11-23 08:36:40 +0000 UTC" firstStartedPulling="2025-11-23 08:36:42.293537941 +0000 UTC m=+6739.363047178" lastFinishedPulling="2025-11-23 08:36:45.942776607 +0000 UTC m=+6743.012285845" observedRunningTime="2025-11-23 08:36:46.366321788 +0000 UTC m=+6743.435831015" watchObservedRunningTime="2025-11-23 08:36:46.369058966 +0000 UTC m=+6743.438568204" Nov 23 08:36:50 crc kubenswrapper[4681]: I1123 08:36:50.770265 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jpm5l" Nov 23 08:36:50 crc kubenswrapper[4681]: I1123 08:36:50.770891 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jpm5l" Nov 23 08:36:51 crc kubenswrapper[4681]: I1123 08:36:51.812648 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jpm5l" podUID="25c79d4b-4918-44a7-a8f2-687d2d578409" containerName="registry-server" probeResult="failure" output=< Nov 23 08:36:51 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Nov 23 08:36:51 crc kubenswrapper[4681]: > Nov 23 08:37:00 crc kubenswrapper[4681]: I1123 08:37:00.812730 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jpm5l" Nov 23 08:37:00 crc kubenswrapper[4681]: I1123 08:37:00.857977 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jpm5l" Nov 23 08:37:01 crc kubenswrapper[4681]: I1123 08:37:01.048557 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jpm5l"] Nov 23 08:37:02 crc kubenswrapper[4681]: I1123 08:37:02.473887 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jpm5l" podUID="25c79d4b-4918-44a7-a8f2-687d2d578409" containerName="registry-server" containerID="cri-o://cc67e3a49b1e93241768575a7716cea68b76281f9f9eedd527bef618d8a4cb5a" gracePeriod=2 Nov 23 08:37:02 crc kubenswrapper[4681]: I1123 08:37:02.992742 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jpm5l" Nov 23 08:37:03 crc kubenswrapper[4681]: I1123 08:37:03.194283 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25c79d4b-4918-44a7-a8f2-687d2d578409-catalog-content\") pod \"25c79d4b-4918-44a7-a8f2-687d2d578409\" (UID: \"25c79d4b-4918-44a7-a8f2-687d2d578409\") " Nov 23 08:37:03 crc kubenswrapper[4681]: I1123 08:37:03.194667 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvhlx\" (UniqueName: \"kubernetes.io/projected/25c79d4b-4918-44a7-a8f2-687d2d578409-kube-api-access-dvhlx\") pod \"25c79d4b-4918-44a7-a8f2-687d2d578409\" (UID: \"25c79d4b-4918-44a7-a8f2-687d2d578409\") " Nov 23 08:37:03 crc kubenswrapper[4681]: I1123 08:37:03.195118 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25c79d4b-4918-44a7-a8f2-687d2d578409-utilities\") pod \"25c79d4b-4918-44a7-a8f2-687d2d578409\" (UID: \"25c79d4b-4918-44a7-a8f2-687d2d578409\") " Nov 23 08:37:03 crc kubenswrapper[4681]: I1123 08:37:03.195657 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25c79d4b-4918-44a7-a8f2-687d2d578409-utilities" (OuterVolumeSpecName: "utilities") pod "25c79d4b-4918-44a7-a8f2-687d2d578409" (UID: "25c79d4b-4918-44a7-a8f2-687d2d578409"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:37:03 crc kubenswrapper[4681]: I1123 08:37:03.196169 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25c79d4b-4918-44a7-a8f2-687d2d578409-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:37:03 crc kubenswrapper[4681]: I1123 08:37:03.203681 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25c79d4b-4918-44a7-a8f2-687d2d578409-kube-api-access-dvhlx" (OuterVolumeSpecName: "kube-api-access-dvhlx") pod "25c79d4b-4918-44a7-a8f2-687d2d578409" (UID: "25c79d4b-4918-44a7-a8f2-687d2d578409"). InnerVolumeSpecName "kube-api-access-dvhlx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:37:03 crc kubenswrapper[4681]: I1123 08:37:03.261342 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25c79d4b-4918-44a7-a8f2-687d2d578409-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "25c79d4b-4918-44a7-a8f2-687d2d578409" (UID: "25c79d4b-4918-44a7-a8f2-687d2d578409"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:37:03 crc kubenswrapper[4681]: I1123 08:37:03.298520 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25c79d4b-4918-44a7-a8f2-687d2d578409-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:37:03 crc kubenswrapper[4681]: I1123 08:37:03.298550 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvhlx\" (UniqueName: \"kubernetes.io/projected/25c79d4b-4918-44a7-a8f2-687d2d578409-kube-api-access-dvhlx\") on node \"crc\" DevicePath \"\"" Nov 23 08:37:03 crc kubenswrapper[4681]: I1123 08:37:03.486757 4681 generic.go:334] "Generic (PLEG): container finished" podID="25c79d4b-4918-44a7-a8f2-687d2d578409" containerID="cc67e3a49b1e93241768575a7716cea68b76281f9f9eedd527bef618d8a4cb5a" exitCode=0 Nov 23 08:37:03 crc kubenswrapper[4681]: I1123 08:37:03.486806 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jpm5l" event={"ID":"25c79d4b-4918-44a7-a8f2-687d2d578409","Type":"ContainerDied","Data":"cc67e3a49b1e93241768575a7716cea68b76281f9f9eedd527bef618d8a4cb5a"} Nov 23 08:37:03 crc kubenswrapper[4681]: I1123 08:37:03.486829 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jpm5l" Nov 23 08:37:03 crc kubenswrapper[4681]: I1123 08:37:03.486845 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jpm5l" event={"ID":"25c79d4b-4918-44a7-a8f2-687d2d578409","Type":"ContainerDied","Data":"e6763a3430e7d3c295f8521df11e51a6075cde2e9033259bdff4cfad46e74423"} Nov 23 08:37:03 crc kubenswrapper[4681]: I1123 08:37:03.486865 4681 scope.go:117] "RemoveContainer" containerID="cc67e3a49b1e93241768575a7716cea68b76281f9f9eedd527bef618d8a4cb5a" Nov 23 08:37:03 crc kubenswrapper[4681]: I1123 08:37:03.507150 4681 scope.go:117] "RemoveContainer" containerID="2fe73a789c62eca8a4ef188980e0ca52c2da9736c0329770c7b20092ca2daaec" Nov 23 08:37:03 crc kubenswrapper[4681]: I1123 08:37:03.519343 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jpm5l"] Nov 23 08:37:03 crc kubenswrapper[4681]: I1123 08:37:03.529137 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jpm5l"] Nov 23 08:37:03 crc kubenswrapper[4681]: I1123 08:37:03.530286 4681 scope.go:117] "RemoveContainer" containerID="c7abbbe38b51b2aee8b9ac4d5e165024f84526a7dd68cbdd6b847e18fcc3ef0f" Nov 23 08:37:03 crc kubenswrapper[4681]: I1123 08:37:03.561696 4681 scope.go:117] "RemoveContainer" containerID="cc67e3a49b1e93241768575a7716cea68b76281f9f9eedd527bef618d8a4cb5a" Nov 23 08:37:03 crc kubenswrapper[4681]: E1123 08:37:03.562068 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc67e3a49b1e93241768575a7716cea68b76281f9f9eedd527bef618d8a4cb5a\": container with ID starting with cc67e3a49b1e93241768575a7716cea68b76281f9f9eedd527bef618d8a4cb5a not found: ID does not exist" containerID="cc67e3a49b1e93241768575a7716cea68b76281f9f9eedd527bef618d8a4cb5a" Nov 23 08:37:03 crc kubenswrapper[4681]: I1123 08:37:03.562101 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc67e3a49b1e93241768575a7716cea68b76281f9f9eedd527bef618d8a4cb5a"} err="failed to get container status \"cc67e3a49b1e93241768575a7716cea68b76281f9f9eedd527bef618d8a4cb5a\": 
rpc error: code = NotFound desc = could not find container \"cc67e3a49b1e93241768575a7716cea68b76281f9f9eedd527bef618d8a4cb5a\": container with ID starting with cc67e3a49b1e93241768575a7716cea68b76281f9f9eedd527bef618d8a4cb5a not found: ID does not exist" Nov 23 08:37:03 crc kubenswrapper[4681]: I1123 08:37:03.562125 4681 scope.go:117] "RemoveContainer" containerID="2fe73a789c62eca8a4ef188980e0ca52c2da9736c0329770c7b20092ca2daaec" Nov 23 08:37:03 crc kubenswrapper[4681]: E1123 08:37:03.562576 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fe73a789c62eca8a4ef188980e0ca52c2da9736c0329770c7b20092ca2daaec\": container with ID starting with 2fe73a789c62eca8a4ef188980e0ca52c2da9736c0329770c7b20092ca2daaec not found: ID does not exist" containerID="2fe73a789c62eca8a4ef188980e0ca52c2da9736c0329770c7b20092ca2daaec" Nov 23 08:37:03 crc kubenswrapper[4681]: I1123 08:37:03.562602 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fe73a789c62eca8a4ef188980e0ca52c2da9736c0329770c7b20092ca2daaec"} err="failed to get container status \"2fe73a789c62eca8a4ef188980e0ca52c2da9736c0329770c7b20092ca2daaec\": rpc error: code = NotFound desc = could not find container \"2fe73a789c62eca8a4ef188980e0ca52c2da9736c0329770c7b20092ca2daaec\": container with ID starting with 2fe73a789c62eca8a4ef188980e0ca52c2da9736c0329770c7b20092ca2daaec not found: ID does not exist" Nov 23 08:37:03 crc kubenswrapper[4681]: I1123 08:37:03.562617 4681 scope.go:117] "RemoveContainer" containerID="c7abbbe38b51b2aee8b9ac4d5e165024f84526a7dd68cbdd6b847e18fcc3ef0f" Nov 23 08:37:03 crc kubenswrapper[4681]: E1123 08:37:03.563053 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7abbbe38b51b2aee8b9ac4d5e165024f84526a7dd68cbdd6b847e18fcc3ef0f\": container with ID starting with c7abbbe38b51b2aee8b9ac4d5e165024f84526a7dd68cbdd6b847e18fcc3ef0f not found: ID does not exist" containerID="c7abbbe38b51b2aee8b9ac4d5e165024f84526a7dd68cbdd6b847e18fcc3ef0f" Nov 23 08:37:03 crc kubenswrapper[4681]: I1123 08:37:03.563075 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7abbbe38b51b2aee8b9ac4d5e165024f84526a7dd68cbdd6b847e18fcc3ef0f"} err="failed to get container status \"c7abbbe38b51b2aee8b9ac4d5e165024f84526a7dd68cbdd6b847e18fcc3ef0f\": rpc error: code = NotFound desc = could not find container \"c7abbbe38b51b2aee8b9ac4d5e165024f84526a7dd68cbdd6b847e18fcc3ef0f\": container with ID starting with c7abbbe38b51b2aee8b9ac4d5e165024f84526a7dd68cbdd6b847e18fcc3ef0f not found: ID does not exist" Nov 23 08:37:05 crc kubenswrapper[4681]: I1123 08:37:05.262510 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25c79d4b-4918-44a7-a8f2-687d2d578409" path="/var/lib/kubelet/pods/25c79d4b-4918-44a7-a8f2-687d2d578409/volumes" Nov 23 08:37:35 crc kubenswrapper[4681]: I1123 08:37:35.460051 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lzxln"] Nov 23 08:37:35 crc kubenswrapper[4681]: E1123 08:37:35.468668 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25c79d4b-4918-44a7-a8f2-687d2d578409" containerName="extract-content" Nov 23 08:37:35 crc kubenswrapper[4681]: I1123 08:37:35.468694 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="25c79d4b-4918-44a7-a8f2-687d2d578409" containerName="extract-content" Nov 23 
08:37:35 crc kubenswrapper[4681]: E1123 08:37:35.468740 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25c79d4b-4918-44a7-a8f2-687d2d578409" containerName="extract-utilities" Nov 23 08:37:35 crc kubenswrapper[4681]: I1123 08:37:35.468748 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="25c79d4b-4918-44a7-a8f2-687d2d578409" containerName="extract-utilities" Nov 23 08:37:35 crc kubenswrapper[4681]: E1123 08:37:35.468759 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25c79d4b-4918-44a7-a8f2-687d2d578409" containerName="registry-server" Nov 23 08:37:35 crc kubenswrapper[4681]: I1123 08:37:35.468764 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="25c79d4b-4918-44a7-a8f2-687d2d578409" containerName="registry-server" Nov 23 08:37:35 crc kubenswrapper[4681]: I1123 08:37:35.469281 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="25c79d4b-4918-44a7-a8f2-687d2d578409" containerName="registry-server" Nov 23 08:37:35 crc kubenswrapper[4681]: I1123 08:37:35.478073 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lzxln" Nov 23 08:37:35 crc kubenswrapper[4681]: I1123 08:37:35.515946 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8w9p\" (UniqueName: \"kubernetes.io/projected/43b55d6b-627d-4677-951f-724d5df54786-kube-api-access-z8w9p\") pod \"certified-operators-lzxln\" (UID: \"43b55d6b-627d-4677-951f-724d5df54786\") " pod="openshift-marketplace/certified-operators-lzxln" Nov 23 08:37:35 crc kubenswrapper[4681]: I1123 08:37:35.516262 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43b55d6b-627d-4677-951f-724d5df54786-catalog-content\") pod \"certified-operators-lzxln\" (UID: \"43b55d6b-627d-4677-951f-724d5df54786\") " pod="openshift-marketplace/certified-operators-lzxln" Nov 23 08:37:35 crc kubenswrapper[4681]: I1123 08:37:35.519809 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43b55d6b-627d-4677-951f-724d5df54786-utilities\") pod \"certified-operators-lzxln\" (UID: \"43b55d6b-627d-4677-951f-724d5df54786\") " pod="openshift-marketplace/certified-operators-lzxln" Nov 23 08:37:35 crc kubenswrapper[4681]: I1123 08:37:35.547965 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lzxln"] Nov 23 08:37:35 crc kubenswrapper[4681]: I1123 08:37:35.621761 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43b55d6b-627d-4677-951f-724d5df54786-utilities\") pod \"certified-operators-lzxln\" (UID: \"43b55d6b-627d-4677-951f-724d5df54786\") " pod="openshift-marketplace/certified-operators-lzxln" Nov 23 08:37:35 crc kubenswrapper[4681]: I1123 08:37:35.621942 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8w9p\" (UniqueName: \"kubernetes.io/projected/43b55d6b-627d-4677-951f-724d5df54786-kube-api-access-z8w9p\") pod \"certified-operators-lzxln\" (UID: \"43b55d6b-627d-4677-951f-724d5df54786\") " pod="openshift-marketplace/certified-operators-lzxln" Nov 23 08:37:35 crc kubenswrapper[4681]: I1123 08:37:35.622085 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43b55d6b-627d-4677-951f-724d5df54786-catalog-content\") pod \"certified-operators-lzxln\" (UID: \"43b55d6b-627d-4677-951f-724d5df54786\") " pod="openshift-marketplace/certified-operators-lzxln" Nov 23 08:37:35 crc kubenswrapper[4681]: I1123 08:37:35.622341 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43b55d6b-627d-4677-951f-724d5df54786-utilities\") pod \"certified-operators-lzxln\" (UID: \"43b55d6b-627d-4677-951f-724d5df54786\") " pod="openshift-marketplace/certified-operators-lzxln" Nov 23 08:37:35 crc kubenswrapper[4681]: I1123 08:37:35.622542 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43b55d6b-627d-4677-951f-724d5df54786-catalog-content\") pod \"certified-operators-lzxln\" (UID: \"43b55d6b-627d-4677-951f-724d5df54786\") " pod="openshift-marketplace/certified-operators-lzxln" Nov 23 08:37:35 crc kubenswrapper[4681]: I1123 08:37:35.649267 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8w9p\" (UniqueName: \"kubernetes.io/projected/43b55d6b-627d-4677-951f-724d5df54786-kube-api-access-z8w9p\") pod \"certified-operators-lzxln\" (UID: \"43b55d6b-627d-4677-951f-724d5df54786\") " pod="openshift-marketplace/certified-operators-lzxln" Nov 23 08:37:35 crc kubenswrapper[4681]: I1123 08:37:35.806432 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lzxln" Nov 23 08:37:36 crc kubenswrapper[4681]: I1123 08:37:36.286202 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lzxln"] Nov 23 08:37:36 crc kubenswrapper[4681]: I1123 08:37:36.786115 4681 generic.go:334] "Generic (PLEG): container finished" podID="43b55d6b-627d-4677-951f-724d5df54786" containerID="17b59fa508530f9b5b32b4dcb18ed041f3280dc53e76845aba21f467dbcbbc12" exitCode=0 Nov 23 08:37:36 crc kubenswrapper[4681]: I1123 08:37:36.786222 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lzxln" event={"ID":"43b55d6b-627d-4677-951f-724d5df54786","Type":"ContainerDied","Data":"17b59fa508530f9b5b32b4dcb18ed041f3280dc53e76845aba21f467dbcbbc12"} Nov 23 08:37:36 crc kubenswrapper[4681]: I1123 08:37:36.786383 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lzxln" event={"ID":"43b55d6b-627d-4677-951f-724d5df54786","Type":"ContainerStarted","Data":"226aec4a1179a81715025fed1f5884becd0d9e7d41f62e27e39146212a2a56d8"} Nov 23 08:37:36 crc kubenswrapper[4681]: I1123 08:37:36.789772 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 08:37:37 crc kubenswrapper[4681]: I1123 08:37:37.800501 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lzxln" event={"ID":"43b55d6b-627d-4677-951f-724d5df54786","Type":"ContainerStarted","Data":"8835661a1a7515bc6b066b3579fdcda80a662d5c875c313e844d1d273efbef05"} Nov 23 08:37:38 crc kubenswrapper[4681]: I1123 08:37:38.814099 4681 generic.go:334] "Generic (PLEG): container finished" podID="43b55d6b-627d-4677-951f-724d5df54786" containerID="8835661a1a7515bc6b066b3579fdcda80a662d5c875c313e844d1d273efbef05" exitCode=0 Nov 23 08:37:38 crc kubenswrapper[4681]: I1123 08:37:38.814152 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-lzxln" event={"ID":"43b55d6b-627d-4677-951f-724d5df54786","Type":"ContainerDied","Data":"8835661a1a7515bc6b066b3579fdcda80a662d5c875c313e844d1d273efbef05"} Nov 23 08:37:39 crc kubenswrapper[4681]: I1123 08:37:39.826847 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lzxln" event={"ID":"43b55d6b-627d-4677-951f-724d5df54786","Type":"ContainerStarted","Data":"7af705ef96b839ec9820dfc8ca89b324286f5d5a7188d080da699f58a695eb17"} Nov 23 08:37:39 crc kubenswrapper[4681]: I1123 08:37:39.847923 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lzxln" podStartSLOduration=2.283686923 podStartE2EDuration="4.847900755s" podCreationTimestamp="2025-11-23 08:37:35 +0000 UTC" firstStartedPulling="2025-11-23 08:37:36.787807713 +0000 UTC m=+6793.857316950" lastFinishedPulling="2025-11-23 08:37:39.352021544 +0000 UTC m=+6796.421530782" observedRunningTime="2025-11-23 08:37:39.846627185 +0000 UTC m=+6796.916136413" watchObservedRunningTime="2025-11-23 08:37:39.847900755 +0000 UTC m=+6796.917409992" Nov 23 08:37:42 crc kubenswrapper[4681]: I1123 08:37:42.295834 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:37:42 crc kubenswrapper[4681]: I1123 08:37:42.296617 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:37:45 crc kubenswrapper[4681]: I1123 08:37:45.807291 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lzxln" Nov 23 08:37:45 crc kubenswrapper[4681]: I1123 08:37:45.808112 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lzxln" Nov 23 08:37:45 crc kubenswrapper[4681]: I1123 08:37:45.850418 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lzxln" Nov 23 08:37:45 crc kubenswrapper[4681]: I1123 08:37:45.927441 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lzxln" Nov 23 08:37:46 crc kubenswrapper[4681]: I1123 08:37:46.085547 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lzxln"] Nov 23 08:37:47 crc kubenswrapper[4681]: I1123 08:37:47.925844 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lzxln" podUID="43b55d6b-627d-4677-951f-724d5df54786" containerName="registry-server" containerID="cri-o://7af705ef96b839ec9820dfc8ca89b324286f5d5a7188d080da699f58a695eb17" gracePeriod=2 Nov 23 08:37:48 crc kubenswrapper[4681]: I1123 08:37:48.530834 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lzxln" Nov 23 08:37:48 crc kubenswrapper[4681]: I1123 08:37:48.667050 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8w9p\" (UniqueName: \"kubernetes.io/projected/43b55d6b-627d-4677-951f-724d5df54786-kube-api-access-z8w9p\") pod \"43b55d6b-627d-4677-951f-724d5df54786\" (UID: \"43b55d6b-627d-4677-951f-724d5df54786\") " Nov 23 08:37:48 crc kubenswrapper[4681]: I1123 08:37:48.667420 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43b55d6b-627d-4677-951f-724d5df54786-catalog-content\") pod \"43b55d6b-627d-4677-951f-724d5df54786\" (UID: \"43b55d6b-627d-4677-951f-724d5df54786\") " Nov 23 08:37:48 crc kubenswrapper[4681]: I1123 08:37:48.667724 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43b55d6b-627d-4677-951f-724d5df54786-utilities\") pod \"43b55d6b-627d-4677-951f-724d5df54786\" (UID: \"43b55d6b-627d-4677-951f-724d5df54786\") " Nov 23 08:37:48 crc kubenswrapper[4681]: I1123 08:37:48.668299 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43b55d6b-627d-4677-951f-724d5df54786-utilities" (OuterVolumeSpecName: "utilities") pod "43b55d6b-627d-4677-951f-724d5df54786" (UID: "43b55d6b-627d-4677-951f-724d5df54786"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:37:48 crc kubenswrapper[4681]: I1123 08:37:48.674939 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43b55d6b-627d-4677-951f-724d5df54786-kube-api-access-z8w9p" (OuterVolumeSpecName: "kube-api-access-z8w9p") pod "43b55d6b-627d-4677-951f-724d5df54786" (UID: "43b55d6b-627d-4677-951f-724d5df54786"). InnerVolumeSpecName "kube-api-access-z8w9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:37:48 crc kubenswrapper[4681]: I1123 08:37:48.702794 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43b55d6b-627d-4677-951f-724d5df54786-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "43b55d6b-627d-4677-951f-724d5df54786" (UID: "43b55d6b-627d-4677-951f-724d5df54786"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:37:48 crc kubenswrapper[4681]: I1123 08:37:48.770442 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8w9p\" (UniqueName: \"kubernetes.io/projected/43b55d6b-627d-4677-951f-724d5df54786-kube-api-access-z8w9p\") on node \"crc\" DevicePath \"\"" Nov 23 08:37:48 crc kubenswrapper[4681]: I1123 08:37:48.770486 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43b55d6b-627d-4677-951f-724d5df54786-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:37:48 crc kubenswrapper[4681]: I1123 08:37:48.770495 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43b55d6b-627d-4677-951f-724d5df54786-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:37:48 crc kubenswrapper[4681]: I1123 08:37:48.938062 4681 generic.go:334] "Generic (PLEG): container finished" podID="43b55d6b-627d-4677-951f-724d5df54786" containerID="7af705ef96b839ec9820dfc8ca89b324286f5d5a7188d080da699f58a695eb17" exitCode=0 Nov 23 08:37:48 crc kubenswrapper[4681]: I1123 08:37:48.938133 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lzxln" event={"ID":"43b55d6b-627d-4677-951f-724d5df54786","Type":"ContainerDied","Data":"7af705ef96b839ec9820dfc8ca89b324286f5d5a7188d080da699f58a695eb17"} Nov 23 08:37:48 crc kubenswrapper[4681]: I1123 08:37:48.938177 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lzxln" Nov 23 08:37:48 crc kubenswrapper[4681]: I1123 08:37:48.938210 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lzxln" event={"ID":"43b55d6b-627d-4677-951f-724d5df54786","Type":"ContainerDied","Data":"226aec4a1179a81715025fed1f5884becd0d9e7d41f62e27e39146212a2a56d8"} Nov 23 08:37:48 crc kubenswrapper[4681]: I1123 08:37:48.938236 4681 scope.go:117] "RemoveContainer" containerID="7af705ef96b839ec9820dfc8ca89b324286f5d5a7188d080da699f58a695eb17" Nov 23 08:37:48 crc kubenswrapper[4681]: I1123 08:37:48.971752 4681 scope.go:117] "RemoveContainer" containerID="8835661a1a7515bc6b066b3579fdcda80a662d5c875c313e844d1d273efbef05" Nov 23 08:37:48 crc kubenswrapper[4681]: I1123 08:37:48.973267 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lzxln"] Nov 23 08:37:48 crc kubenswrapper[4681]: I1123 08:37:48.982300 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lzxln"] Nov 23 08:37:48 crc kubenswrapper[4681]: I1123 08:37:48.996661 4681 scope.go:117] "RemoveContainer" containerID="17b59fa508530f9b5b32b4dcb18ed041f3280dc53e76845aba21f467dbcbbc12" Nov 23 08:37:49 crc kubenswrapper[4681]: I1123 08:37:49.023440 4681 scope.go:117] "RemoveContainer" containerID="7af705ef96b839ec9820dfc8ca89b324286f5d5a7188d080da699f58a695eb17" Nov 23 08:37:49 crc kubenswrapper[4681]: E1123 08:37:49.023940 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7af705ef96b839ec9820dfc8ca89b324286f5d5a7188d080da699f58a695eb17\": container with ID starting with 7af705ef96b839ec9820dfc8ca89b324286f5d5a7188d080da699f58a695eb17 not found: ID does not exist" containerID="7af705ef96b839ec9820dfc8ca89b324286f5d5a7188d080da699f58a695eb17" Nov 23 08:37:49 crc kubenswrapper[4681]: I1123 08:37:49.023983 
4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7af705ef96b839ec9820dfc8ca89b324286f5d5a7188d080da699f58a695eb17"} err="failed to get container status \"7af705ef96b839ec9820dfc8ca89b324286f5d5a7188d080da699f58a695eb17\": rpc error: code = NotFound desc = could not find container \"7af705ef96b839ec9820dfc8ca89b324286f5d5a7188d080da699f58a695eb17\": container with ID starting with 7af705ef96b839ec9820dfc8ca89b324286f5d5a7188d080da699f58a695eb17 not found: ID does not exist" Nov 23 08:37:49 crc kubenswrapper[4681]: I1123 08:37:49.024010 4681 scope.go:117] "RemoveContainer" containerID="8835661a1a7515bc6b066b3579fdcda80a662d5c875c313e844d1d273efbef05" Nov 23 08:37:49 crc kubenswrapper[4681]: E1123 08:37:49.024297 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8835661a1a7515bc6b066b3579fdcda80a662d5c875c313e844d1d273efbef05\": container with ID starting with 8835661a1a7515bc6b066b3579fdcda80a662d5c875c313e844d1d273efbef05 not found: ID does not exist" containerID="8835661a1a7515bc6b066b3579fdcda80a662d5c875c313e844d1d273efbef05" Nov 23 08:37:49 crc kubenswrapper[4681]: I1123 08:37:49.024381 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8835661a1a7515bc6b066b3579fdcda80a662d5c875c313e844d1d273efbef05"} err="failed to get container status \"8835661a1a7515bc6b066b3579fdcda80a662d5c875c313e844d1d273efbef05\": rpc error: code = NotFound desc = could not find container \"8835661a1a7515bc6b066b3579fdcda80a662d5c875c313e844d1d273efbef05\": container with ID starting with 8835661a1a7515bc6b066b3579fdcda80a662d5c875c313e844d1d273efbef05 not found: ID does not exist" Nov 23 08:37:49 crc kubenswrapper[4681]: I1123 08:37:49.024473 4681 scope.go:117] "RemoveContainer" containerID="17b59fa508530f9b5b32b4dcb18ed041f3280dc53e76845aba21f467dbcbbc12" Nov 23 08:37:49 crc kubenswrapper[4681]: E1123 08:37:49.024761 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17b59fa508530f9b5b32b4dcb18ed041f3280dc53e76845aba21f467dbcbbc12\": container with ID starting with 17b59fa508530f9b5b32b4dcb18ed041f3280dc53e76845aba21f467dbcbbc12 not found: ID does not exist" containerID="17b59fa508530f9b5b32b4dcb18ed041f3280dc53e76845aba21f467dbcbbc12" Nov 23 08:37:49 crc kubenswrapper[4681]: I1123 08:37:49.024788 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17b59fa508530f9b5b32b4dcb18ed041f3280dc53e76845aba21f467dbcbbc12"} err="failed to get container status \"17b59fa508530f9b5b32b4dcb18ed041f3280dc53e76845aba21f467dbcbbc12\": rpc error: code = NotFound desc = could not find container \"17b59fa508530f9b5b32b4dcb18ed041f3280dc53e76845aba21f467dbcbbc12\": container with ID starting with 17b59fa508530f9b5b32b4dcb18ed041f3280dc53e76845aba21f467dbcbbc12 not found: ID does not exist" Nov 23 08:37:49 crc kubenswrapper[4681]: I1123 08:37:49.262493 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43b55d6b-627d-4677-951f-724d5df54786" path="/var/lib/kubelet/pods/43b55d6b-627d-4677-951f-724d5df54786/volumes" Nov 23 08:38:12 crc kubenswrapper[4681]: I1123 08:38:12.296272 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:38:12 crc kubenswrapper[4681]: I1123 08:38:12.296975 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:38:17 crc kubenswrapper[4681]: I1123 08:38:17.818739 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-b5c677f69-bv86z"] Nov 23 08:38:17 crc kubenswrapper[4681]: E1123 08:38:17.820544 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43b55d6b-627d-4677-951f-724d5df54786" containerName="extract-utilities" Nov 23 08:38:17 crc kubenswrapper[4681]: I1123 08:38:17.820640 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="43b55d6b-627d-4677-951f-724d5df54786" containerName="extract-utilities" Nov 23 08:38:17 crc kubenswrapper[4681]: E1123 08:38:17.820716 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43b55d6b-627d-4677-951f-724d5df54786" containerName="registry-server" Nov 23 08:38:17 crc kubenswrapper[4681]: I1123 08:38:17.820771 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="43b55d6b-627d-4677-951f-724d5df54786" containerName="registry-server" Nov 23 08:38:17 crc kubenswrapper[4681]: E1123 08:38:17.820829 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43b55d6b-627d-4677-951f-724d5df54786" containerName="extract-content" Nov 23 08:38:17 crc kubenswrapper[4681]: I1123 08:38:17.820883 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="43b55d6b-627d-4677-951f-724d5df54786" containerName="extract-content" Nov 23 08:38:17 crc kubenswrapper[4681]: I1123 08:38:17.821144 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="43b55d6b-627d-4677-951f-724d5df54786" containerName="registry-server" Nov 23 08:38:17 crc kubenswrapper[4681]: I1123 08:38:17.822168 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-b5c677f69-bv86z" Nov 23 08:38:17 crc kubenswrapper[4681]: I1123 08:38:17.887318 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b5c677f69-bv86z"] Nov 23 08:38:18 crc kubenswrapper[4681]: I1123 08:38:18.009816 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4e7d50ec-8e67-493b-85b1-9fb5e72aef69-config\") pod \"neutron-b5c677f69-bv86z\" (UID: \"4e7d50ec-8e67-493b-85b1-9fb5e72aef69\") " pod="openstack/neutron-b5c677f69-bv86z" Nov 23 08:38:18 crc kubenswrapper[4681]: I1123 08:38:18.010736 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e7d50ec-8e67-493b-85b1-9fb5e72aef69-combined-ca-bundle\") pod \"neutron-b5c677f69-bv86z\" (UID: \"4e7d50ec-8e67-493b-85b1-9fb5e72aef69\") " pod="openstack/neutron-b5c677f69-bv86z" Nov 23 08:38:18 crc kubenswrapper[4681]: I1123 08:38:18.010915 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4e7d50ec-8e67-493b-85b1-9fb5e72aef69-httpd-config\") pod \"neutron-b5c677f69-bv86z\" (UID: \"4e7d50ec-8e67-493b-85b1-9fb5e72aef69\") " pod="openstack/neutron-b5c677f69-bv86z" Nov 23 08:38:18 crc kubenswrapper[4681]: I1123 08:38:18.011062 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e7d50ec-8e67-493b-85b1-9fb5e72aef69-public-tls-certs\") pod \"neutron-b5c677f69-bv86z\" (UID: \"4e7d50ec-8e67-493b-85b1-9fb5e72aef69\") " pod="openstack/neutron-b5c677f69-bv86z" Nov 23 08:38:18 crc kubenswrapper[4681]: I1123 08:38:18.011244 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e7d50ec-8e67-493b-85b1-9fb5e72aef69-internal-tls-certs\") pod \"neutron-b5c677f69-bv86z\" (UID: \"4e7d50ec-8e67-493b-85b1-9fb5e72aef69\") " pod="openstack/neutron-b5c677f69-bv86z" Nov 23 08:38:18 crc kubenswrapper[4681]: I1123 08:38:18.011442 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzfz5\" (UniqueName: \"kubernetes.io/projected/4e7d50ec-8e67-493b-85b1-9fb5e72aef69-kube-api-access-bzfz5\") pod \"neutron-b5c677f69-bv86z\" (UID: \"4e7d50ec-8e67-493b-85b1-9fb5e72aef69\") " pod="openstack/neutron-b5c677f69-bv86z" Nov 23 08:38:18 crc kubenswrapper[4681]: I1123 08:38:18.011561 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e7d50ec-8e67-493b-85b1-9fb5e72aef69-ovndb-tls-certs\") pod \"neutron-b5c677f69-bv86z\" (UID: \"4e7d50ec-8e67-493b-85b1-9fb5e72aef69\") " pod="openstack/neutron-b5c677f69-bv86z" Nov 23 08:38:18 crc kubenswrapper[4681]: I1123 08:38:18.113882 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e7d50ec-8e67-493b-85b1-9fb5e72aef69-internal-tls-certs\") pod \"neutron-b5c677f69-bv86z\" (UID: \"4e7d50ec-8e67-493b-85b1-9fb5e72aef69\") " pod="openstack/neutron-b5c677f69-bv86z" Nov 23 08:38:18 crc kubenswrapper[4681]: I1123 08:38:18.113989 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-bzfz5\" (UniqueName: \"kubernetes.io/projected/4e7d50ec-8e67-493b-85b1-9fb5e72aef69-kube-api-access-bzfz5\") pod \"neutron-b5c677f69-bv86z\" (UID: \"4e7d50ec-8e67-493b-85b1-9fb5e72aef69\") " pod="openstack/neutron-b5c677f69-bv86z" Nov 23 08:38:18 crc kubenswrapper[4681]: I1123 08:38:18.114017 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e7d50ec-8e67-493b-85b1-9fb5e72aef69-ovndb-tls-certs\") pod \"neutron-b5c677f69-bv86z\" (UID: \"4e7d50ec-8e67-493b-85b1-9fb5e72aef69\") " pod="openstack/neutron-b5c677f69-bv86z" Nov 23 08:38:18 crc kubenswrapper[4681]: I1123 08:38:18.114068 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4e7d50ec-8e67-493b-85b1-9fb5e72aef69-config\") pod \"neutron-b5c677f69-bv86z\" (UID: \"4e7d50ec-8e67-493b-85b1-9fb5e72aef69\") " pod="openstack/neutron-b5c677f69-bv86z" Nov 23 08:38:18 crc kubenswrapper[4681]: I1123 08:38:18.114087 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e7d50ec-8e67-493b-85b1-9fb5e72aef69-combined-ca-bundle\") pod \"neutron-b5c677f69-bv86z\" (UID: \"4e7d50ec-8e67-493b-85b1-9fb5e72aef69\") " pod="openstack/neutron-b5c677f69-bv86z" Nov 23 08:38:18 crc kubenswrapper[4681]: I1123 08:38:18.114121 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4e7d50ec-8e67-493b-85b1-9fb5e72aef69-httpd-config\") pod \"neutron-b5c677f69-bv86z\" (UID: \"4e7d50ec-8e67-493b-85b1-9fb5e72aef69\") " pod="openstack/neutron-b5c677f69-bv86z" Nov 23 08:38:18 crc kubenswrapper[4681]: I1123 08:38:18.114149 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e7d50ec-8e67-493b-85b1-9fb5e72aef69-public-tls-certs\") pod \"neutron-b5c677f69-bv86z\" (UID: \"4e7d50ec-8e67-493b-85b1-9fb5e72aef69\") " pod="openstack/neutron-b5c677f69-bv86z" Nov 23 08:38:18 crc kubenswrapper[4681]: I1123 08:38:18.124378 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e7d50ec-8e67-493b-85b1-9fb5e72aef69-public-tls-certs\") pod \"neutron-b5c677f69-bv86z\" (UID: \"4e7d50ec-8e67-493b-85b1-9fb5e72aef69\") " pod="openstack/neutron-b5c677f69-bv86z" Nov 23 08:38:18 crc kubenswrapper[4681]: I1123 08:38:18.125672 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e7d50ec-8e67-493b-85b1-9fb5e72aef69-internal-tls-certs\") pod \"neutron-b5c677f69-bv86z\" (UID: \"4e7d50ec-8e67-493b-85b1-9fb5e72aef69\") " pod="openstack/neutron-b5c677f69-bv86z" Nov 23 08:38:18 crc kubenswrapper[4681]: I1123 08:38:18.127133 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4e7d50ec-8e67-493b-85b1-9fb5e72aef69-httpd-config\") pod \"neutron-b5c677f69-bv86z\" (UID: \"4e7d50ec-8e67-493b-85b1-9fb5e72aef69\") " pod="openstack/neutron-b5c677f69-bv86z" Nov 23 08:38:18 crc kubenswrapper[4681]: I1123 08:38:18.128073 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e7d50ec-8e67-493b-85b1-9fb5e72aef69-ovndb-tls-certs\") pod \"neutron-b5c677f69-bv86z\" (UID: 
\"4e7d50ec-8e67-493b-85b1-9fb5e72aef69\") " pod="openstack/neutron-b5c677f69-bv86z" Nov 23 08:38:18 crc kubenswrapper[4681]: I1123 08:38:18.130905 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4e7d50ec-8e67-493b-85b1-9fb5e72aef69-config\") pod \"neutron-b5c677f69-bv86z\" (UID: \"4e7d50ec-8e67-493b-85b1-9fb5e72aef69\") " pod="openstack/neutron-b5c677f69-bv86z" Nov 23 08:38:18 crc kubenswrapper[4681]: I1123 08:38:18.137497 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e7d50ec-8e67-493b-85b1-9fb5e72aef69-combined-ca-bundle\") pod \"neutron-b5c677f69-bv86z\" (UID: \"4e7d50ec-8e67-493b-85b1-9fb5e72aef69\") " pod="openstack/neutron-b5c677f69-bv86z" Nov 23 08:38:18 crc kubenswrapper[4681]: I1123 08:38:18.138268 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzfz5\" (UniqueName: \"kubernetes.io/projected/4e7d50ec-8e67-493b-85b1-9fb5e72aef69-kube-api-access-bzfz5\") pod \"neutron-b5c677f69-bv86z\" (UID: \"4e7d50ec-8e67-493b-85b1-9fb5e72aef69\") " pod="openstack/neutron-b5c677f69-bv86z" Nov 23 08:38:18 crc kubenswrapper[4681]: I1123 08:38:18.140726 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b5c677f69-bv86z" Nov 23 08:38:18 crc kubenswrapper[4681]: I1123 08:38:18.835849 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b5c677f69-bv86z"] Nov 23 08:38:19 crc kubenswrapper[4681]: I1123 08:38:19.197015 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b5c677f69-bv86z" event={"ID":"4e7d50ec-8e67-493b-85b1-9fb5e72aef69","Type":"ContainerStarted","Data":"f99ca2051c5dc8b1ca4b2de2ca5510cab6793f1437414427b5b29946208b1d75"} Nov 23 08:38:19 crc kubenswrapper[4681]: I1123 08:38:19.197295 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b5c677f69-bv86z" event={"ID":"4e7d50ec-8e67-493b-85b1-9fb5e72aef69","Type":"ContainerStarted","Data":"d5aefc3a0da29761583c125a0402d5dd5b7b6bf1a9a2d53233f6023244f5f150"} Nov 23 08:38:20 crc kubenswrapper[4681]: I1123 08:38:20.207923 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b5c677f69-bv86z" event={"ID":"4e7d50ec-8e67-493b-85b1-9fb5e72aef69","Type":"ContainerStarted","Data":"4b5643eeca9ae9a75147bc342ccb98c780d9d0d4ee40278eb52246cf7a898a6d"} Nov 23 08:38:20 crc kubenswrapper[4681]: I1123 08:38:20.208451 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-b5c677f69-bv86z" Nov 23 08:38:20 crc kubenswrapper[4681]: I1123 08:38:20.237308 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-b5c677f69-bv86z" podStartSLOduration=3.237281724 podStartE2EDuration="3.237281724s" podCreationTimestamp="2025-11-23 08:38:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:38:20.228345691 +0000 UTC m=+6837.297854918" watchObservedRunningTime="2025-11-23 08:38:20.237281724 +0000 UTC m=+6837.306790961" Nov 23 08:38:42 crc kubenswrapper[4681]: I1123 08:38:42.295741 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Nov 23 08:38:42 crc kubenswrapper[4681]: I1123 08:38:42.296478 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:38:42 crc kubenswrapper[4681]: I1123 08:38:42.296533 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" Nov 23 08:38:42 crc kubenswrapper[4681]: I1123 08:38:42.297711 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3d5c43d1b685f76a1e40e499cd61d16928c594dd48a6e09e738eedc2d905cbf2"} pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 08:38:42 crc kubenswrapper[4681]: I1123 08:38:42.297764 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" containerID="cri-o://3d5c43d1b685f76a1e40e499cd61d16928c594dd48a6e09e738eedc2d905cbf2" gracePeriod=600 Nov 23 08:38:43 crc kubenswrapper[4681]: I1123 08:38:43.444551 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerDied","Data":"3d5c43d1b685f76a1e40e499cd61d16928c594dd48a6e09e738eedc2d905cbf2"} Nov 23 08:38:43 crc kubenswrapper[4681]: I1123 08:38:43.445244 4681 scope.go:117] "RemoveContainer" containerID="34c49cc0a591c6d6df0c15f5eb83c1e233310e3a956b7aeb015f20e28800ec3f" Nov 23 08:38:43 crc kubenswrapper[4681]: I1123 08:38:43.444555 4681 generic.go:334] "Generic (PLEG): container finished" podID="539dc58c-e752-43c8-bdef-af87528b76f3" containerID="3d5c43d1b685f76a1e40e499cd61d16928c594dd48a6e09e738eedc2d905cbf2" exitCode=0 Nov 23 08:38:43 crc kubenswrapper[4681]: I1123 08:38:43.446012 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerStarted","Data":"d7798051d6d66026d4ff58045065aef57e15174285da5402190d39dcbab9b6d1"} Nov 23 08:38:48 crc kubenswrapper[4681]: I1123 08:38:48.154376 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-b5c677f69-bv86z" Nov 23 08:38:48 crc kubenswrapper[4681]: I1123 08:38:48.296855 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8446865c9c-85t9f"] Nov 23 08:38:48 crc kubenswrapper[4681]: I1123 08:38:48.311562 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8446865c9c-85t9f" podUID="6a2cd6a8-e146-4a72-a522-debbf8b61731" containerName="neutron-httpd" containerID="cri-o://5be9ab07f26650aafc16d69633ae0efc00959486b7a51310dd16c47928854a8e" gracePeriod=30 Nov 23 08:38:48 crc kubenswrapper[4681]: I1123 08:38:48.314127 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8446865c9c-85t9f" podUID="6a2cd6a8-e146-4a72-a522-debbf8b61731" containerName="neutron-api" 
containerID="cri-o://d2ce3bcb4a92e86f827d4c5d87ff1fed790729428a181a696cdb2bac550c8b21" gracePeriod=30 Nov 23 08:38:49 crc kubenswrapper[4681]: I1123 08:38:49.505953 4681 generic.go:334] "Generic (PLEG): container finished" podID="6a2cd6a8-e146-4a72-a522-debbf8b61731" containerID="5be9ab07f26650aafc16d69633ae0efc00959486b7a51310dd16c47928854a8e" exitCode=0 Nov 23 08:38:49 crc kubenswrapper[4681]: I1123 08:38:49.506269 4681 generic.go:334] "Generic (PLEG): container finished" podID="6a2cd6a8-e146-4a72-a522-debbf8b61731" containerID="d2ce3bcb4a92e86f827d4c5d87ff1fed790729428a181a696cdb2bac550c8b21" exitCode=0 Nov 23 08:38:49 crc kubenswrapper[4681]: I1123 08:38:49.506030 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8446865c9c-85t9f" event={"ID":"6a2cd6a8-e146-4a72-a522-debbf8b61731","Type":"ContainerDied","Data":"5be9ab07f26650aafc16d69633ae0efc00959486b7a51310dd16c47928854a8e"} Nov 23 08:38:49 crc kubenswrapper[4681]: I1123 08:38:49.506313 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8446865c9c-85t9f" event={"ID":"6a2cd6a8-e146-4a72-a522-debbf8b61731","Type":"ContainerDied","Data":"d2ce3bcb4a92e86f827d4c5d87ff1fed790729428a181a696cdb2bac550c8b21"} Nov 23 08:38:49 crc kubenswrapper[4681]: I1123 08:38:49.636828 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8446865c9c-85t9f" Nov 23 08:38:49 crc kubenswrapper[4681]: I1123 08:38:49.759973 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a2cd6a8-e146-4a72-a522-debbf8b61731-combined-ca-bundle\") pod \"6a2cd6a8-e146-4a72-a522-debbf8b61731\" (UID: \"6a2cd6a8-e146-4a72-a522-debbf8b61731\") " Nov 23 08:38:49 crc kubenswrapper[4681]: I1123 08:38:49.760728 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6a2cd6a8-e146-4a72-a522-debbf8b61731-config\") pod \"6a2cd6a8-e146-4a72-a522-debbf8b61731\" (UID: \"6a2cd6a8-e146-4a72-a522-debbf8b61731\") " Nov 23 08:38:49 crc kubenswrapper[4681]: I1123 08:38:49.760927 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a2cd6a8-e146-4a72-a522-debbf8b61731-ovndb-tls-certs\") pod \"6a2cd6a8-e146-4a72-a522-debbf8b61731\" (UID: \"6a2cd6a8-e146-4a72-a522-debbf8b61731\") " Nov 23 08:38:49 crc kubenswrapper[4681]: I1123 08:38:49.760957 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smdls\" (UniqueName: \"kubernetes.io/projected/6a2cd6a8-e146-4a72-a522-debbf8b61731-kube-api-access-smdls\") pod \"6a2cd6a8-e146-4a72-a522-debbf8b61731\" (UID: \"6a2cd6a8-e146-4a72-a522-debbf8b61731\") " Nov 23 08:38:49 crc kubenswrapper[4681]: I1123 08:38:49.761156 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a2cd6a8-e146-4a72-a522-debbf8b61731-public-tls-certs\") pod \"6a2cd6a8-e146-4a72-a522-debbf8b61731\" (UID: \"6a2cd6a8-e146-4a72-a522-debbf8b61731\") " Nov 23 08:38:49 crc kubenswrapper[4681]: I1123 08:38:49.761428 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a2cd6a8-e146-4a72-a522-debbf8b61731-internal-tls-certs\") pod \"6a2cd6a8-e146-4a72-a522-debbf8b61731\" (UID: \"6a2cd6a8-e146-4a72-a522-debbf8b61731\") " Nov 23 
08:38:49 crc kubenswrapper[4681]: I1123 08:38:49.761452 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6a2cd6a8-e146-4a72-a522-debbf8b61731-httpd-config\") pod \"6a2cd6a8-e146-4a72-a522-debbf8b61731\" (UID: \"6a2cd6a8-e146-4a72-a522-debbf8b61731\") " Nov 23 08:38:49 crc kubenswrapper[4681]: I1123 08:38:49.769851 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a2cd6a8-e146-4a72-a522-debbf8b61731-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "6a2cd6a8-e146-4a72-a522-debbf8b61731" (UID: "6a2cd6a8-e146-4a72-a522-debbf8b61731"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:38:49 crc kubenswrapper[4681]: I1123 08:38:49.772420 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a2cd6a8-e146-4a72-a522-debbf8b61731-kube-api-access-smdls" (OuterVolumeSpecName: "kube-api-access-smdls") pod "6a2cd6a8-e146-4a72-a522-debbf8b61731" (UID: "6a2cd6a8-e146-4a72-a522-debbf8b61731"). InnerVolumeSpecName "kube-api-access-smdls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:38:49 crc kubenswrapper[4681]: I1123 08:38:49.803580 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a2cd6a8-e146-4a72-a522-debbf8b61731-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6a2cd6a8-e146-4a72-a522-debbf8b61731" (UID: "6a2cd6a8-e146-4a72-a522-debbf8b61731"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:38:49 crc kubenswrapper[4681]: I1123 08:38:49.808157 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a2cd6a8-e146-4a72-a522-debbf8b61731-config" (OuterVolumeSpecName: "config") pod "6a2cd6a8-e146-4a72-a522-debbf8b61731" (UID: "6a2cd6a8-e146-4a72-a522-debbf8b61731"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:38:49 crc kubenswrapper[4681]: I1123 08:38:49.823955 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a2cd6a8-e146-4a72-a522-debbf8b61731-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a2cd6a8-e146-4a72-a522-debbf8b61731" (UID: "6a2cd6a8-e146-4a72-a522-debbf8b61731"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:38:49 crc kubenswrapper[4681]: I1123 08:38:49.829470 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a2cd6a8-e146-4a72-a522-debbf8b61731-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "6a2cd6a8-e146-4a72-a522-debbf8b61731" (UID: "6a2cd6a8-e146-4a72-a522-debbf8b61731"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:38:49 crc kubenswrapper[4681]: I1123 08:38:49.836034 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a2cd6a8-e146-4a72-a522-debbf8b61731-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6a2cd6a8-e146-4a72-a522-debbf8b61731" (UID: "6a2cd6a8-e146-4a72-a522-debbf8b61731"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:38:49 crc kubenswrapper[4681]: I1123 08:38:49.865392 4681 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a2cd6a8-e146-4a72-a522-debbf8b61731-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 08:38:49 crc kubenswrapper[4681]: I1123 08:38:49.865424 4681 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a2cd6a8-e146-4a72-a522-debbf8b61731-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 08:38:49 crc kubenswrapper[4681]: I1123 08:38:49.865436 4681 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6a2cd6a8-e146-4a72-a522-debbf8b61731-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:38:49 crc kubenswrapper[4681]: I1123 08:38:49.865445 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a2cd6a8-e146-4a72-a522-debbf8b61731-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:38:49 crc kubenswrapper[4681]: I1123 08:38:49.865472 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/6a2cd6a8-e146-4a72-a522-debbf8b61731-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:38:49 crc kubenswrapper[4681]: I1123 08:38:49.865483 4681 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a2cd6a8-e146-4a72-a522-debbf8b61731-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 08:38:49 crc kubenswrapper[4681]: I1123 08:38:49.865492 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smdls\" (UniqueName: \"kubernetes.io/projected/6a2cd6a8-e146-4a72-a522-debbf8b61731-kube-api-access-smdls\") on node \"crc\" DevicePath \"\"" Nov 23 08:38:50 crc kubenswrapper[4681]: I1123 08:38:50.519671 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8446865c9c-85t9f" event={"ID":"6a2cd6a8-e146-4a72-a522-debbf8b61731","Type":"ContainerDied","Data":"26df9bbe6e996fd9339afb6fa3bb58b05803e562e384a19e42fd28c7aee89afc"} Nov 23 08:38:50 crc kubenswrapper[4681]: I1123 08:38:50.520050 4681 scope.go:117] "RemoveContainer" containerID="5be9ab07f26650aafc16d69633ae0efc00959486b7a51310dd16c47928854a8e" Nov 23 08:38:50 crc kubenswrapper[4681]: I1123 08:38:50.520205 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8446865c9c-85t9f" Nov 23 08:38:50 crc kubenswrapper[4681]: I1123 08:38:50.556408 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8446865c9c-85t9f"] Nov 23 08:38:50 crc kubenswrapper[4681]: I1123 08:38:50.563037 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8446865c9c-85t9f"] Nov 23 08:38:50 crc kubenswrapper[4681]: I1123 08:38:50.563335 4681 scope.go:117] "RemoveContainer" containerID="d2ce3bcb4a92e86f827d4c5d87ff1fed790729428a181a696cdb2bac550c8b21" Nov 23 08:38:51 crc kubenswrapper[4681]: I1123 08:38:51.267797 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a2cd6a8-e146-4a72-a522-debbf8b61731" path="/var/lib/kubelet/pods/6a2cd6a8-e146-4a72-a522-debbf8b61731/volumes" Nov 23 08:40:42 crc kubenswrapper[4681]: I1123 08:40:42.295417 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:40:42 crc kubenswrapper[4681]: I1123 08:40:42.296049 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:41:12 crc kubenswrapper[4681]: I1123 08:41:12.295314 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:41:12 crc kubenswrapper[4681]: I1123 08:41:12.295791 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:41:42 crc kubenswrapper[4681]: I1123 08:41:42.295994 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:41:42 crc kubenswrapper[4681]: I1123 08:41:42.296513 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:41:42 crc kubenswrapper[4681]: I1123 08:41:42.296554 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" Nov 23 08:41:42 crc kubenswrapper[4681]: I1123 08:41:42.296958 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d7798051d6d66026d4ff58045065aef57e15174285da5402190d39dcbab9b6d1"} 
pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 08:41:42 crc kubenswrapper[4681]: I1123 08:41:42.297003 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" containerID="cri-o://d7798051d6d66026d4ff58045065aef57e15174285da5402190d39dcbab9b6d1" gracePeriod=600 Nov 23 08:41:42 crc kubenswrapper[4681]: E1123 08:41:42.421055 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:41:42 crc kubenswrapper[4681]: I1123 08:41:42.981390 4681 generic.go:334] "Generic (PLEG): container finished" podID="539dc58c-e752-43c8-bdef-af87528b76f3" containerID="d7798051d6d66026d4ff58045065aef57e15174285da5402190d39dcbab9b6d1" exitCode=0 Nov 23 08:41:42 crc kubenswrapper[4681]: I1123 08:41:42.981431 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerDied","Data":"d7798051d6d66026d4ff58045065aef57e15174285da5402190d39dcbab9b6d1"} Nov 23 08:41:42 crc kubenswrapper[4681]: I1123 08:41:42.981487 4681 scope.go:117] "RemoveContainer" containerID="3d5c43d1b685f76a1e40e499cd61d16928c594dd48a6e09e738eedc2d905cbf2" Nov 23 08:41:42 crc kubenswrapper[4681]: I1123 08:41:42.982156 4681 scope.go:117] "RemoveContainer" containerID="d7798051d6d66026d4ff58045065aef57e15174285da5402190d39dcbab9b6d1" Nov 23 08:41:42 crc kubenswrapper[4681]: E1123 08:41:42.982601 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:41:56 crc kubenswrapper[4681]: I1123 08:41:56.252599 4681 scope.go:117] "RemoveContainer" containerID="d7798051d6d66026d4ff58045065aef57e15174285da5402190d39dcbab9b6d1" Nov 23 08:41:56 crc kubenswrapper[4681]: E1123 08:41:56.253171 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:42:09 crc kubenswrapper[4681]: I1123 08:42:09.252272 4681 scope.go:117] "RemoveContainer" containerID="d7798051d6d66026d4ff58045065aef57e15174285da5402190d39dcbab9b6d1" Nov 23 08:42:09 crc kubenswrapper[4681]: E1123 08:42:09.252965 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:42:21 crc kubenswrapper[4681]: I1123 08:42:21.252077 4681 scope.go:117] "RemoveContainer" containerID="d7798051d6d66026d4ff58045065aef57e15174285da5402190d39dcbab9b6d1" Nov 23 08:42:21 crc kubenswrapper[4681]: E1123 08:42:21.252813 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:42:32 crc kubenswrapper[4681]: I1123 08:42:32.251949 4681 scope.go:117] "RemoveContainer" containerID="d7798051d6d66026d4ff58045065aef57e15174285da5402190d39dcbab9b6d1" Nov 23 08:42:32 crc kubenswrapper[4681]: E1123 08:42:32.252872 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:42:45 crc kubenswrapper[4681]: I1123 08:42:45.251668 4681 scope.go:117] "RemoveContainer" containerID="d7798051d6d66026d4ff58045065aef57e15174285da5402190d39dcbab9b6d1" Nov 23 08:42:45 crc kubenswrapper[4681]: E1123 08:42:45.252577 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:43:00 crc kubenswrapper[4681]: I1123 08:43:00.251620 4681 scope.go:117] "RemoveContainer" containerID="d7798051d6d66026d4ff58045065aef57e15174285da5402190d39dcbab9b6d1" Nov 23 08:43:00 crc kubenswrapper[4681]: E1123 08:43:00.252582 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:43:15 crc kubenswrapper[4681]: I1123 08:43:15.251514 4681 scope.go:117] "RemoveContainer" containerID="d7798051d6d66026d4ff58045065aef57e15174285da5402190d39dcbab9b6d1" Nov 23 08:43:15 crc kubenswrapper[4681]: E1123 08:43:15.252026 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:43:16 crc kubenswrapper[4681]: I1123 08:43:16.269340 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d9psk"] Nov 23 08:43:16 crc kubenswrapper[4681]: E1123 08:43:16.270013 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a2cd6a8-e146-4a72-a522-debbf8b61731" containerName="neutron-httpd" Nov 23 08:43:16 crc kubenswrapper[4681]: I1123 08:43:16.270026 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a2cd6a8-e146-4a72-a522-debbf8b61731" containerName="neutron-httpd" Nov 23 08:43:16 crc kubenswrapper[4681]: E1123 08:43:16.270045 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a2cd6a8-e146-4a72-a522-debbf8b61731" containerName="neutron-api" Nov 23 08:43:16 crc kubenswrapper[4681]: I1123 08:43:16.270050 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a2cd6a8-e146-4a72-a522-debbf8b61731" containerName="neutron-api" Nov 23 08:43:16 crc kubenswrapper[4681]: I1123 08:43:16.270936 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a2cd6a8-e146-4a72-a522-debbf8b61731" containerName="neutron-api" Nov 23 08:43:16 crc kubenswrapper[4681]: I1123 08:43:16.270964 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a2cd6a8-e146-4a72-a522-debbf8b61731" containerName="neutron-httpd" Nov 23 08:43:16 crc kubenswrapper[4681]: I1123 08:43:16.272283 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d9psk" Nov 23 08:43:16 crc kubenswrapper[4681]: I1123 08:43:16.283208 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d9psk"] Nov 23 08:43:16 crc kubenswrapper[4681]: I1123 08:43:16.349421 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f026cef5-ab88-4ccf-81b1-ba75db235a05-utilities\") pod \"community-operators-d9psk\" (UID: \"f026cef5-ab88-4ccf-81b1-ba75db235a05\") " pod="openshift-marketplace/community-operators-d9psk" Nov 23 08:43:16 crc kubenswrapper[4681]: I1123 08:43:16.349483 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f026cef5-ab88-4ccf-81b1-ba75db235a05-catalog-content\") pod \"community-operators-d9psk\" (UID: \"f026cef5-ab88-4ccf-81b1-ba75db235a05\") " pod="openshift-marketplace/community-operators-d9psk" Nov 23 08:43:16 crc kubenswrapper[4681]: I1123 08:43:16.349586 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg4n6\" (UniqueName: \"kubernetes.io/projected/f026cef5-ab88-4ccf-81b1-ba75db235a05-kube-api-access-bg4n6\") pod \"community-operators-d9psk\" (UID: \"f026cef5-ab88-4ccf-81b1-ba75db235a05\") " pod="openshift-marketplace/community-operators-d9psk" Nov 23 08:43:16 crc kubenswrapper[4681]: I1123 08:43:16.451656 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f026cef5-ab88-4ccf-81b1-ba75db235a05-utilities\") pod \"community-operators-d9psk\" (UID: \"f026cef5-ab88-4ccf-81b1-ba75db235a05\") " pod="openshift-marketplace/community-operators-d9psk" Nov 23 08:43:16 crc kubenswrapper[4681]: I1123 08:43:16.451715 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f026cef5-ab88-4ccf-81b1-ba75db235a05-catalog-content\") pod \"community-operators-d9psk\" (UID: \"f026cef5-ab88-4ccf-81b1-ba75db235a05\") " pod="openshift-marketplace/community-operators-d9psk" Nov 23 08:43:16 crc kubenswrapper[4681]: I1123 08:43:16.451778 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg4n6\" (UniqueName: \"kubernetes.io/projected/f026cef5-ab88-4ccf-81b1-ba75db235a05-kube-api-access-bg4n6\") pod \"community-operators-d9psk\" (UID: \"f026cef5-ab88-4ccf-81b1-ba75db235a05\") " pod="openshift-marketplace/community-operators-d9psk" Nov 23 08:43:16 crc kubenswrapper[4681]: I1123 08:43:16.452371 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f026cef5-ab88-4ccf-81b1-ba75db235a05-utilities\") pod \"community-operators-d9psk\" (UID: \"f026cef5-ab88-4ccf-81b1-ba75db235a05\") " pod="openshift-marketplace/community-operators-d9psk" Nov 23 08:43:16 crc kubenswrapper[4681]: I1123 08:43:16.452421 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f026cef5-ab88-4ccf-81b1-ba75db235a05-catalog-content\") pod \"community-operators-d9psk\" (UID: \"f026cef5-ab88-4ccf-81b1-ba75db235a05\") " pod="openshift-marketplace/community-operators-d9psk" Nov 23 08:43:16 crc kubenswrapper[4681]: I1123 08:43:16.473009 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg4n6\" (UniqueName: \"kubernetes.io/projected/f026cef5-ab88-4ccf-81b1-ba75db235a05-kube-api-access-bg4n6\") pod \"community-operators-d9psk\" (UID: \"f026cef5-ab88-4ccf-81b1-ba75db235a05\") " pod="openshift-marketplace/community-operators-d9psk" Nov 23 08:43:16 crc kubenswrapper[4681]: I1123 08:43:16.587967 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d9psk" Nov 23 08:43:17 crc kubenswrapper[4681]: I1123 08:43:17.085194 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d9psk"] Nov 23 08:43:17 crc kubenswrapper[4681]: I1123 08:43:17.774251 4681 generic.go:334] "Generic (PLEG): container finished" podID="f026cef5-ab88-4ccf-81b1-ba75db235a05" containerID="d291842a617fa2f6ae25f61f42b46d9ec139ecc608ddacb7ea031e8f754eedee" exitCode=0 Nov 23 08:43:17 crc kubenswrapper[4681]: I1123 08:43:17.774297 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d9psk" event={"ID":"f026cef5-ab88-4ccf-81b1-ba75db235a05","Type":"ContainerDied","Data":"d291842a617fa2f6ae25f61f42b46d9ec139ecc608ddacb7ea031e8f754eedee"} Nov 23 08:43:17 crc kubenswrapper[4681]: I1123 08:43:17.774322 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d9psk" event={"ID":"f026cef5-ab88-4ccf-81b1-ba75db235a05","Type":"ContainerStarted","Data":"bfbc6abad754c1ae48b32a23dd9d0651d52777c76b3570bc8a1fb918f75a0895"} Nov 23 08:43:17 crc kubenswrapper[4681]: I1123 08:43:17.777495 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 08:43:18 crc kubenswrapper[4681]: I1123 08:43:18.784072 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d9psk" event={"ID":"f026cef5-ab88-4ccf-81b1-ba75db235a05","Type":"ContainerStarted","Data":"91d79f5a633dedf71de1581b4dd16dd397776a7a54c20db90868113ba4985014"} Nov 23 08:43:19 crc kubenswrapper[4681]: E1123 08:43:19.542530 4681 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf026cef5_ab88_4ccf_81b1_ba75db235a05.slice/crio-conmon-91d79f5a633dedf71de1581b4dd16dd397776a7a54c20db90868113ba4985014.scope\": RecentStats: unable to find data in memory cache]" Nov 23 08:43:19 crc kubenswrapper[4681]: I1123 08:43:19.792807 4681 generic.go:334] "Generic (PLEG): container finished" podID="f026cef5-ab88-4ccf-81b1-ba75db235a05" containerID="91d79f5a633dedf71de1581b4dd16dd397776a7a54c20db90868113ba4985014" exitCode=0 Nov 23 08:43:19 crc kubenswrapper[4681]: I1123 08:43:19.792849 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d9psk" event={"ID":"f026cef5-ab88-4ccf-81b1-ba75db235a05","Type":"ContainerDied","Data":"91d79f5a633dedf71de1581b4dd16dd397776a7a54c20db90868113ba4985014"} Nov 23 08:43:20 crc kubenswrapper[4681]: I1123 08:43:20.802780 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d9psk" event={"ID":"f026cef5-ab88-4ccf-81b1-ba75db235a05","Type":"ContainerStarted","Data":"d60c712beb137c0b814c0775379e0e76783a69121a44d2d83f681dbf96b6e165"} Nov 23 08:43:20 crc kubenswrapper[4681]: I1123 08:43:20.822218 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d9psk" podStartSLOduration=2.3201125 podStartE2EDuration="4.82220201s" podCreationTimestamp="2025-11-23 08:43:16 +0000 UTC" firstStartedPulling="2025-11-23 08:43:17.776633326 +0000 UTC m=+7134.846142564" lastFinishedPulling="2025-11-23 08:43:20.278722837 +0000 UTC m=+7137.348232074" observedRunningTime="2025-11-23 08:43:20.817380547 +0000 UTC m=+7137.886889784" watchObservedRunningTime="2025-11-23 
08:43:20.82220201 +0000 UTC m=+7137.891711248" Nov 23 08:43:26 crc kubenswrapper[4681]: I1123 08:43:26.588062 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d9psk" Nov 23 08:43:26 crc kubenswrapper[4681]: I1123 08:43:26.588698 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d9psk" Nov 23 08:43:26 crc kubenswrapper[4681]: I1123 08:43:26.625319 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d9psk" Nov 23 08:43:26 crc kubenswrapper[4681]: I1123 08:43:26.886674 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d9psk" Nov 23 08:43:26 crc kubenswrapper[4681]: I1123 08:43:26.928608 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d9psk"] Nov 23 08:43:28 crc kubenswrapper[4681]: I1123 08:43:28.865605 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d9psk" podUID="f026cef5-ab88-4ccf-81b1-ba75db235a05" containerName="registry-server" containerID="cri-o://d60c712beb137c0b814c0775379e0e76783a69121a44d2d83f681dbf96b6e165" gracePeriod=2 Nov 23 08:43:29 crc kubenswrapper[4681]: I1123 08:43:29.400190 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d9psk" Nov 23 08:43:29 crc kubenswrapper[4681]: I1123 08:43:29.494120 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f026cef5-ab88-4ccf-81b1-ba75db235a05-utilities\") pod \"f026cef5-ab88-4ccf-81b1-ba75db235a05\" (UID: \"f026cef5-ab88-4ccf-81b1-ba75db235a05\") " Nov 23 08:43:29 crc kubenswrapper[4681]: I1123 08:43:29.494154 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f026cef5-ab88-4ccf-81b1-ba75db235a05-catalog-content\") pod \"f026cef5-ab88-4ccf-81b1-ba75db235a05\" (UID: \"f026cef5-ab88-4ccf-81b1-ba75db235a05\") " Nov 23 08:43:29 crc kubenswrapper[4681]: I1123 08:43:29.494296 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bg4n6\" (UniqueName: \"kubernetes.io/projected/f026cef5-ab88-4ccf-81b1-ba75db235a05-kube-api-access-bg4n6\") pod \"f026cef5-ab88-4ccf-81b1-ba75db235a05\" (UID: \"f026cef5-ab88-4ccf-81b1-ba75db235a05\") " Nov 23 08:43:29 crc kubenswrapper[4681]: I1123 08:43:29.494682 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f026cef5-ab88-4ccf-81b1-ba75db235a05-utilities" (OuterVolumeSpecName: "utilities") pod "f026cef5-ab88-4ccf-81b1-ba75db235a05" (UID: "f026cef5-ab88-4ccf-81b1-ba75db235a05"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:43:29 crc kubenswrapper[4681]: I1123 08:43:29.495062 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f026cef5-ab88-4ccf-81b1-ba75db235a05-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:43:29 crc kubenswrapper[4681]: I1123 08:43:29.500867 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f026cef5-ab88-4ccf-81b1-ba75db235a05-kube-api-access-bg4n6" (OuterVolumeSpecName: "kube-api-access-bg4n6") pod "f026cef5-ab88-4ccf-81b1-ba75db235a05" (UID: "f026cef5-ab88-4ccf-81b1-ba75db235a05"). InnerVolumeSpecName "kube-api-access-bg4n6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:43:29 crc kubenswrapper[4681]: I1123 08:43:29.534327 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f026cef5-ab88-4ccf-81b1-ba75db235a05-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f026cef5-ab88-4ccf-81b1-ba75db235a05" (UID: "f026cef5-ab88-4ccf-81b1-ba75db235a05"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:43:29 crc kubenswrapper[4681]: I1123 08:43:29.596523 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f026cef5-ab88-4ccf-81b1-ba75db235a05-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:43:29 crc kubenswrapper[4681]: I1123 08:43:29.596777 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bg4n6\" (UniqueName: \"kubernetes.io/projected/f026cef5-ab88-4ccf-81b1-ba75db235a05-kube-api-access-bg4n6\") on node \"crc\" DevicePath \"\"" Nov 23 08:43:29 crc kubenswrapper[4681]: I1123 08:43:29.875372 4681 generic.go:334] "Generic (PLEG): container finished" podID="f026cef5-ab88-4ccf-81b1-ba75db235a05" containerID="d60c712beb137c0b814c0775379e0e76783a69121a44d2d83f681dbf96b6e165" exitCode=0 Nov 23 08:43:29 crc kubenswrapper[4681]: I1123 08:43:29.875421 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d9psk" event={"ID":"f026cef5-ab88-4ccf-81b1-ba75db235a05","Type":"ContainerDied","Data":"d60c712beb137c0b814c0775379e0e76783a69121a44d2d83f681dbf96b6e165"} Nov 23 08:43:29 crc kubenswrapper[4681]: I1123 08:43:29.875431 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d9psk" Nov 23 08:43:29 crc kubenswrapper[4681]: I1123 08:43:29.875449 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d9psk" event={"ID":"f026cef5-ab88-4ccf-81b1-ba75db235a05","Type":"ContainerDied","Data":"bfbc6abad754c1ae48b32a23dd9d0651d52777c76b3570bc8a1fb918f75a0895"} Nov 23 08:43:29 crc kubenswrapper[4681]: I1123 08:43:29.875491 4681 scope.go:117] "RemoveContainer" containerID="d60c712beb137c0b814c0775379e0e76783a69121a44d2d83f681dbf96b6e165" Nov 23 08:43:29 crc kubenswrapper[4681]: I1123 08:43:29.908483 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d9psk"] Nov 23 08:43:29 crc kubenswrapper[4681]: I1123 08:43:29.916944 4681 scope.go:117] "RemoveContainer" containerID="91d79f5a633dedf71de1581b4dd16dd397776a7a54c20db90868113ba4985014" Nov 23 08:43:29 crc kubenswrapper[4681]: I1123 08:43:29.917295 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d9psk"] Nov 23 08:43:29 crc kubenswrapper[4681]: I1123 08:43:29.944690 4681 scope.go:117] "RemoveContainer" containerID="d291842a617fa2f6ae25f61f42b46d9ec139ecc608ddacb7ea031e8f754eedee" Nov 23 08:43:29 crc kubenswrapper[4681]: I1123 08:43:29.980672 4681 scope.go:117] "RemoveContainer" containerID="d60c712beb137c0b814c0775379e0e76783a69121a44d2d83f681dbf96b6e165" Nov 23 08:43:29 crc kubenswrapper[4681]: E1123 08:43:29.981943 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d60c712beb137c0b814c0775379e0e76783a69121a44d2d83f681dbf96b6e165\": container with ID starting with d60c712beb137c0b814c0775379e0e76783a69121a44d2d83f681dbf96b6e165 not found: ID does not exist" containerID="d60c712beb137c0b814c0775379e0e76783a69121a44d2d83f681dbf96b6e165" Nov 23 08:43:29 crc kubenswrapper[4681]: I1123 08:43:29.981990 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d60c712beb137c0b814c0775379e0e76783a69121a44d2d83f681dbf96b6e165"} err="failed to get container status \"d60c712beb137c0b814c0775379e0e76783a69121a44d2d83f681dbf96b6e165\": rpc error: code = NotFound desc = could not find container \"d60c712beb137c0b814c0775379e0e76783a69121a44d2d83f681dbf96b6e165\": container with ID starting with d60c712beb137c0b814c0775379e0e76783a69121a44d2d83f681dbf96b6e165 not found: ID does not exist" Nov 23 08:43:29 crc kubenswrapper[4681]: I1123 08:43:29.982019 4681 scope.go:117] "RemoveContainer" containerID="91d79f5a633dedf71de1581b4dd16dd397776a7a54c20db90868113ba4985014" Nov 23 08:43:29 crc kubenswrapper[4681]: E1123 08:43:29.982424 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91d79f5a633dedf71de1581b4dd16dd397776a7a54c20db90868113ba4985014\": container with ID starting with 91d79f5a633dedf71de1581b4dd16dd397776a7a54c20db90868113ba4985014 not found: ID does not exist" containerID="91d79f5a633dedf71de1581b4dd16dd397776a7a54c20db90868113ba4985014" Nov 23 08:43:29 crc kubenswrapper[4681]: I1123 08:43:29.982485 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91d79f5a633dedf71de1581b4dd16dd397776a7a54c20db90868113ba4985014"} err="failed to get container status \"91d79f5a633dedf71de1581b4dd16dd397776a7a54c20db90868113ba4985014\": rpc error: code = NotFound desc = could not find 
container \"91d79f5a633dedf71de1581b4dd16dd397776a7a54c20db90868113ba4985014\": container with ID starting with 91d79f5a633dedf71de1581b4dd16dd397776a7a54c20db90868113ba4985014 not found: ID does not exist" Nov 23 08:43:29 crc kubenswrapper[4681]: I1123 08:43:29.982519 4681 scope.go:117] "RemoveContainer" containerID="d291842a617fa2f6ae25f61f42b46d9ec139ecc608ddacb7ea031e8f754eedee" Nov 23 08:43:29 crc kubenswrapper[4681]: E1123 08:43:29.982997 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d291842a617fa2f6ae25f61f42b46d9ec139ecc608ddacb7ea031e8f754eedee\": container with ID starting with d291842a617fa2f6ae25f61f42b46d9ec139ecc608ddacb7ea031e8f754eedee not found: ID does not exist" containerID="d291842a617fa2f6ae25f61f42b46d9ec139ecc608ddacb7ea031e8f754eedee" Nov 23 08:43:29 crc kubenswrapper[4681]: I1123 08:43:29.983046 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d291842a617fa2f6ae25f61f42b46d9ec139ecc608ddacb7ea031e8f754eedee"} err="failed to get container status \"d291842a617fa2f6ae25f61f42b46d9ec139ecc608ddacb7ea031e8f754eedee\": rpc error: code = NotFound desc = could not find container \"d291842a617fa2f6ae25f61f42b46d9ec139ecc608ddacb7ea031e8f754eedee\": container with ID starting with d291842a617fa2f6ae25f61f42b46d9ec139ecc608ddacb7ea031e8f754eedee not found: ID does not exist" Nov 23 08:43:30 crc kubenswrapper[4681]: I1123 08:43:30.252430 4681 scope.go:117] "RemoveContainer" containerID="d7798051d6d66026d4ff58045065aef57e15174285da5402190d39dcbab9b6d1" Nov 23 08:43:30 crc kubenswrapper[4681]: E1123 08:43:30.253035 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:43:31 crc kubenswrapper[4681]: I1123 08:43:31.273040 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f026cef5-ab88-4ccf-81b1-ba75db235a05" path="/var/lib/kubelet/pods/f026cef5-ab88-4ccf-81b1-ba75db235a05/volumes" Nov 23 08:43:43 crc kubenswrapper[4681]: I1123 08:43:43.256890 4681 scope.go:117] "RemoveContainer" containerID="d7798051d6d66026d4ff58045065aef57e15174285da5402190d39dcbab9b6d1" Nov 23 08:43:43 crc kubenswrapper[4681]: E1123 08:43:43.257740 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:43:56 crc kubenswrapper[4681]: I1123 08:43:56.252167 4681 scope.go:117] "RemoveContainer" containerID="d7798051d6d66026d4ff58045065aef57e15174285da5402190d39dcbab9b6d1" Nov 23 08:43:56 crc kubenswrapper[4681]: E1123 08:43:56.253780 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:44:09 crc kubenswrapper[4681]: I1123 08:44:09.251606 4681 scope.go:117] "RemoveContainer" containerID="d7798051d6d66026d4ff58045065aef57e15174285da5402190d39dcbab9b6d1" Nov 23 08:44:09 crc kubenswrapper[4681]: E1123 08:44:09.252205 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:44:22 crc kubenswrapper[4681]: I1123 08:44:22.251519 4681 scope.go:117] "RemoveContainer" containerID="d7798051d6d66026d4ff58045065aef57e15174285da5402190d39dcbab9b6d1" Nov 23 08:44:22 crc kubenswrapper[4681]: E1123 08:44:22.252291 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:44:36 crc kubenswrapper[4681]: I1123 08:44:36.251683 4681 scope.go:117] "RemoveContainer" containerID="d7798051d6d66026d4ff58045065aef57e15174285da5402190d39dcbab9b6d1" Nov 23 08:44:36 crc kubenswrapper[4681]: E1123 08:44:36.252374 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:44:50 crc kubenswrapper[4681]: I1123 08:44:50.252387 4681 scope.go:117] "RemoveContainer" containerID="d7798051d6d66026d4ff58045065aef57e15174285da5402190d39dcbab9b6d1" Nov 23 08:44:50 crc kubenswrapper[4681]: E1123 08:44:50.253143 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:45:00 crc kubenswrapper[4681]: I1123 08:45:00.173747 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398125-q5jc9"] Nov 23 08:45:00 crc kubenswrapper[4681]: E1123 08:45:00.174519 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f026cef5-ab88-4ccf-81b1-ba75db235a05" containerName="registry-server" Nov 23 08:45:00 crc kubenswrapper[4681]: I1123 08:45:00.174536 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f026cef5-ab88-4ccf-81b1-ba75db235a05" containerName="registry-server" Nov 23 08:45:00 crc kubenswrapper[4681]: E1123 
08:45:00.174572 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f026cef5-ab88-4ccf-81b1-ba75db235a05" containerName="extract-content" Nov 23 08:45:00 crc kubenswrapper[4681]: I1123 08:45:00.174578 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f026cef5-ab88-4ccf-81b1-ba75db235a05" containerName="extract-content" Nov 23 08:45:00 crc kubenswrapper[4681]: E1123 08:45:00.174600 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f026cef5-ab88-4ccf-81b1-ba75db235a05" containerName="extract-utilities" Nov 23 08:45:00 crc kubenswrapper[4681]: I1123 08:45:00.174606 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f026cef5-ab88-4ccf-81b1-ba75db235a05" containerName="extract-utilities" Nov 23 08:45:00 crc kubenswrapper[4681]: I1123 08:45:00.174799 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="f026cef5-ab88-4ccf-81b1-ba75db235a05" containerName="registry-server" Nov 23 08:45:00 crc kubenswrapper[4681]: I1123 08:45:00.175925 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-q5jc9" Nov 23 08:45:00 crc kubenswrapper[4681]: I1123 08:45:00.183658 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 08:45:00 crc kubenswrapper[4681]: I1123 08:45:00.184103 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 08:45:00 crc kubenswrapper[4681]: I1123 08:45:00.186095 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398125-q5jc9"] Nov 23 08:45:00 crc kubenswrapper[4681]: I1123 08:45:00.231364 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b865e18-e31f-4b12-a4d5-71dbee6bc94a-secret-volume\") pod \"collect-profiles-29398125-q5jc9\" (UID: \"9b865e18-e31f-4b12-a4d5-71dbee6bc94a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-q5jc9" Nov 23 08:45:00 crc kubenswrapper[4681]: I1123 08:45:00.231812 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvrvg\" (UniqueName: \"kubernetes.io/projected/9b865e18-e31f-4b12-a4d5-71dbee6bc94a-kube-api-access-tvrvg\") pod \"collect-profiles-29398125-q5jc9\" (UID: \"9b865e18-e31f-4b12-a4d5-71dbee6bc94a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-q5jc9" Nov 23 08:45:00 crc kubenswrapper[4681]: I1123 08:45:00.231917 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b865e18-e31f-4b12-a4d5-71dbee6bc94a-config-volume\") pod \"collect-profiles-29398125-q5jc9\" (UID: \"9b865e18-e31f-4b12-a4d5-71dbee6bc94a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-q5jc9" Nov 23 08:45:00 crc kubenswrapper[4681]: I1123 08:45:00.334837 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvrvg\" (UniqueName: \"kubernetes.io/projected/9b865e18-e31f-4b12-a4d5-71dbee6bc94a-kube-api-access-tvrvg\") pod \"collect-profiles-29398125-q5jc9\" (UID: \"9b865e18-e31f-4b12-a4d5-71dbee6bc94a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-q5jc9" Nov 23 08:45:00 crc 
kubenswrapper[4681]: I1123 08:45:00.334917 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b865e18-e31f-4b12-a4d5-71dbee6bc94a-config-volume\") pod \"collect-profiles-29398125-q5jc9\" (UID: \"9b865e18-e31f-4b12-a4d5-71dbee6bc94a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-q5jc9" Nov 23 08:45:00 crc kubenswrapper[4681]: I1123 08:45:00.334989 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b865e18-e31f-4b12-a4d5-71dbee6bc94a-secret-volume\") pod \"collect-profiles-29398125-q5jc9\" (UID: \"9b865e18-e31f-4b12-a4d5-71dbee6bc94a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-q5jc9" Nov 23 08:45:00 crc kubenswrapper[4681]: I1123 08:45:00.335782 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b865e18-e31f-4b12-a4d5-71dbee6bc94a-config-volume\") pod \"collect-profiles-29398125-q5jc9\" (UID: \"9b865e18-e31f-4b12-a4d5-71dbee6bc94a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-q5jc9" Nov 23 08:45:00 crc kubenswrapper[4681]: I1123 08:45:00.341061 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b865e18-e31f-4b12-a4d5-71dbee6bc94a-secret-volume\") pod \"collect-profiles-29398125-q5jc9\" (UID: \"9b865e18-e31f-4b12-a4d5-71dbee6bc94a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-q5jc9" Nov 23 08:45:00 crc kubenswrapper[4681]: I1123 08:45:00.351266 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvrvg\" (UniqueName: \"kubernetes.io/projected/9b865e18-e31f-4b12-a4d5-71dbee6bc94a-kube-api-access-tvrvg\") pod \"collect-profiles-29398125-q5jc9\" (UID: \"9b865e18-e31f-4b12-a4d5-71dbee6bc94a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-q5jc9" Nov 23 08:45:00 crc kubenswrapper[4681]: I1123 08:45:00.492865 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-q5jc9" Nov 23 08:45:01 crc kubenswrapper[4681]: I1123 08:45:00.907055 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398125-q5jc9"] Nov 23 08:45:01 crc kubenswrapper[4681]: I1123 08:45:01.606448 4681 generic.go:334] "Generic (PLEG): container finished" podID="9b865e18-e31f-4b12-a4d5-71dbee6bc94a" containerID="e2e1669748959fce21ac2523fe05d3ef6b2eafb7aae677bd93666a31623c04d2" exitCode=0 Nov 23 08:45:01 crc kubenswrapper[4681]: I1123 08:45:01.606574 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-q5jc9" event={"ID":"9b865e18-e31f-4b12-a4d5-71dbee6bc94a","Type":"ContainerDied","Data":"e2e1669748959fce21ac2523fe05d3ef6b2eafb7aae677bd93666a31623c04d2"} Nov 23 08:45:01 crc kubenswrapper[4681]: I1123 08:45:01.606953 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-q5jc9" event={"ID":"9b865e18-e31f-4b12-a4d5-71dbee6bc94a","Type":"ContainerStarted","Data":"5366bb5def9018205a67eb888f07f813a55030fbab90d9fbdf32f979763117f5"} Nov 23 08:45:02 crc kubenswrapper[4681]: I1123 08:45:02.925367 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-q5jc9" Nov 23 08:45:03 crc kubenswrapper[4681]: I1123 08:45:03.101735 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b865e18-e31f-4b12-a4d5-71dbee6bc94a-secret-volume\") pod \"9b865e18-e31f-4b12-a4d5-71dbee6bc94a\" (UID: \"9b865e18-e31f-4b12-a4d5-71dbee6bc94a\") " Nov 23 08:45:03 crc kubenswrapper[4681]: I1123 08:45:03.102234 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvrvg\" (UniqueName: \"kubernetes.io/projected/9b865e18-e31f-4b12-a4d5-71dbee6bc94a-kube-api-access-tvrvg\") pod \"9b865e18-e31f-4b12-a4d5-71dbee6bc94a\" (UID: \"9b865e18-e31f-4b12-a4d5-71dbee6bc94a\") " Nov 23 08:45:03 crc kubenswrapper[4681]: I1123 08:45:03.102388 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b865e18-e31f-4b12-a4d5-71dbee6bc94a-config-volume\") pod \"9b865e18-e31f-4b12-a4d5-71dbee6bc94a\" (UID: \"9b865e18-e31f-4b12-a4d5-71dbee6bc94a\") " Nov 23 08:45:03 crc kubenswrapper[4681]: I1123 08:45:03.102894 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b865e18-e31f-4b12-a4d5-71dbee6bc94a-config-volume" (OuterVolumeSpecName: "config-volume") pod "9b865e18-e31f-4b12-a4d5-71dbee6bc94a" (UID: "9b865e18-e31f-4b12-a4d5-71dbee6bc94a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:45:03 crc kubenswrapper[4681]: I1123 08:45:03.103838 4681 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b865e18-e31f-4b12-a4d5-71dbee6bc94a-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 08:45:03 crc kubenswrapper[4681]: I1123 08:45:03.106983 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b865e18-e31f-4b12-a4d5-71dbee6bc94a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9b865e18-e31f-4b12-a4d5-71dbee6bc94a" (UID: "9b865e18-e31f-4b12-a4d5-71dbee6bc94a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:45:03 crc kubenswrapper[4681]: I1123 08:45:03.108618 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b865e18-e31f-4b12-a4d5-71dbee6bc94a-kube-api-access-tvrvg" (OuterVolumeSpecName: "kube-api-access-tvrvg") pod "9b865e18-e31f-4b12-a4d5-71dbee6bc94a" (UID: "9b865e18-e31f-4b12-a4d5-71dbee6bc94a"). InnerVolumeSpecName "kube-api-access-tvrvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:45:03 crc kubenswrapper[4681]: I1123 08:45:03.206940 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvrvg\" (UniqueName: \"kubernetes.io/projected/9b865e18-e31f-4b12-a4d5-71dbee6bc94a-kube-api-access-tvrvg\") on node \"crc\" DevicePath \"\"" Nov 23 08:45:03 crc kubenswrapper[4681]: I1123 08:45:03.206984 4681 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b865e18-e31f-4b12-a4d5-71dbee6bc94a-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 08:45:03 crc kubenswrapper[4681]: I1123 08:45:03.622525 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-q5jc9" event={"ID":"9b865e18-e31f-4b12-a4d5-71dbee6bc94a","Type":"ContainerDied","Data":"5366bb5def9018205a67eb888f07f813a55030fbab90d9fbdf32f979763117f5"} Nov 23 08:45:03 crc kubenswrapper[4681]: I1123 08:45:03.622786 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5366bb5def9018205a67eb888f07f813a55030fbab90d9fbdf32f979763117f5" Nov 23 08:45:03 crc kubenswrapper[4681]: I1123 08:45:03.622630 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-q5jc9" Nov 23 08:45:03 crc kubenswrapper[4681]: I1123 08:45:03.993593 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398080-z29fc"] Nov 23 08:45:04 crc kubenswrapper[4681]: I1123 08:45:04.000358 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398080-z29fc"] Nov 23 08:45:04 crc kubenswrapper[4681]: I1123 08:45:04.251376 4681 scope.go:117] "RemoveContainer" containerID="d7798051d6d66026d4ff58045065aef57e15174285da5402190d39dcbab9b6d1" Nov 23 08:45:04 crc kubenswrapper[4681]: E1123 08:45:04.251957 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:45:05 crc kubenswrapper[4681]: I1123 08:45:05.260273 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301175bb-1bb6-45ec-99b2-23bd3f390cfb" path="/var/lib/kubelet/pods/301175bb-1bb6-45ec-99b2-23bd3f390cfb/volumes" Nov 23 08:45:18 crc kubenswrapper[4681]: I1123 08:45:18.252204 4681 scope.go:117] "RemoveContainer" containerID="d7798051d6d66026d4ff58045065aef57e15174285da5402190d39dcbab9b6d1" Nov 23 08:45:18 crc kubenswrapper[4681]: E1123 08:45:18.253165 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:45:30 crc kubenswrapper[4681]: I1123 08:45:30.251875 4681 scope.go:117] "RemoveContainer" containerID="d7798051d6d66026d4ff58045065aef57e15174285da5402190d39dcbab9b6d1" Nov 23 
08:45:30 crc kubenswrapper[4681]: E1123 08:45:30.252764 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:45:43 crc kubenswrapper[4681]: I1123 08:45:43.260847 4681 scope.go:117] "RemoveContainer" containerID="d7798051d6d66026d4ff58045065aef57e15174285da5402190d39dcbab9b6d1" Nov 23 08:45:43 crc kubenswrapper[4681]: E1123 08:45:43.263818 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:45:48 crc kubenswrapper[4681]: I1123 08:45:48.777955 4681 scope.go:117] "RemoveContainer" containerID="e2d181b9b99e5eeef3a6c8e47fc55f3c833b215435b45520612b4b98ff58d9c4" Nov 23 08:45:53 crc kubenswrapper[4681]: I1123 08:45:53.529275 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ddvjm"] Nov 23 08:45:53 crc kubenswrapper[4681]: E1123 08:45:53.531042 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b865e18-e31f-4b12-a4d5-71dbee6bc94a" containerName="collect-profiles" Nov 23 08:45:53 crc kubenswrapper[4681]: I1123 08:45:53.531112 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b865e18-e31f-4b12-a4d5-71dbee6bc94a" containerName="collect-profiles" Nov 23 08:45:53 crc kubenswrapper[4681]: I1123 08:45:53.531363 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b865e18-e31f-4b12-a4d5-71dbee6bc94a" containerName="collect-profiles" Nov 23 08:45:53 crc kubenswrapper[4681]: I1123 08:45:53.532885 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddvjm" Nov 23 08:45:53 crc kubenswrapper[4681]: I1123 08:45:53.553849 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ddvjm"] Nov 23 08:45:53 crc kubenswrapper[4681]: I1123 08:45:53.569603 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf094f4c-8ae3-4144-b438-f01f94fe83af-catalog-content\") pod \"redhat-marketplace-ddvjm\" (UID: \"bf094f4c-8ae3-4144-b438-f01f94fe83af\") " pod="openshift-marketplace/redhat-marketplace-ddvjm" Nov 23 08:45:53 crc kubenswrapper[4681]: I1123 08:45:53.570062 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf094f4c-8ae3-4144-b438-f01f94fe83af-utilities\") pod \"redhat-marketplace-ddvjm\" (UID: \"bf094f4c-8ae3-4144-b438-f01f94fe83af\") " pod="openshift-marketplace/redhat-marketplace-ddvjm" Nov 23 08:45:53 crc kubenswrapper[4681]: I1123 08:45:53.570517 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mstqg\" (UniqueName: \"kubernetes.io/projected/bf094f4c-8ae3-4144-b438-f01f94fe83af-kube-api-access-mstqg\") pod \"redhat-marketplace-ddvjm\" (UID: \"bf094f4c-8ae3-4144-b438-f01f94fe83af\") " pod="openshift-marketplace/redhat-marketplace-ddvjm" Nov 23 08:45:53 crc kubenswrapper[4681]: I1123 08:45:53.672449 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf094f4c-8ae3-4144-b438-f01f94fe83af-catalog-content\") pod \"redhat-marketplace-ddvjm\" (UID: \"bf094f4c-8ae3-4144-b438-f01f94fe83af\") " pod="openshift-marketplace/redhat-marketplace-ddvjm" Nov 23 08:45:53 crc kubenswrapper[4681]: I1123 08:45:53.672620 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf094f4c-8ae3-4144-b438-f01f94fe83af-utilities\") pod \"redhat-marketplace-ddvjm\" (UID: \"bf094f4c-8ae3-4144-b438-f01f94fe83af\") " pod="openshift-marketplace/redhat-marketplace-ddvjm" Nov 23 08:45:53 crc kubenswrapper[4681]: I1123 08:45:53.672685 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mstqg\" (UniqueName: \"kubernetes.io/projected/bf094f4c-8ae3-4144-b438-f01f94fe83af-kube-api-access-mstqg\") pod \"redhat-marketplace-ddvjm\" (UID: \"bf094f4c-8ae3-4144-b438-f01f94fe83af\") " pod="openshift-marketplace/redhat-marketplace-ddvjm" Nov 23 08:45:53 crc kubenswrapper[4681]: I1123 08:45:53.673077 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf094f4c-8ae3-4144-b438-f01f94fe83af-catalog-content\") pod \"redhat-marketplace-ddvjm\" (UID: \"bf094f4c-8ae3-4144-b438-f01f94fe83af\") " pod="openshift-marketplace/redhat-marketplace-ddvjm" Nov 23 08:45:53 crc kubenswrapper[4681]: I1123 08:45:53.673289 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf094f4c-8ae3-4144-b438-f01f94fe83af-utilities\") pod \"redhat-marketplace-ddvjm\" (UID: \"bf094f4c-8ae3-4144-b438-f01f94fe83af\") " pod="openshift-marketplace/redhat-marketplace-ddvjm" Nov 23 08:45:53 crc kubenswrapper[4681]: I1123 08:45:53.696168 4681 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mstqg\" (UniqueName: \"kubernetes.io/projected/bf094f4c-8ae3-4144-b438-f01f94fe83af-kube-api-access-mstqg\") pod \"redhat-marketplace-ddvjm\" (UID: \"bf094f4c-8ae3-4144-b438-f01f94fe83af\") " pod="openshift-marketplace/redhat-marketplace-ddvjm" Nov 23 08:45:53 crc kubenswrapper[4681]: I1123 08:45:53.851657 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddvjm" Nov 23 08:45:54 crc kubenswrapper[4681]: I1123 08:45:54.265082 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ddvjm"] Nov 23 08:45:54 crc kubenswrapper[4681]: W1123 08:45:54.272555 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf094f4c_8ae3_4144_b438_f01f94fe83af.slice/crio-b763816493bcbb70492329cf3408f880183257ba1f2cd3eaae52ab591ca67472 WatchSource:0}: Error finding container b763816493bcbb70492329cf3408f880183257ba1f2cd3eaae52ab591ca67472: Status 404 returned error can't find the container with id b763816493bcbb70492329cf3408f880183257ba1f2cd3eaae52ab591ca67472 Nov 23 08:45:55 crc kubenswrapper[4681]: I1123 08:45:55.072429 4681 generic.go:334] "Generic (PLEG): container finished" podID="bf094f4c-8ae3-4144-b438-f01f94fe83af" containerID="bcc9254cdcfdcfdfcf9dd4f1fc714891eb9de7ca6744e1f6aac0e226f8772440" exitCode=0 Nov 23 08:45:55 crc kubenswrapper[4681]: I1123 08:45:55.072489 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddvjm" event={"ID":"bf094f4c-8ae3-4144-b438-f01f94fe83af","Type":"ContainerDied","Data":"bcc9254cdcfdcfdfcf9dd4f1fc714891eb9de7ca6744e1f6aac0e226f8772440"} Nov 23 08:45:55 crc kubenswrapper[4681]: I1123 08:45:55.072523 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddvjm" event={"ID":"bf094f4c-8ae3-4144-b438-f01f94fe83af","Type":"ContainerStarted","Data":"b763816493bcbb70492329cf3408f880183257ba1f2cd3eaae52ab591ca67472"} Nov 23 08:45:55 crc kubenswrapper[4681]: I1123 08:45:55.253189 4681 scope.go:117] "RemoveContainer" containerID="d7798051d6d66026d4ff58045065aef57e15174285da5402190d39dcbab9b6d1" Nov 23 08:45:55 crc kubenswrapper[4681]: E1123 08:45:55.253806 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:45:56 crc kubenswrapper[4681]: I1123 08:45:56.083054 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddvjm" event={"ID":"bf094f4c-8ae3-4144-b438-f01f94fe83af","Type":"ContainerStarted","Data":"66f567e90a17a1188c7bcc73c8760272862d0ba2511c08aa9ee33ca7e620ee51"} Nov 23 08:45:57 crc kubenswrapper[4681]: I1123 08:45:57.091543 4681 generic.go:334] "Generic (PLEG): container finished" podID="bf094f4c-8ae3-4144-b438-f01f94fe83af" containerID="66f567e90a17a1188c7bcc73c8760272862d0ba2511c08aa9ee33ca7e620ee51" exitCode=0 Nov 23 08:45:57 crc kubenswrapper[4681]: I1123 08:45:57.091586 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddvjm" 
event={"ID":"bf094f4c-8ae3-4144-b438-f01f94fe83af","Type":"ContainerDied","Data":"66f567e90a17a1188c7bcc73c8760272862d0ba2511c08aa9ee33ca7e620ee51"} Nov 23 08:45:58 crc kubenswrapper[4681]: I1123 08:45:58.100557 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddvjm" event={"ID":"bf094f4c-8ae3-4144-b438-f01f94fe83af","Type":"ContainerStarted","Data":"7b357e609bf45a1f03b9ef54da89503641cfb9e8bd6f37c85c808110307d62b3"} Nov 23 08:46:03 crc kubenswrapper[4681]: I1123 08:46:03.851927 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ddvjm" Nov 23 08:46:03 crc kubenswrapper[4681]: I1123 08:46:03.852649 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ddvjm" Nov 23 08:46:03 crc kubenswrapper[4681]: I1123 08:46:03.886921 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ddvjm" Nov 23 08:46:03 crc kubenswrapper[4681]: I1123 08:46:03.906933 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ddvjm" podStartSLOduration=8.417698145 podStartE2EDuration="10.906916706s" podCreationTimestamp="2025-11-23 08:45:53 +0000 UTC" firstStartedPulling="2025-11-23 08:45:55.074183941 +0000 UTC m=+7292.143693168" lastFinishedPulling="2025-11-23 08:45:57.563402492 +0000 UTC m=+7294.632911729" observedRunningTime="2025-11-23 08:45:58.117864908 +0000 UTC m=+7295.187374144" watchObservedRunningTime="2025-11-23 08:46:03.906916706 +0000 UTC m=+7300.976425943" Nov 23 08:46:04 crc kubenswrapper[4681]: I1123 08:46:04.175063 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ddvjm" Nov 23 08:46:04 crc kubenswrapper[4681]: I1123 08:46:04.224956 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ddvjm"] Nov 23 08:46:06 crc kubenswrapper[4681]: I1123 08:46:06.158863 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ddvjm" podUID="bf094f4c-8ae3-4144-b438-f01f94fe83af" containerName="registry-server" containerID="cri-o://7b357e609bf45a1f03b9ef54da89503641cfb9e8bd6f37c85c808110307d62b3" gracePeriod=2 Nov 23 08:46:06 crc kubenswrapper[4681]: I1123 08:46:06.647719 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddvjm" Nov 23 08:46:06 crc kubenswrapper[4681]: I1123 08:46:06.836967 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf094f4c-8ae3-4144-b438-f01f94fe83af-catalog-content\") pod \"bf094f4c-8ae3-4144-b438-f01f94fe83af\" (UID: \"bf094f4c-8ae3-4144-b438-f01f94fe83af\") " Nov 23 08:46:06 crc kubenswrapper[4681]: I1123 08:46:06.837007 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf094f4c-8ae3-4144-b438-f01f94fe83af-utilities\") pod \"bf094f4c-8ae3-4144-b438-f01f94fe83af\" (UID: \"bf094f4c-8ae3-4144-b438-f01f94fe83af\") " Nov 23 08:46:06 crc kubenswrapper[4681]: I1123 08:46:06.837058 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mstqg\" (UniqueName: \"kubernetes.io/projected/bf094f4c-8ae3-4144-b438-f01f94fe83af-kube-api-access-mstqg\") pod \"bf094f4c-8ae3-4144-b438-f01f94fe83af\" (UID: \"bf094f4c-8ae3-4144-b438-f01f94fe83af\") " Nov 23 08:46:06 crc kubenswrapper[4681]: I1123 08:46:06.837643 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf094f4c-8ae3-4144-b438-f01f94fe83af-utilities" (OuterVolumeSpecName: "utilities") pod "bf094f4c-8ae3-4144-b438-f01f94fe83af" (UID: "bf094f4c-8ae3-4144-b438-f01f94fe83af"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:46:06 crc kubenswrapper[4681]: I1123 08:46:06.843054 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf094f4c-8ae3-4144-b438-f01f94fe83af-kube-api-access-mstqg" (OuterVolumeSpecName: "kube-api-access-mstqg") pod "bf094f4c-8ae3-4144-b438-f01f94fe83af" (UID: "bf094f4c-8ae3-4144-b438-f01f94fe83af"). InnerVolumeSpecName "kube-api-access-mstqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:46:06 crc kubenswrapper[4681]: I1123 08:46:06.850541 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf094f4c-8ae3-4144-b438-f01f94fe83af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf094f4c-8ae3-4144-b438-f01f94fe83af" (UID: "bf094f4c-8ae3-4144-b438-f01f94fe83af"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:46:06 crc kubenswrapper[4681]: I1123 08:46:06.939063 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf094f4c-8ae3-4144-b438-f01f94fe83af-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:46:06 crc kubenswrapper[4681]: I1123 08:46:06.939086 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf094f4c-8ae3-4144-b438-f01f94fe83af-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:46:06 crc kubenswrapper[4681]: I1123 08:46:06.939095 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mstqg\" (UniqueName: \"kubernetes.io/projected/bf094f4c-8ae3-4144-b438-f01f94fe83af-kube-api-access-mstqg\") on node \"crc\" DevicePath \"\"" Nov 23 08:46:07 crc kubenswrapper[4681]: I1123 08:46:07.183851 4681 generic.go:334] "Generic (PLEG): container finished" podID="bf094f4c-8ae3-4144-b438-f01f94fe83af" containerID="7b357e609bf45a1f03b9ef54da89503641cfb9e8bd6f37c85c808110307d62b3" exitCode=0 Nov 23 08:46:07 crc kubenswrapper[4681]: I1123 08:46:07.183907 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddvjm" event={"ID":"bf094f4c-8ae3-4144-b438-f01f94fe83af","Type":"ContainerDied","Data":"7b357e609bf45a1f03b9ef54da89503641cfb9e8bd6f37c85c808110307d62b3"} Nov 23 08:46:07 crc kubenswrapper[4681]: I1123 08:46:07.183954 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddvjm" event={"ID":"bf094f4c-8ae3-4144-b438-f01f94fe83af","Type":"ContainerDied","Data":"b763816493bcbb70492329cf3408f880183257ba1f2cd3eaae52ab591ca67472"} Nov 23 08:46:07 crc kubenswrapper[4681]: I1123 08:46:07.183973 4681 scope.go:117] "RemoveContainer" containerID="7b357e609bf45a1f03b9ef54da89503641cfb9e8bd6f37c85c808110307d62b3" Nov 23 08:46:07 crc kubenswrapper[4681]: I1123 08:46:07.184212 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddvjm" Nov 23 08:46:07 crc kubenswrapper[4681]: I1123 08:46:07.225719 4681 scope.go:117] "RemoveContainer" containerID="66f567e90a17a1188c7bcc73c8760272862d0ba2511c08aa9ee33ca7e620ee51" Nov 23 08:46:07 crc kubenswrapper[4681]: I1123 08:46:07.234669 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ddvjm"] Nov 23 08:46:07 crc kubenswrapper[4681]: I1123 08:46:07.243229 4681 scope.go:117] "RemoveContainer" containerID="bcc9254cdcfdcfdfcf9dd4f1fc714891eb9de7ca6744e1f6aac0e226f8772440" Nov 23 08:46:07 crc kubenswrapper[4681]: I1123 08:46:07.248103 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ddvjm"] Nov 23 08:46:07 crc kubenswrapper[4681]: I1123 08:46:07.262266 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf094f4c-8ae3-4144-b438-f01f94fe83af" path="/var/lib/kubelet/pods/bf094f4c-8ae3-4144-b438-f01f94fe83af/volumes" Nov 23 08:46:07 crc kubenswrapper[4681]: I1123 08:46:07.292729 4681 scope.go:117] "RemoveContainer" containerID="7b357e609bf45a1f03b9ef54da89503641cfb9e8bd6f37c85c808110307d62b3" Nov 23 08:46:07 crc kubenswrapper[4681]: E1123 08:46:07.293115 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b357e609bf45a1f03b9ef54da89503641cfb9e8bd6f37c85c808110307d62b3\": container with ID starting with 7b357e609bf45a1f03b9ef54da89503641cfb9e8bd6f37c85c808110307d62b3 not found: ID does not exist" containerID="7b357e609bf45a1f03b9ef54da89503641cfb9e8bd6f37c85c808110307d62b3" Nov 23 08:46:07 crc kubenswrapper[4681]: I1123 08:46:07.293141 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b357e609bf45a1f03b9ef54da89503641cfb9e8bd6f37c85c808110307d62b3"} err="failed to get container status \"7b357e609bf45a1f03b9ef54da89503641cfb9e8bd6f37c85c808110307d62b3\": rpc error: code = NotFound desc = could not find container \"7b357e609bf45a1f03b9ef54da89503641cfb9e8bd6f37c85c808110307d62b3\": container with ID starting with 7b357e609bf45a1f03b9ef54da89503641cfb9e8bd6f37c85c808110307d62b3 not found: ID does not exist" Nov 23 08:46:07 crc kubenswrapper[4681]: I1123 08:46:07.293162 4681 scope.go:117] "RemoveContainer" containerID="66f567e90a17a1188c7bcc73c8760272862d0ba2511c08aa9ee33ca7e620ee51" Nov 23 08:46:07 crc kubenswrapper[4681]: E1123 08:46:07.293608 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66f567e90a17a1188c7bcc73c8760272862d0ba2511c08aa9ee33ca7e620ee51\": container with ID starting with 66f567e90a17a1188c7bcc73c8760272862d0ba2511c08aa9ee33ca7e620ee51 not found: ID does not exist" containerID="66f567e90a17a1188c7bcc73c8760272862d0ba2511c08aa9ee33ca7e620ee51" Nov 23 08:46:07 crc kubenswrapper[4681]: I1123 08:46:07.293627 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66f567e90a17a1188c7bcc73c8760272862d0ba2511c08aa9ee33ca7e620ee51"} err="failed to get container status \"66f567e90a17a1188c7bcc73c8760272862d0ba2511c08aa9ee33ca7e620ee51\": rpc error: code = NotFound desc = could not find container \"66f567e90a17a1188c7bcc73c8760272862d0ba2511c08aa9ee33ca7e620ee51\": container with ID starting with 66f567e90a17a1188c7bcc73c8760272862d0ba2511c08aa9ee33ca7e620ee51 not found: ID does not exist" Nov 23 08:46:07 crc kubenswrapper[4681]: I1123 
08:46:07.293639 4681 scope.go:117] "RemoveContainer" containerID="bcc9254cdcfdcfdfcf9dd4f1fc714891eb9de7ca6744e1f6aac0e226f8772440" Nov 23 08:46:07 crc kubenswrapper[4681]: E1123 08:46:07.298721 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcc9254cdcfdcfdfcf9dd4f1fc714891eb9de7ca6744e1f6aac0e226f8772440\": container with ID starting with bcc9254cdcfdcfdfcf9dd4f1fc714891eb9de7ca6744e1f6aac0e226f8772440 not found: ID does not exist" containerID="bcc9254cdcfdcfdfcf9dd4f1fc714891eb9de7ca6744e1f6aac0e226f8772440" Nov 23 08:46:07 crc kubenswrapper[4681]: I1123 08:46:07.298746 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcc9254cdcfdcfdfcf9dd4f1fc714891eb9de7ca6744e1f6aac0e226f8772440"} err="failed to get container status \"bcc9254cdcfdcfdfcf9dd4f1fc714891eb9de7ca6744e1f6aac0e226f8772440\": rpc error: code = NotFound desc = could not find container \"bcc9254cdcfdcfdfcf9dd4f1fc714891eb9de7ca6744e1f6aac0e226f8772440\": container with ID starting with bcc9254cdcfdcfdfcf9dd4f1fc714891eb9de7ca6744e1f6aac0e226f8772440 not found: ID does not exist" Nov 23 08:46:09 crc kubenswrapper[4681]: I1123 08:46:09.252099 4681 scope.go:117] "RemoveContainer" containerID="d7798051d6d66026d4ff58045065aef57e15174285da5402190d39dcbab9b6d1" Nov 23 08:46:09 crc kubenswrapper[4681]: E1123 08:46:09.253606 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:46:20 crc kubenswrapper[4681]: I1123 08:46:20.252453 4681 scope.go:117] "RemoveContainer" containerID="d7798051d6d66026d4ff58045065aef57e15174285da5402190d39dcbab9b6d1" Nov 23 08:46:20 crc kubenswrapper[4681]: E1123 08:46:20.253335 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:46:34 crc kubenswrapper[4681]: I1123 08:46:34.251886 4681 scope.go:117] "RemoveContainer" containerID="d7798051d6d66026d4ff58045065aef57e15174285da5402190d39dcbab9b6d1" Nov 23 08:46:34 crc kubenswrapper[4681]: E1123 08:46:34.252790 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:46:45 crc kubenswrapper[4681]: I1123 08:46:45.251835 4681 scope.go:117] "RemoveContainer" containerID="d7798051d6d66026d4ff58045065aef57e15174285da5402190d39dcbab9b6d1" Nov 23 08:46:45 crc kubenswrapper[4681]: I1123 08:46:45.506341 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerStarted","Data":"9022dca8b7d798418088475832d237ead0878d643152726c228ab3b1d24e1197"} Nov 23 08:46:55 crc kubenswrapper[4681]: I1123 08:46:55.719280 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7vhd5"] Nov 23 08:46:55 crc kubenswrapper[4681]: E1123 08:46:55.720514 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf094f4c-8ae3-4144-b438-f01f94fe83af" containerName="extract-content" Nov 23 08:46:55 crc kubenswrapper[4681]: I1123 08:46:55.720532 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf094f4c-8ae3-4144-b438-f01f94fe83af" containerName="extract-content" Nov 23 08:46:55 crc kubenswrapper[4681]: E1123 08:46:55.720569 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf094f4c-8ae3-4144-b438-f01f94fe83af" containerName="extract-utilities" Nov 23 08:46:55 crc kubenswrapper[4681]: I1123 08:46:55.720575 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf094f4c-8ae3-4144-b438-f01f94fe83af" containerName="extract-utilities" Nov 23 08:46:55 crc kubenswrapper[4681]: E1123 08:46:55.720615 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf094f4c-8ae3-4144-b438-f01f94fe83af" containerName="registry-server" Nov 23 08:46:55 crc kubenswrapper[4681]: I1123 08:46:55.720624 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf094f4c-8ae3-4144-b438-f01f94fe83af" containerName="registry-server" Nov 23 08:46:55 crc kubenswrapper[4681]: I1123 08:46:55.720871 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf094f4c-8ae3-4144-b438-f01f94fe83af" containerName="registry-server" Nov 23 08:46:55 crc kubenswrapper[4681]: I1123 08:46:55.722548 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7vhd5" Nov 23 08:46:55 crc kubenswrapper[4681]: I1123 08:46:55.725540 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7vhd5"] Nov 23 08:46:55 crc kubenswrapper[4681]: I1123 08:46:55.825543 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx2td\" (UniqueName: \"kubernetes.io/projected/520182bd-3990-44d8-8aea-6c2050411182-kube-api-access-wx2td\") pod \"redhat-operators-7vhd5\" (UID: \"520182bd-3990-44d8-8aea-6c2050411182\") " pod="openshift-marketplace/redhat-operators-7vhd5" Nov 23 08:46:55 crc kubenswrapper[4681]: I1123 08:46:55.825853 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/520182bd-3990-44d8-8aea-6c2050411182-utilities\") pod \"redhat-operators-7vhd5\" (UID: \"520182bd-3990-44d8-8aea-6c2050411182\") " pod="openshift-marketplace/redhat-operators-7vhd5" Nov 23 08:46:55 crc kubenswrapper[4681]: I1123 08:46:55.826006 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/520182bd-3990-44d8-8aea-6c2050411182-catalog-content\") pod \"redhat-operators-7vhd5\" (UID: \"520182bd-3990-44d8-8aea-6c2050411182\") " pod="openshift-marketplace/redhat-operators-7vhd5" Nov 23 08:46:55 crc kubenswrapper[4681]: I1123 08:46:55.928938 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx2td\" (UniqueName: \"kubernetes.io/projected/520182bd-3990-44d8-8aea-6c2050411182-kube-api-access-wx2td\") pod \"redhat-operators-7vhd5\" (UID: \"520182bd-3990-44d8-8aea-6c2050411182\") " pod="openshift-marketplace/redhat-operators-7vhd5" Nov 23 08:46:55 crc kubenswrapper[4681]: I1123 08:46:55.929007 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/520182bd-3990-44d8-8aea-6c2050411182-utilities\") pod \"redhat-operators-7vhd5\" (UID: \"520182bd-3990-44d8-8aea-6c2050411182\") " pod="openshift-marketplace/redhat-operators-7vhd5" Nov 23 08:46:55 crc kubenswrapper[4681]: I1123 08:46:55.929037 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/520182bd-3990-44d8-8aea-6c2050411182-catalog-content\") pod \"redhat-operators-7vhd5\" (UID: \"520182bd-3990-44d8-8aea-6c2050411182\") " pod="openshift-marketplace/redhat-operators-7vhd5" Nov 23 08:46:55 crc kubenswrapper[4681]: I1123 08:46:55.929583 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/520182bd-3990-44d8-8aea-6c2050411182-catalog-content\") pod \"redhat-operators-7vhd5\" (UID: \"520182bd-3990-44d8-8aea-6c2050411182\") " pod="openshift-marketplace/redhat-operators-7vhd5" Nov 23 08:46:55 crc kubenswrapper[4681]: I1123 08:46:55.929645 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/520182bd-3990-44d8-8aea-6c2050411182-utilities\") pod \"redhat-operators-7vhd5\" (UID: \"520182bd-3990-44d8-8aea-6c2050411182\") " pod="openshift-marketplace/redhat-operators-7vhd5" Nov 23 08:46:55 crc kubenswrapper[4681]: I1123 08:46:55.948940 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wx2td\" (UniqueName: \"kubernetes.io/projected/520182bd-3990-44d8-8aea-6c2050411182-kube-api-access-wx2td\") pod \"redhat-operators-7vhd5\" (UID: \"520182bd-3990-44d8-8aea-6c2050411182\") " pod="openshift-marketplace/redhat-operators-7vhd5" Nov 23 08:46:56 crc kubenswrapper[4681]: I1123 08:46:56.045704 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7vhd5" Nov 23 08:46:56 crc kubenswrapper[4681]: I1123 08:46:56.584919 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7vhd5"] Nov 23 08:46:57 crc kubenswrapper[4681]: I1123 08:46:57.607280 4681 generic.go:334] "Generic (PLEG): container finished" podID="520182bd-3990-44d8-8aea-6c2050411182" containerID="1bf81a98caa15458721fcd793e1c38e0c7e837d388ef27ebb1d008596df3e168" exitCode=0 Nov 23 08:46:57 crc kubenswrapper[4681]: I1123 08:46:57.608001 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7vhd5" event={"ID":"520182bd-3990-44d8-8aea-6c2050411182","Type":"ContainerDied","Data":"1bf81a98caa15458721fcd793e1c38e0c7e837d388ef27ebb1d008596df3e168"} Nov 23 08:46:57 crc kubenswrapper[4681]: I1123 08:46:57.608063 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7vhd5" event={"ID":"520182bd-3990-44d8-8aea-6c2050411182","Type":"ContainerStarted","Data":"d18f436b4884e873055fbbc075f2a4bcbff6f63c0c1905835ba275ac1340beef"} Nov 23 08:46:59 crc kubenswrapper[4681]: I1123 08:46:59.632179 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7vhd5" event={"ID":"520182bd-3990-44d8-8aea-6c2050411182","Type":"ContainerStarted","Data":"873637c2a7b0719c419048b6fde9d4894fc91ad7818281b98782e4009649a2f9"} Nov 23 08:47:01 crc kubenswrapper[4681]: I1123 08:47:01.654006 4681 generic.go:334] "Generic (PLEG): container finished" podID="520182bd-3990-44d8-8aea-6c2050411182" containerID="873637c2a7b0719c419048b6fde9d4894fc91ad7818281b98782e4009649a2f9" exitCode=0 Nov 23 08:47:01 crc kubenswrapper[4681]: I1123 08:47:01.654569 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7vhd5" event={"ID":"520182bd-3990-44d8-8aea-6c2050411182","Type":"ContainerDied","Data":"873637c2a7b0719c419048b6fde9d4894fc91ad7818281b98782e4009649a2f9"} Nov 23 08:47:02 crc kubenswrapper[4681]: I1123 08:47:02.667165 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7vhd5" event={"ID":"520182bd-3990-44d8-8aea-6c2050411182","Type":"ContainerStarted","Data":"dad66fd41dd22b3de4178b4a9199f167dbfcc8c796383c2709b61d4939e60f2a"} Nov 23 08:47:02 crc kubenswrapper[4681]: I1123 08:47:02.690325 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7vhd5" podStartSLOduration=3.082993498 podStartE2EDuration="7.690310874s" podCreationTimestamp="2025-11-23 08:46:55 +0000 UTC" firstStartedPulling="2025-11-23 08:46:57.611743121 +0000 UTC m=+7354.681252358" lastFinishedPulling="2025-11-23 08:47:02.219060497 +0000 UTC m=+7359.288569734" observedRunningTime="2025-11-23 08:47:02.685193864 +0000 UTC m=+7359.754703101" watchObservedRunningTime="2025-11-23 08:47:02.690310874 +0000 UTC m=+7359.759820101" Nov 23 08:47:06 crc kubenswrapper[4681]: I1123 08:47:06.046529 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7vhd5" Nov 23 
08:47:06 crc kubenswrapper[4681]: I1123 08:47:06.047254 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7vhd5" Nov 23 08:47:07 crc kubenswrapper[4681]: I1123 08:47:07.088104 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7vhd5" podUID="520182bd-3990-44d8-8aea-6c2050411182" containerName="registry-server" probeResult="failure" output=< Nov 23 08:47:07 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Nov 23 08:47:07 crc kubenswrapper[4681]: > Nov 23 08:47:16 crc kubenswrapper[4681]: I1123 08:47:16.086329 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7vhd5" Nov 23 08:47:16 crc kubenswrapper[4681]: I1123 08:47:16.129048 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7vhd5" Nov 23 08:47:16 crc kubenswrapper[4681]: I1123 08:47:16.327653 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7vhd5"] Nov 23 08:47:17 crc kubenswrapper[4681]: I1123 08:47:17.801709 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7vhd5" podUID="520182bd-3990-44d8-8aea-6c2050411182" containerName="registry-server" containerID="cri-o://dad66fd41dd22b3de4178b4a9199f167dbfcc8c796383c2709b61d4939e60f2a" gracePeriod=2 Nov 23 08:47:18 crc kubenswrapper[4681]: I1123 08:47:18.392152 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7vhd5" Nov 23 08:47:18 crc kubenswrapper[4681]: I1123 08:47:18.401102 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/520182bd-3990-44d8-8aea-6c2050411182-utilities\") pod \"520182bd-3990-44d8-8aea-6c2050411182\" (UID: \"520182bd-3990-44d8-8aea-6c2050411182\") " Nov 23 08:47:18 crc kubenswrapper[4681]: I1123 08:47:18.401254 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/520182bd-3990-44d8-8aea-6c2050411182-catalog-content\") pod \"520182bd-3990-44d8-8aea-6c2050411182\" (UID: \"520182bd-3990-44d8-8aea-6c2050411182\") " Nov 23 08:47:18 crc kubenswrapper[4681]: I1123 08:47:18.401453 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wx2td\" (UniqueName: \"kubernetes.io/projected/520182bd-3990-44d8-8aea-6c2050411182-kube-api-access-wx2td\") pod \"520182bd-3990-44d8-8aea-6c2050411182\" (UID: \"520182bd-3990-44d8-8aea-6c2050411182\") " Nov 23 08:47:18 crc kubenswrapper[4681]: I1123 08:47:18.402523 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/520182bd-3990-44d8-8aea-6c2050411182-utilities" (OuterVolumeSpecName: "utilities") pod "520182bd-3990-44d8-8aea-6c2050411182" (UID: "520182bd-3990-44d8-8aea-6c2050411182"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:47:18 crc kubenswrapper[4681]: I1123 08:47:18.411579 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/520182bd-3990-44d8-8aea-6c2050411182-kube-api-access-wx2td" (OuterVolumeSpecName: "kube-api-access-wx2td") pod "520182bd-3990-44d8-8aea-6c2050411182" (UID: "520182bd-3990-44d8-8aea-6c2050411182"). InnerVolumeSpecName "kube-api-access-wx2td". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:47:18 crc kubenswrapper[4681]: I1123 08:47:18.482276 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/520182bd-3990-44d8-8aea-6c2050411182-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "520182bd-3990-44d8-8aea-6c2050411182" (UID: "520182bd-3990-44d8-8aea-6c2050411182"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:47:18 crc kubenswrapper[4681]: I1123 08:47:18.507695 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/520182bd-3990-44d8-8aea-6c2050411182-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:18 crc kubenswrapper[4681]: I1123 08:47:18.507724 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wx2td\" (UniqueName: \"kubernetes.io/projected/520182bd-3990-44d8-8aea-6c2050411182-kube-api-access-wx2td\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:18 crc kubenswrapper[4681]: I1123 08:47:18.507737 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/520182bd-3990-44d8-8aea-6c2050411182-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:18 crc kubenswrapper[4681]: I1123 08:47:18.812193 4681 generic.go:334] "Generic (PLEG): container finished" podID="520182bd-3990-44d8-8aea-6c2050411182" containerID="dad66fd41dd22b3de4178b4a9199f167dbfcc8c796383c2709b61d4939e60f2a" exitCode=0 Nov 23 08:47:18 crc kubenswrapper[4681]: I1123 08:47:18.812236 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7vhd5" event={"ID":"520182bd-3990-44d8-8aea-6c2050411182","Type":"ContainerDied","Data":"dad66fd41dd22b3de4178b4a9199f167dbfcc8c796383c2709b61d4939e60f2a"} Nov 23 08:47:18 crc kubenswrapper[4681]: I1123 08:47:18.812265 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7vhd5" event={"ID":"520182bd-3990-44d8-8aea-6c2050411182","Type":"ContainerDied","Data":"d18f436b4884e873055fbbc075f2a4bcbff6f63c0c1905835ba275ac1340beef"} Nov 23 08:47:18 crc kubenswrapper[4681]: I1123 08:47:18.812280 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7vhd5" Nov 23 08:47:18 crc kubenswrapper[4681]: I1123 08:47:18.812655 4681 scope.go:117] "RemoveContainer" containerID="dad66fd41dd22b3de4178b4a9199f167dbfcc8c796383c2709b61d4939e60f2a" Nov 23 08:47:18 crc kubenswrapper[4681]: I1123 08:47:18.836791 4681 scope.go:117] "RemoveContainer" containerID="873637c2a7b0719c419048b6fde9d4894fc91ad7818281b98782e4009649a2f9" Nov 23 08:47:18 crc kubenswrapper[4681]: I1123 08:47:18.839197 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7vhd5"] Nov 23 08:47:18 crc kubenswrapper[4681]: I1123 08:47:18.845658 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7vhd5"] Nov 23 08:47:18 crc kubenswrapper[4681]: I1123 08:47:18.857234 4681 scope.go:117] "RemoveContainer" containerID="1bf81a98caa15458721fcd793e1c38e0c7e837d388ef27ebb1d008596df3e168" Nov 23 08:47:18 crc kubenswrapper[4681]: I1123 08:47:18.903310 4681 scope.go:117] "RemoveContainer" containerID="dad66fd41dd22b3de4178b4a9199f167dbfcc8c796383c2709b61d4939e60f2a" Nov 23 08:47:18 crc kubenswrapper[4681]: E1123 08:47:18.903903 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dad66fd41dd22b3de4178b4a9199f167dbfcc8c796383c2709b61d4939e60f2a\": container with ID starting with dad66fd41dd22b3de4178b4a9199f167dbfcc8c796383c2709b61d4939e60f2a not found: ID does not exist" containerID="dad66fd41dd22b3de4178b4a9199f167dbfcc8c796383c2709b61d4939e60f2a" Nov 23 08:47:18 crc kubenswrapper[4681]: I1123 08:47:18.903942 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dad66fd41dd22b3de4178b4a9199f167dbfcc8c796383c2709b61d4939e60f2a"} err="failed to get container status \"dad66fd41dd22b3de4178b4a9199f167dbfcc8c796383c2709b61d4939e60f2a\": rpc error: code = NotFound desc = could not find container \"dad66fd41dd22b3de4178b4a9199f167dbfcc8c796383c2709b61d4939e60f2a\": container with ID starting with dad66fd41dd22b3de4178b4a9199f167dbfcc8c796383c2709b61d4939e60f2a not found: ID does not exist" Nov 23 08:47:18 crc kubenswrapper[4681]: I1123 08:47:18.903965 4681 scope.go:117] "RemoveContainer" containerID="873637c2a7b0719c419048b6fde9d4894fc91ad7818281b98782e4009649a2f9" Nov 23 08:47:18 crc kubenswrapper[4681]: E1123 08:47:18.904289 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"873637c2a7b0719c419048b6fde9d4894fc91ad7818281b98782e4009649a2f9\": container with ID starting with 873637c2a7b0719c419048b6fde9d4894fc91ad7818281b98782e4009649a2f9 not found: ID does not exist" containerID="873637c2a7b0719c419048b6fde9d4894fc91ad7818281b98782e4009649a2f9" Nov 23 08:47:18 crc kubenswrapper[4681]: I1123 08:47:18.904317 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"873637c2a7b0719c419048b6fde9d4894fc91ad7818281b98782e4009649a2f9"} err="failed to get container status \"873637c2a7b0719c419048b6fde9d4894fc91ad7818281b98782e4009649a2f9\": rpc error: code = NotFound desc = could not find container \"873637c2a7b0719c419048b6fde9d4894fc91ad7818281b98782e4009649a2f9\": container with ID starting with 873637c2a7b0719c419048b6fde9d4894fc91ad7818281b98782e4009649a2f9 not found: ID does not exist" Nov 23 08:47:18 crc kubenswrapper[4681]: I1123 08:47:18.904337 4681 scope.go:117] "RemoveContainer" 
containerID="1bf81a98caa15458721fcd793e1c38e0c7e837d388ef27ebb1d008596df3e168" Nov 23 08:47:18 crc kubenswrapper[4681]: E1123 08:47:18.904585 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bf81a98caa15458721fcd793e1c38e0c7e837d388ef27ebb1d008596df3e168\": container with ID starting with 1bf81a98caa15458721fcd793e1c38e0c7e837d388ef27ebb1d008596df3e168 not found: ID does not exist" containerID="1bf81a98caa15458721fcd793e1c38e0c7e837d388ef27ebb1d008596df3e168" Nov 23 08:47:18 crc kubenswrapper[4681]: I1123 08:47:18.904609 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bf81a98caa15458721fcd793e1c38e0c7e837d388ef27ebb1d008596df3e168"} err="failed to get container status \"1bf81a98caa15458721fcd793e1c38e0c7e837d388ef27ebb1d008596df3e168\": rpc error: code = NotFound desc = could not find container \"1bf81a98caa15458721fcd793e1c38e0c7e837d388ef27ebb1d008596df3e168\": container with ID starting with 1bf81a98caa15458721fcd793e1c38e0c7e837d388ef27ebb1d008596df3e168 not found: ID does not exist" Nov 23 08:47:19 crc kubenswrapper[4681]: I1123 08:47:19.262527 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="520182bd-3990-44d8-8aea-6c2050411182" path="/var/lib/kubelet/pods/520182bd-3990-44d8-8aea-6c2050411182/volumes" Nov 23 08:48:04 crc kubenswrapper[4681]: I1123 08:48:04.014276 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bmbpg"] Nov 23 08:48:04 crc kubenswrapper[4681]: E1123 08:48:04.015323 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="520182bd-3990-44d8-8aea-6c2050411182" containerName="registry-server" Nov 23 08:48:04 crc kubenswrapper[4681]: I1123 08:48:04.015335 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="520182bd-3990-44d8-8aea-6c2050411182" containerName="registry-server" Nov 23 08:48:04 crc kubenswrapper[4681]: E1123 08:48:04.015349 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="520182bd-3990-44d8-8aea-6c2050411182" containerName="extract-utilities" Nov 23 08:48:04 crc kubenswrapper[4681]: I1123 08:48:04.015354 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="520182bd-3990-44d8-8aea-6c2050411182" containerName="extract-utilities" Nov 23 08:48:04 crc kubenswrapper[4681]: E1123 08:48:04.015375 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="520182bd-3990-44d8-8aea-6c2050411182" containerName="extract-content" Nov 23 08:48:04 crc kubenswrapper[4681]: I1123 08:48:04.015381 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="520182bd-3990-44d8-8aea-6c2050411182" containerName="extract-content" Nov 23 08:48:04 crc kubenswrapper[4681]: I1123 08:48:04.015568 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="520182bd-3990-44d8-8aea-6c2050411182" containerName="registry-server" Nov 23 08:48:04 crc kubenswrapper[4681]: I1123 08:48:04.016862 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bmbpg" Nov 23 08:48:04 crc kubenswrapper[4681]: I1123 08:48:04.028071 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bmbpg"] Nov 23 08:48:04 crc kubenswrapper[4681]: I1123 08:48:04.117891 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hmz6\" (UniqueName: \"kubernetes.io/projected/c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f-kube-api-access-2hmz6\") pod \"certified-operators-bmbpg\" (UID: \"c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f\") " pod="openshift-marketplace/certified-operators-bmbpg" Nov 23 08:48:04 crc kubenswrapper[4681]: I1123 08:48:04.117941 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f-catalog-content\") pod \"certified-operators-bmbpg\" (UID: \"c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f\") " pod="openshift-marketplace/certified-operators-bmbpg" Nov 23 08:48:04 crc kubenswrapper[4681]: I1123 08:48:04.117992 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f-utilities\") pod \"certified-operators-bmbpg\" (UID: \"c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f\") " pod="openshift-marketplace/certified-operators-bmbpg" Nov 23 08:48:04 crc kubenswrapper[4681]: I1123 08:48:04.221330 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hmz6\" (UniqueName: \"kubernetes.io/projected/c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f-kube-api-access-2hmz6\") pod \"certified-operators-bmbpg\" (UID: \"c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f\") " pod="openshift-marketplace/certified-operators-bmbpg" Nov 23 08:48:04 crc kubenswrapper[4681]: I1123 08:48:04.221747 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f-catalog-content\") pod \"certified-operators-bmbpg\" (UID: \"c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f\") " pod="openshift-marketplace/certified-operators-bmbpg" Nov 23 08:48:04 crc kubenswrapper[4681]: I1123 08:48:04.222230 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f-catalog-content\") pod \"certified-operators-bmbpg\" (UID: \"c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f\") " pod="openshift-marketplace/certified-operators-bmbpg" Nov 23 08:48:04 crc kubenswrapper[4681]: I1123 08:48:04.223548 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f-utilities\") pod \"certified-operators-bmbpg\" (UID: \"c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f\") " pod="openshift-marketplace/certified-operators-bmbpg" Nov 23 08:48:04 crc kubenswrapper[4681]: I1123 08:48:04.223885 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f-utilities\") pod \"certified-operators-bmbpg\" (UID: \"c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f\") " pod="openshift-marketplace/certified-operators-bmbpg" Nov 23 08:48:04 crc kubenswrapper[4681]: I1123 08:48:04.241785 4681 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2hmz6\" (UniqueName: \"kubernetes.io/projected/c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f-kube-api-access-2hmz6\") pod \"certified-operators-bmbpg\" (UID: \"c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f\") " pod="openshift-marketplace/certified-operators-bmbpg" Nov 23 08:48:04 crc kubenswrapper[4681]: I1123 08:48:04.332228 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bmbpg" Nov 23 08:48:04 crc kubenswrapper[4681]: I1123 08:48:04.809719 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bmbpg"] Nov 23 08:48:05 crc kubenswrapper[4681]: I1123 08:48:05.208750 4681 generic.go:334] "Generic (PLEG): container finished" podID="c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f" containerID="32dad796976a585f52823ba74038933bf344e7a6fe6f32d1ca08919cba5b0e86" exitCode=0 Nov 23 08:48:05 crc kubenswrapper[4681]: I1123 08:48:05.208832 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmbpg" event={"ID":"c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f","Type":"ContainerDied","Data":"32dad796976a585f52823ba74038933bf344e7a6fe6f32d1ca08919cba5b0e86"} Nov 23 08:48:05 crc kubenswrapper[4681]: I1123 08:48:05.208998 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmbpg" event={"ID":"c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f","Type":"ContainerStarted","Data":"e8c43aad4f180f0120d36b1430560cf6eb64aa8bcff3023fd36f6e842f000b98"} Nov 23 08:48:06 crc kubenswrapper[4681]: I1123 08:48:06.224138 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmbpg" event={"ID":"c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f","Type":"ContainerStarted","Data":"e02d1b5a8ee59e5947031288a845f96f5b5e1cdc00329e53824dd9a22484cacf"} Nov 23 08:48:07 crc kubenswrapper[4681]: I1123 08:48:07.238281 4681 generic.go:334] "Generic (PLEG): container finished" podID="c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f" containerID="e02d1b5a8ee59e5947031288a845f96f5b5e1cdc00329e53824dd9a22484cacf" exitCode=0 Nov 23 08:48:07 crc kubenswrapper[4681]: I1123 08:48:07.238610 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmbpg" event={"ID":"c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f","Type":"ContainerDied","Data":"e02d1b5a8ee59e5947031288a845f96f5b5e1cdc00329e53824dd9a22484cacf"} Nov 23 08:48:08 crc kubenswrapper[4681]: I1123 08:48:08.254310 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmbpg" event={"ID":"c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f","Type":"ContainerStarted","Data":"9f248c74537f375428f1603fae7afcebc5e1a9967cc4641b7a3b77a459ffd7f9"} Nov 23 08:48:08 crc kubenswrapper[4681]: I1123 08:48:08.282614 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bmbpg" podStartSLOduration=2.782287727 podStartE2EDuration="5.282593426s" podCreationTimestamp="2025-11-23 08:48:03 +0000 UTC" firstStartedPulling="2025-11-23 08:48:05.210773156 +0000 UTC m=+7422.280282393" lastFinishedPulling="2025-11-23 08:48:07.711078855 +0000 UTC m=+7424.780588092" observedRunningTime="2025-11-23 08:48:08.268716102 +0000 UTC m=+7425.338225339" watchObservedRunningTime="2025-11-23 08:48:08.282593426 +0000 UTC m=+7425.352102663" Nov 23 08:48:14 crc kubenswrapper[4681]: I1123 08:48:14.333050 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-bmbpg" Nov 23 08:48:14 crc kubenswrapper[4681]: I1123 08:48:14.333663 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bmbpg" Nov 23 08:48:14 crc kubenswrapper[4681]: I1123 08:48:14.370337 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bmbpg" Nov 23 08:48:15 crc kubenswrapper[4681]: I1123 08:48:15.357792 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bmbpg" Nov 23 08:48:15 crc kubenswrapper[4681]: I1123 08:48:15.397349 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bmbpg"] Nov 23 08:48:17 crc kubenswrapper[4681]: I1123 08:48:17.335674 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bmbpg" podUID="c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f" containerName="registry-server" containerID="cri-o://9f248c74537f375428f1603fae7afcebc5e1a9967cc4641b7a3b77a459ffd7f9" gracePeriod=2 Nov 23 08:48:17 crc kubenswrapper[4681]: I1123 08:48:17.841894 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bmbpg" Nov 23 08:48:18 crc kubenswrapper[4681]: I1123 08:48:18.013777 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hmz6\" (UniqueName: \"kubernetes.io/projected/c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f-kube-api-access-2hmz6\") pod \"c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f\" (UID: \"c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f\") " Nov 23 08:48:18 crc kubenswrapper[4681]: I1123 08:48:18.014009 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f-utilities\") pod \"c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f\" (UID: \"c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f\") " Nov 23 08:48:18 crc kubenswrapper[4681]: I1123 08:48:18.014040 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f-catalog-content\") pod \"c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f\" (UID: \"c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f\") " Nov 23 08:48:18 crc kubenswrapper[4681]: I1123 08:48:18.014771 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f-utilities" (OuterVolumeSpecName: "utilities") pod "c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f" (UID: "c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:48:18 crc kubenswrapper[4681]: I1123 08:48:18.021656 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f-kube-api-access-2hmz6" (OuterVolumeSpecName: "kube-api-access-2hmz6") pod "c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f" (UID: "c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f"). InnerVolumeSpecName "kube-api-access-2hmz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:48:18 crc kubenswrapper[4681]: I1123 08:48:18.062018 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f" (UID: "c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:48:18 crc kubenswrapper[4681]: I1123 08:48:18.116740 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:48:18 crc kubenswrapper[4681]: I1123 08:48:18.116775 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:48:18 crc kubenswrapper[4681]: I1123 08:48:18.116793 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hmz6\" (UniqueName: \"kubernetes.io/projected/c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f-kube-api-access-2hmz6\") on node \"crc\" DevicePath \"\"" Nov 23 08:48:18 crc kubenswrapper[4681]: I1123 08:48:18.346385 4681 generic.go:334] "Generic (PLEG): container finished" podID="c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f" containerID="9f248c74537f375428f1603fae7afcebc5e1a9967cc4641b7a3b77a459ffd7f9" exitCode=0 Nov 23 08:48:18 crc kubenswrapper[4681]: I1123 08:48:18.346436 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmbpg" event={"ID":"c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f","Type":"ContainerDied","Data":"9f248c74537f375428f1603fae7afcebc5e1a9967cc4641b7a3b77a459ffd7f9"} Nov 23 08:48:18 crc kubenswrapper[4681]: I1123 08:48:18.346488 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bmbpg" Nov 23 08:48:18 crc kubenswrapper[4681]: I1123 08:48:18.346507 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmbpg" event={"ID":"c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f","Type":"ContainerDied","Data":"e8c43aad4f180f0120d36b1430560cf6eb64aa8bcff3023fd36f6e842f000b98"} Nov 23 08:48:18 crc kubenswrapper[4681]: I1123 08:48:18.346529 4681 scope.go:117] "RemoveContainer" containerID="9f248c74537f375428f1603fae7afcebc5e1a9967cc4641b7a3b77a459ffd7f9" Nov 23 08:48:18 crc kubenswrapper[4681]: I1123 08:48:18.365745 4681 scope.go:117] "RemoveContainer" containerID="e02d1b5a8ee59e5947031288a845f96f5b5e1cdc00329e53824dd9a22484cacf" Nov 23 08:48:18 crc kubenswrapper[4681]: I1123 08:48:18.392788 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bmbpg"] Nov 23 08:48:18 crc kubenswrapper[4681]: I1123 08:48:18.402443 4681 scope.go:117] "RemoveContainer" containerID="32dad796976a585f52823ba74038933bf344e7a6fe6f32d1ca08919cba5b0e86" Nov 23 08:48:18 crc kubenswrapper[4681]: I1123 08:48:18.403107 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bmbpg"] Nov 23 08:48:18 crc kubenswrapper[4681]: I1123 08:48:18.422826 4681 scope.go:117] "RemoveContainer" containerID="9f248c74537f375428f1603fae7afcebc5e1a9967cc4641b7a3b77a459ffd7f9" Nov 23 08:48:18 crc kubenswrapper[4681]: E1123 08:48:18.423121 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f248c74537f375428f1603fae7afcebc5e1a9967cc4641b7a3b77a459ffd7f9\": container with ID starting with 9f248c74537f375428f1603fae7afcebc5e1a9967cc4641b7a3b77a459ffd7f9 not found: ID does not exist" containerID="9f248c74537f375428f1603fae7afcebc5e1a9967cc4641b7a3b77a459ffd7f9" Nov 23 08:48:18 crc kubenswrapper[4681]: I1123 08:48:18.423160 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f248c74537f375428f1603fae7afcebc5e1a9967cc4641b7a3b77a459ffd7f9"} err="failed to get container status \"9f248c74537f375428f1603fae7afcebc5e1a9967cc4641b7a3b77a459ffd7f9\": rpc error: code = NotFound desc = could not find container \"9f248c74537f375428f1603fae7afcebc5e1a9967cc4641b7a3b77a459ffd7f9\": container with ID starting with 9f248c74537f375428f1603fae7afcebc5e1a9967cc4641b7a3b77a459ffd7f9 not found: ID does not exist" Nov 23 08:48:18 crc kubenswrapper[4681]: I1123 08:48:18.423188 4681 scope.go:117] "RemoveContainer" containerID="e02d1b5a8ee59e5947031288a845f96f5b5e1cdc00329e53824dd9a22484cacf" Nov 23 08:48:18 crc kubenswrapper[4681]: E1123 08:48:18.423414 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e02d1b5a8ee59e5947031288a845f96f5b5e1cdc00329e53824dd9a22484cacf\": container with ID starting with e02d1b5a8ee59e5947031288a845f96f5b5e1cdc00329e53824dd9a22484cacf not found: ID does not exist" containerID="e02d1b5a8ee59e5947031288a845f96f5b5e1cdc00329e53824dd9a22484cacf" Nov 23 08:48:18 crc kubenswrapper[4681]: I1123 08:48:18.423446 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e02d1b5a8ee59e5947031288a845f96f5b5e1cdc00329e53824dd9a22484cacf"} err="failed to get container status \"e02d1b5a8ee59e5947031288a845f96f5b5e1cdc00329e53824dd9a22484cacf\": rpc error: code = NotFound desc = could not find 
container \"e02d1b5a8ee59e5947031288a845f96f5b5e1cdc00329e53824dd9a22484cacf\": container with ID starting with e02d1b5a8ee59e5947031288a845f96f5b5e1cdc00329e53824dd9a22484cacf not found: ID does not exist" Nov 23 08:48:18 crc kubenswrapper[4681]: I1123 08:48:18.423489 4681 scope.go:117] "RemoveContainer" containerID="32dad796976a585f52823ba74038933bf344e7a6fe6f32d1ca08919cba5b0e86" Nov 23 08:48:18 crc kubenswrapper[4681]: E1123 08:48:18.423761 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32dad796976a585f52823ba74038933bf344e7a6fe6f32d1ca08919cba5b0e86\": container with ID starting with 32dad796976a585f52823ba74038933bf344e7a6fe6f32d1ca08919cba5b0e86 not found: ID does not exist" containerID="32dad796976a585f52823ba74038933bf344e7a6fe6f32d1ca08919cba5b0e86" Nov 23 08:48:18 crc kubenswrapper[4681]: I1123 08:48:18.423793 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32dad796976a585f52823ba74038933bf344e7a6fe6f32d1ca08919cba5b0e86"} err="failed to get container status \"32dad796976a585f52823ba74038933bf344e7a6fe6f32d1ca08919cba5b0e86\": rpc error: code = NotFound desc = could not find container \"32dad796976a585f52823ba74038933bf344e7a6fe6f32d1ca08919cba5b0e86\": container with ID starting with 32dad796976a585f52823ba74038933bf344e7a6fe6f32d1ca08919cba5b0e86 not found: ID does not exist" Nov 23 08:48:19 crc kubenswrapper[4681]: I1123 08:48:19.262029 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f" path="/var/lib/kubelet/pods/c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f/volumes" Nov 23 08:49:12 crc kubenswrapper[4681]: I1123 08:49:12.295247 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:49:12 crc kubenswrapper[4681]: I1123 08:49:12.296147 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:49:42 crc kubenswrapper[4681]: I1123 08:49:42.295587 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:49:42 crc kubenswrapper[4681]: I1123 08:49:42.295998 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:50:12 crc kubenswrapper[4681]: I1123 08:50:12.296500 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 
Nov 23 08:50:12 crc kubenswrapper[4681]: I1123 08:50:12.296882 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 08:50:12 crc kubenswrapper[4681]: I1123 08:50:12.296915 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt"
Nov 23 08:50:12 crc kubenswrapper[4681]: I1123 08:50:12.297312 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9022dca8b7d798418088475832d237ead0878d643152726c228ab3b1d24e1197"} pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 23 08:50:12 crc kubenswrapper[4681]: I1123 08:50:12.297373 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" containerID="cri-o://9022dca8b7d798418088475832d237ead0878d643152726c228ab3b1d24e1197" gracePeriod=600
Nov 23 08:50:13 crc kubenswrapper[4681]: I1123 08:50:13.293424 4681 generic.go:334] "Generic (PLEG): container finished" podID="539dc58c-e752-43c8-bdef-af87528b76f3" containerID="9022dca8b7d798418088475832d237ead0878d643152726c228ab3b1d24e1197" exitCode=0
Nov 23 08:50:13 crc kubenswrapper[4681]: I1123 08:50:13.293516 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerDied","Data":"9022dca8b7d798418088475832d237ead0878d643152726c228ab3b1d24e1197"}
Nov 23 08:50:13 crc kubenswrapper[4681]: I1123 08:50:13.293869 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerStarted","Data":"242fd53f2116708cfc71c2bb5b4eb1e469be7484d3d04afc304838e27ee8db90"}
Nov 23 08:50:13 crc kubenswrapper[4681]: I1123 08:50:13.293891 4681 scope.go:117] "RemoveContainer" containerID="d7798051d6d66026d4ff58045065aef57e15174285da5402190d39dcbab9b6d1"
Nov 23 08:52:12 crc kubenswrapper[4681]: I1123 08:52:12.296820 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 08:52:12 crc kubenswrapper[4681]: I1123 08:52:12.297228 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 08:52:42 crc kubenswrapper[4681]: I1123 08:52:42.299605 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
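This is the complete liveness-failure path: three consecutive 30-second probes against http://127.0.0.1:8798/health fail (08:49:12, 08:49:42, 08:50:12), the kubelet marks the container unhealthy and kills it with its 600s grace period, and PLEG then reports ContainerDied followed by ContainerStarted for the replacement. An HTTP liveness probe accepts any 2xx/3xx status; a sketch of a handler that would satisfy it, with the port and path taken from the log and the handler body purely illustrative:

package main

import (
	"log"
	"net/http"
)

func main() {
	// "connection refused" in the log means nothing was listening here at all.
	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK) // any 200-399 status counts as a probe success
	})
	log.Fatal(http.ListenAndServe("127.0.0.1:8798", nil))
}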
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:52:42 crc kubenswrapper[4681]: I1123 08:52:42.300222 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:53:12 crc kubenswrapper[4681]: I1123 08:53:12.295606 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:53:12 crc kubenswrapper[4681]: I1123 08:53:12.296304 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:53:12 crc kubenswrapper[4681]: I1123 08:53:12.296366 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" Nov 23 08:53:12 crc kubenswrapper[4681]: I1123 08:53:12.297545 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"242fd53f2116708cfc71c2bb5b4eb1e469be7484d3d04afc304838e27ee8db90"} pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 08:53:12 crc kubenswrapper[4681]: I1123 08:53:12.297605 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" containerID="cri-o://242fd53f2116708cfc71c2bb5b4eb1e469be7484d3d04afc304838e27ee8db90" gracePeriod=600 Nov 23 08:53:12 crc kubenswrapper[4681]: E1123 08:53:12.418347 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:53:12 crc kubenswrapper[4681]: I1123 08:53:12.824990 4681 generic.go:334] "Generic (PLEG): container finished" podID="539dc58c-e752-43c8-bdef-af87528b76f3" containerID="242fd53f2116708cfc71c2bb5b4eb1e469be7484d3d04afc304838e27ee8db90" exitCode=0 Nov 23 08:53:12 crc kubenswrapper[4681]: I1123 08:53:12.825059 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerDied","Data":"242fd53f2116708cfc71c2bb5b4eb1e469be7484d3d04afc304838e27ee8db90"} Nov 23 08:53:12 crc kubenswrapper[4681]: I1123 08:53:12.825100 4681 scope.go:117] "RemoveContainer" containerID="9022dca8b7d798418088475832d237ead0878d643152726c228ab3b1d24e1197" Nov 23 
Nov 23 08:53:12 crc kubenswrapper[4681]: I1123 08:53:12.826208 4681 scope.go:117] "RemoveContainer" containerID="242fd53f2116708cfc71c2bb5b4eb1e469be7484d3d04afc304838e27ee8db90"
Nov 23 08:53:12 crc kubenswrapper[4681]: E1123 08:53:12.828564 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3"
Nov 23 08:53:24 crc kubenswrapper[4681]: I1123 08:53:24.251979 4681 scope.go:117] "RemoveContainer" containerID="242fd53f2116708cfc71c2bb5b4eb1e469be7484d3d04afc304838e27ee8db90"
Nov 23 08:53:24 crc kubenswrapper[4681]: E1123 08:53:24.252914 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3"
Nov 23 08:53:39 crc kubenswrapper[4681]: I1123 08:53:39.253067 4681 scope.go:117] "RemoveContainer" containerID="242fd53f2116708cfc71c2bb5b4eb1e469be7484d3d04afc304838e27ee8db90"
Nov 23 08:53:39 crc kubenswrapper[4681]: E1123 08:53:39.254154 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3"
Nov 23 08:53:54 crc kubenswrapper[4681]: I1123 08:53:54.252906 4681 scope.go:117] "RemoveContainer" containerID="242fd53f2116708cfc71c2bb5b4eb1e469be7484d3d04afc304838e27ee8db90"
Nov 23 08:53:54 crc kubenswrapper[4681]: E1123 08:53:54.253665 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3"
Nov 23 08:54:09 crc kubenswrapper[4681]: I1123 08:54:09.251450 4681 scope.go:117] "RemoveContainer" containerID="242fd53f2116708cfc71c2bb5b4eb1e469be7484d3d04afc304838e27ee8db90"
Nov 23 08:54:09 crc kubenswrapper[4681]: E1123 08:54:09.252430 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3"
Nov 23 08:54:20 crc kubenswrapper[4681]: I1123 08:54:20.253303 4681 scope.go:117] "RemoveContainer" containerID="242fd53f2116708cfc71c2bb5b4eb1e469be7484d3d04afc304838e27ee8db90"
Nov 23 08:54:20 crc 
kubenswrapper[4681]: E1123 08:54:20.254282 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:54:31 crc kubenswrapper[4681]: I1123 08:54:31.252807 4681 scope.go:117] "RemoveContainer" containerID="242fd53f2116708cfc71c2bb5b4eb1e469be7484d3d04afc304838e27ee8db90" Nov 23 08:54:31 crc kubenswrapper[4681]: E1123 08:54:31.253572 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:54:42 crc kubenswrapper[4681]: I1123 08:54:42.252870 4681 scope.go:117] "RemoveContainer" containerID="242fd53f2116708cfc71c2bb5b4eb1e469be7484d3d04afc304838e27ee8db90" Nov 23 08:54:42 crc kubenswrapper[4681]: E1123 08:54:42.253772 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:54:57 crc kubenswrapper[4681]: I1123 08:54:57.252719 4681 scope.go:117] "RemoveContainer" containerID="242fd53f2116708cfc71c2bb5b4eb1e469be7484d3d04afc304838e27ee8db90" Nov 23 08:54:57 crc kubenswrapper[4681]: E1123 08:54:57.253643 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:55:12 crc kubenswrapper[4681]: I1123 08:55:12.252629 4681 scope.go:117] "RemoveContainer" containerID="242fd53f2116708cfc71c2bb5b4eb1e469be7484d3d04afc304838e27ee8db90" Nov 23 08:55:12 crc kubenswrapper[4681]: E1123 08:55:12.253536 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:55:27 crc kubenswrapper[4681]: I1123 08:55:27.252358 4681 scope.go:117] "RemoveContainer" containerID="242fd53f2116708cfc71c2bb5b4eb1e469be7484d3d04afc304838e27ee8db90" Nov 23 08:55:27 crc kubenswrapper[4681]: E1123 08:55:27.253189 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:55:39 crc kubenswrapper[4681]: I1123 08:55:39.252333 4681 scope.go:117] "RemoveContainer" containerID="242fd53f2116708cfc71c2bb5b4eb1e469be7484d3d04afc304838e27ee8db90" Nov 23 08:55:39 crc kubenswrapper[4681]: E1123 08:55:39.253301 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:55:53 crc kubenswrapper[4681]: I1123 08:55:53.258434 4681 scope.go:117] "RemoveContainer" containerID="242fd53f2116708cfc71c2bb5b4eb1e469be7484d3d04afc304838e27ee8db90" Nov 23 08:55:53 crc kubenswrapper[4681]: E1123 08:55:53.259306 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:56:07 crc kubenswrapper[4681]: I1123 08:56:07.252265 4681 scope.go:117] "RemoveContainer" containerID="242fd53f2116708cfc71c2bb5b4eb1e469be7484d3d04afc304838e27ee8db90" Nov 23 08:56:07 crc kubenswrapper[4681]: E1123 08:56:07.253095 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:56:19 crc kubenswrapper[4681]: I1123 08:56:19.252289 4681 scope.go:117] "RemoveContainer" containerID="242fd53f2116708cfc71c2bb5b4eb1e469be7484d3d04afc304838e27ee8db90" Nov 23 08:56:19 crc kubenswrapper[4681]: E1123 08:56:19.253329 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:56:31 crc kubenswrapper[4681]: I1123 08:56:31.252011 4681 scope.go:117] "RemoveContainer" containerID="242fd53f2116708cfc71c2bb5b4eb1e469be7484d3d04afc304838e27ee8db90" Nov 23 08:56:31 crc kubenswrapper[4681]: E1123 08:56:31.252600 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:56:34 crc kubenswrapper[4681]: I1123 08:56:34.907933 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-75b4b57dcf-bqmc5" podUID="91ec0b0d-3fb3-4710-8be4-acb8bb895d42" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Nov 23 08:56:46 crc kubenswrapper[4681]: I1123 08:56:46.252322 4681 scope.go:117] "RemoveContainer" containerID="242fd53f2116708cfc71c2bb5b4eb1e469be7484d3d04afc304838e27ee8db90" Nov 23 08:56:46 crc kubenswrapper[4681]: E1123 08:56:46.253376 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:56:52 crc kubenswrapper[4681]: I1123 08:56:52.837381 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9pd8w"] Nov 23 08:56:52 crc kubenswrapper[4681]: E1123 08:56:52.843993 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f" containerName="extract-utilities" Nov 23 08:56:52 crc kubenswrapper[4681]: I1123 08:56:52.844027 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f" containerName="extract-utilities" Nov 23 08:56:52 crc kubenswrapper[4681]: E1123 08:56:52.844048 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f" containerName="extract-content" Nov 23 08:56:52 crc kubenswrapper[4681]: I1123 08:56:52.844057 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f" containerName="extract-content" Nov 23 08:56:52 crc kubenswrapper[4681]: E1123 08:56:52.844077 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f" containerName="registry-server" Nov 23 08:56:52 crc kubenswrapper[4681]: I1123 08:56:52.844083 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f" containerName="registry-server" Nov 23 08:56:52 crc kubenswrapper[4681]: I1123 08:56:52.844854 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="c24ff8e4-a8fe-4a68-aeb8-934c11b3f47f" containerName="registry-server" Nov 23 08:56:52 crc kubenswrapper[4681]: I1123 08:56:52.849102 4681 util.go:30] "No sandbox for pod can be found. 
Nov 23 08:56:52 crc kubenswrapper[4681]: I1123 08:56:52.849102 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9pd8w"
Nov 23 08:56:52 crc kubenswrapper[4681]: I1123 08:56:52.862319 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9pd8w"]
Nov 23 08:56:52 crc kubenswrapper[4681]: I1123 08:56:52.921378 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfnzr\" (UniqueName: \"kubernetes.io/projected/9b2c2ae0-9f81-4722-9c37-086c5650c50b-kube-api-access-mfnzr\") pod \"redhat-marketplace-9pd8w\" (UID: \"9b2c2ae0-9f81-4722-9c37-086c5650c50b\") " pod="openshift-marketplace/redhat-marketplace-9pd8w"
Nov 23 08:56:52 crc kubenswrapper[4681]: I1123 08:56:52.921977 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b2c2ae0-9f81-4722-9c37-086c5650c50b-utilities\") pod \"redhat-marketplace-9pd8w\" (UID: \"9b2c2ae0-9f81-4722-9c37-086c5650c50b\") " pod="openshift-marketplace/redhat-marketplace-9pd8w"
Nov 23 08:56:52 crc kubenswrapper[4681]: I1123 08:56:52.922270 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b2c2ae0-9f81-4722-9c37-086c5650c50b-catalog-content\") pod \"redhat-marketplace-9pd8w\" (UID: \"9b2c2ae0-9f81-4722-9c37-086c5650c50b\") " pod="openshift-marketplace/redhat-marketplace-9pd8w"
Nov 23 08:56:53 crc kubenswrapper[4681]: I1123 08:56:53.023258 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfnzr\" (UniqueName: \"kubernetes.io/projected/9b2c2ae0-9f81-4722-9c37-086c5650c50b-kube-api-access-mfnzr\") pod \"redhat-marketplace-9pd8w\" (UID: \"9b2c2ae0-9f81-4722-9c37-086c5650c50b\") " pod="openshift-marketplace/redhat-marketplace-9pd8w"
Nov 23 08:56:53 crc kubenswrapper[4681]: I1123 08:56:53.023355 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b2c2ae0-9f81-4722-9c37-086c5650c50b-utilities\") pod \"redhat-marketplace-9pd8w\" (UID: \"9b2c2ae0-9f81-4722-9c37-086c5650c50b\") " pod="openshift-marketplace/redhat-marketplace-9pd8w"
Nov 23 08:56:53 crc kubenswrapper[4681]: I1123 08:56:53.023408 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b2c2ae0-9f81-4722-9c37-086c5650c50b-catalog-content\") pod \"redhat-marketplace-9pd8w\" (UID: \"9b2c2ae0-9f81-4722-9c37-086c5650c50b\") " pod="openshift-marketplace/redhat-marketplace-9pd8w"
Nov 23 08:56:53 crc kubenswrapper[4681]: I1123 08:56:53.026535 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b2c2ae0-9f81-4722-9c37-086c5650c50b-catalog-content\") pod \"redhat-marketplace-9pd8w\" (UID: \"9b2c2ae0-9f81-4722-9c37-086c5650c50b\") " pod="openshift-marketplace/redhat-marketplace-9pd8w"
Nov 23 08:56:53 crc kubenswrapper[4681]: I1123 08:56:53.026617 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b2c2ae0-9f81-4722-9c37-086c5650c50b-utilities\") pod \"redhat-marketplace-9pd8w\" (UID: \"9b2c2ae0-9f81-4722-9c37-086c5650c50b\") " pod="openshift-marketplace/redhat-marketplace-9pd8w"
Nov 23 08:56:53 crc kubenswrapper[4681]: I1123 08:56:53.059974 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfnzr\" (UniqueName: \"kubernetes.io/projected/9b2c2ae0-9f81-4722-9c37-086c5650c50b-kube-api-access-mfnzr\") pod \"redhat-marketplace-9pd8w\" (UID: \"9b2c2ae0-9f81-4722-9c37-086c5650c50b\") " pod="openshift-marketplace/redhat-marketplace-9pd8w"
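Every catalog pod in this log mounts the same three volumes, visible in the VerifyControllerAttachedVolume / MountVolume.SetUp pairs: two emptyDir scratch volumes (utilities, catalog-content) and the auto-generated kube-api-access-* projected service-account token. Expressed with the k8s.io/api/core/v1 types; the projected source shown is a minimal stand-in for the token, CA bundle, and namespace the kubelet actually injects:

package sketch

import corev1 "k8s.io/api/core/v1"

// The volume set behind the mount entries above; names match the log.
var catalogPodVolumes = []corev1.Volume{
	{Name: "utilities", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
	{Name: "catalog-content", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
	{Name: "kube-api-access-mfnzr", VolumeSource: corev1.VolumeSource{
		Projected: &corev1.ProjectedVolumeSource{
			Sources: []corev1.VolumeProjection{
				// Minimal stand-in: the real projection also adds the CA bundle
				// and namespace via ConfigMap and DownwardAPI sources.
				{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{Path: "token"}},
			},
		},
	}},
}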
Nov 23 08:56:53 crc kubenswrapper[4681]: I1123 08:56:53.172847 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9pd8w"
Nov 23 08:56:53 crc kubenswrapper[4681]: I1123 08:56:53.995296 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9pd8w"]
Nov 23 08:56:54 crc kubenswrapper[4681]: I1123 08:56:54.819080 4681 generic.go:334] "Generic (PLEG): container finished" podID="9b2c2ae0-9f81-4722-9c37-086c5650c50b" containerID="4aa024782e7e903f7d5567f4b943c9b009786f6c08044176bdfcafd67884b5fd" exitCode=0
Nov 23 08:56:54 crc kubenswrapper[4681]: I1123 08:56:54.819183 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9pd8w" event={"ID":"9b2c2ae0-9f81-4722-9c37-086c5650c50b","Type":"ContainerDied","Data":"4aa024782e7e903f7d5567f4b943c9b009786f6c08044176bdfcafd67884b5fd"}
Nov 23 08:56:54 crc kubenswrapper[4681]: I1123 08:56:54.819561 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9pd8w" event={"ID":"9b2c2ae0-9f81-4722-9c37-086c5650c50b","Type":"ContainerStarted","Data":"02004183ae4dcbc3fafc22222622a79d3dd7f9219425141bfef4ad793376b318"}
Nov 23 08:56:54 crc kubenswrapper[4681]: I1123 08:56:54.822668 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 23 08:56:55 crc kubenswrapper[4681]: I1123 08:56:55.835504 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9pd8w" event={"ID":"9b2c2ae0-9f81-4722-9c37-086c5650c50b","Type":"ContainerStarted","Data":"e7cf80fd50d0e92adc2edf4f43ef751348c712a5651b7181c4d2220ecb87b0fc"}
Nov 23 08:56:56 crc kubenswrapper[4681]: I1123 08:56:56.852542 4681 generic.go:334] "Generic (PLEG): container finished" podID="9b2c2ae0-9f81-4722-9c37-086c5650c50b" containerID="e7cf80fd50d0e92adc2edf4f43ef751348c712a5651b7181c4d2220ecb87b0fc" exitCode=0
Nov 23 08:56:56 crc kubenswrapper[4681]: I1123 08:56:56.852654 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9pd8w" event={"ID":"9b2c2ae0-9f81-4722-9c37-086c5650c50b","Type":"ContainerDied","Data":"e7cf80fd50d0e92adc2edf4f43ef751348c712a5651b7181c4d2220ecb87b0fc"}
Nov 23 08:56:57 crc kubenswrapper[4681]: I1123 08:56:57.866129 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9pd8w" event={"ID":"9b2c2ae0-9f81-4722-9c37-086c5650c50b","Type":"ContainerStarted","Data":"97eec6817e4e3a8a02d6b7e07eb68e0ab28446cde4b8fe7d9ff62234b74051ba"}
Nov 23 08:56:57 crc kubenswrapper[4681]: I1123 08:56:57.893514 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9pd8w" podStartSLOduration=3.256199214 podStartE2EDuration="5.892568995s" podCreationTimestamp="2025-11-23 08:56:52 +0000 UTC" firstStartedPulling="2025-11-23 08:56:54.821293104 +0000 UTC m=+7951.890802342" lastFinishedPulling="2025-11-23 08:56:57.457662886 +0000 UTC m=+7954.527172123" observedRunningTime="2025-11-23 08:56:57.882785732 +0000 UTC m=+7954.952294969" watchObservedRunningTime="2025-11-23 08:56:57.892568995 +0000 UTC m=+7954.962078232"
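The startup-latency entry above is internally consistent: watchObservedRunningTime 08:56:57.892568995 minus podCreationTimestamp 08:56:52 is exactly the 5.892568995s podStartE2EDuration, and podStartSLOduration is that figure minus the time spent pulling the image. Checking with the monotonic m=+ offsets from the entry:

package main

import "fmt"

func main() {
	// Monotonic offsets (seconds since kubelet start) from the entry above.
	const (
		firstStartedPulling = 7951.890802342
		lastFinishedPulling = 7954.527172123
		podStartE2E         = 5.892568995 // running-observed time minus creation time
	)
	pull := lastFinishedPulling - firstStartedPulling // 2.636369781s pulling the image
	slo := podStartE2E - pull                         // 3.256199214s = podStartSLOduration
	fmt.Printf("pull=%.9fs slo=%.9fs\n", pull, slo)
}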
m=+7954.962078232" Nov 23 08:57:00 crc kubenswrapper[4681]: I1123 08:57:00.252854 4681 scope.go:117] "RemoveContainer" containerID="242fd53f2116708cfc71c2bb5b4eb1e469be7484d3d04afc304838e27ee8db90" Nov 23 08:57:00 crc kubenswrapper[4681]: E1123 08:57:00.253581 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:57:03 crc kubenswrapper[4681]: I1123 08:57:03.174519 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9pd8w" Nov 23 08:57:03 crc kubenswrapper[4681]: I1123 08:57:03.174588 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9pd8w" Nov 23 08:57:03 crc kubenswrapper[4681]: I1123 08:57:03.217343 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9pd8w" Nov 23 08:57:03 crc kubenswrapper[4681]: I1123 08:57:03.973775 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9pd8w" Nov 23 08:57:04 crc kubenswrapper[4681]: I1123 08:57:04.056784 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9pd8w"] Nov 23 08:57:05 crc kubenswrapper[4681]: I1123 08:57:05.948955 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9pd8w" podUID="9b2c2ae0-9f81-4722-9c37-086c5650c50b" containerName="registry-server" containerID="cri-o://97eec6817e4e3a8a02d6b7e07eb68e0ab28446cde4b8fe7d9ff62234b74051ba" gracePeriod=2 Nov 23 08:57:06 crc kubenswrapper[4681]: I1123 08:57:06.533266 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9pd8w" Nov 23 08:57:06 crc kubenswrapper[4681]: I1123 08:57:06.667474 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b2c2ae0-9f81-4722-9c37-086c5650c50b-utilities\") pod \"9b2c2ae0-9f81-4722-9c37-086c5650c50b\" (UID: \"9b2c2ae0-9f81-4722-9c37-086c5650c50b\") " Nov 23 08:57:06 crc kubenswrapper[4681]: I1123 08:57:06.667856 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b2c2ae0-9f81-4722-9c37-086c5650c50b-catalog-content\") pod \"9b2c2ae0-9f81-4722-9c37-086c5650c50b\" (UID: \"9b2c2ae0-9f81-4722-9c37-086c5650c50b\") " Nov 23 08:57:06 crc kubenswrapper[4681]: I1123 08:57:06.668074 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfnzr\" (UniqueName: \"kubernetes.io/projected/9b2c2ae0-9f81-4722-9c37-086c5650c50b-kube-api-access-mfnzr\") pod \"9b2c2ae0-9f81-4722-9c37-086c5650c50b\" (UID: \"9b2c2ae0-9f81-4722-9c37-086c5650c50b\") " Nov 23 08:57:06 crc kubenswrapper[4681]: I1123 08:57:06.668628 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b2c2ae0-9f81-4722-9c37-086c5650c50b-utilities" (OuterVolumeSpecName: "utilities") pod "9b2c2ae0-9f81-4722-9c37-086c5650c50b" (UID: "9b2c2ae0-9f81-4722-9c37-086c5650c50b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:57:06 crc kubenswrapper[4681]: I1123 08:57:06.678252 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b2c2ae0-9f81-4722-9c37-086c5650c50b-kube-api-access-mfnzr" (OuterVolumeSpecName: "kube-api-access-mfnzr") pod "9b2c2ae0-9f81-4722-9c37-086c5650c50b" (UID: "9b2c2ae0-9f81-4722-9c37-086c5650c50b"). InnerVolumeSpecName "kube-api-access-mfnzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:57:06 crc kubenswrapper[4681]: I1123 08:57:06.682788 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b2c2ae0-9f81-4722-9c37-086c5650c50b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b2c2ae0-9f81-4722-9c37-086c5650c50b" (UID: "9b2c2ae0-9f81-4722-9c37-086c5650c50b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:57:06 crc kubenswrapper[4681]: I1123 08:57:06.771965 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b2c2ae0-9f81-4722-9c37-086c5650c50b-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:57:06 crc kubenswrapper[4681]: I1123 08:57:06.771997 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b2c2ae0-9f81-4722-9c37-086c5650c50b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:57:06 crc kubenswrapper[4681]: I1123 08:57:06.772012 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfnzr\" (UniqueName: \"kubernetes.io/projected/9b2c2ae0-9f81-4722-9c37-086c5650c50b-kube-api-access-mfnzr\") on node \"crc\" DevicePath \"\"" Nov 23 08:57:06 crc kubenswrapper[4681]: I1123 08:57:06.962100 4681 generic.go:334] "Generic (PLEG): container finished" podID="9b2c2ae0-9f81-4722-9c37-086c5650c50b" containerID="97eec6817e4e3a8a02d6b7e07eb68e0ab28446cde4b8fe7d9ff62234b74051ba" exitCode=0 Nov 23 08:57:06 crc kubenswrapper[4681]: I1123 08:57:06.962152 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9pd8w" event={"ID":"9b2c2ae0-9f81-4722-9c37-086c5650c50b","Type":"ContainerDied","Data":"97eec6817e4e3a8a02d6b7e07eb68e0ab28446cde4b8fe7d9ff62234b74051ba"} Nov 23 08:57:06 crc kubenswrapper[4681]: I1123 08:57:06.962187 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9pd8w" event={"ID":"9b2c2ae0-9f81-4722-9c37-086c5650c50b","Type":"ContainerDied","Data":"02004183ae4dcbc3fafc22222622a79d3dd7f9219425141bfef4ad793376b318"} Nov 23 08:57:06 crc kubenswrapper[4681]: I1123 08:57:06.962208 4681 scope.go:117] "RemoveContainer" containerID="97eec6817e4e3a8a02d6b7e07eb68e0ab28446cde4b8fe7d9ff62234b74051ba" Nov 23 08:57:06 crc kubenswrapper[4681]: I1123 08:57:06.962323 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9pd8w" Nov 23 08:57:06 crc kubenswrapper[4681]: I1123 08:57:06.989658 4681 scope.go:117] "RemoveContainer" containerID="e7cf80fd50d0e92adc2edf4f43ef751348c712a5651b7181c4d2220ecb87b0fc" Nov 23 08:57:06 crc kubenswrapper[4681]: I1123 08:57:06.995955 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9pd8w"] Nov 23 08:57:07 crc kubenswrapper[4681]: I1123 08:57:07.002198 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9pd8w"] Nov 23 08:57:07 crc kubenswrapper[4681]: I1123 08:57:07.016580 4681 scope.go:117] "RemoveContainer" containerID="4aa024782e7e903f7d5567f4b943c9b009786f6c08044176bdfcafd67884b5fd" Nov 23 08:57:07 crc kubenswrapper[4681]: I1123 08:57:07.047815 4681 scope.go:117] "RemoveContainer" containerID="97eec6817e4e3a8a02d6b7e07eb68e0ab28446cde4b8fe7d9ff62234b74051ba" Nov 23 08:57:07 crc kubenswrapper[4681]: E1123 08:57:07.048392 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97eec6817e4e3a8a02d6b7e07eb68e0ab28446cde4b8fe7d9ff62234b74051ba\": container with ID starting with 97eec6817e4e3a8a02d6b7e07eb68e0ab28446cde4b8fe7d9ff62234b74051ba not found: ID does not exist" containerID="97eec6817e4e3a8a02d6b7e07eb68e0ab28446cde4b8fe7d9ff62234b74051ba" Nov 23 08:57:07 crc kubenswrapper[4681]: I1123 08:57:07.048432 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97eec6817e4e3a8a02d6b7e07eb68e0ab28446cde4b8fe7d9ff62234b74051ba"} err="failed to get container status \"97eec6817e4e3a8a02d6b7e07eb68e0ab28446cde4b8fe7d9ff62234b74051ba\": rpc error: code = NotFound desc = could not find container \"97eec6817e4e3a8a02d6b7e07eb68e0ab28446cde4b8fe7d9ff62234b74051ba\": container with ID starting with 97eec6817e4e3a8a02d6b7e07eb68e0ab28446cde4b8fe7d9ff62234b74051ba not found: ID does not exist" Nov 23 08:57:07 crc kubenswrapper[4681]: I1123 08:57:07.048502 4681 scope.go:117] "RemoveContainer" containerID="e7cf80fd50d0e92adc2edf4f43ef751348c712a5651b7181c4d2220ecb87b0fc" Nov 23 08:57:07 crc kubenswrapper[4681]: E1123 08:57:07.048754 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7cf80fd50d0e92adc2edf4f43ef751348c712a5651b7181c4d2220ecb87b0fc\": container with ID starting with e7cf80fd50d0e92adc2edf4f43ef751348c712a5651b7181c4d2220ecb87b0fc not found: ID does not exist" containerID="e7cf80fd50d0e92adc2edf4f43ef751348c712a5651b7181c4d2220ecb87b0fc" Nov 23 08:57:07 crc kubenswrapper[4681]: I1123 08:57:07.048782 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7cf80fd50d0e92adc2edf4f43ef751348c712a5651b7181c4d2220ecb87b0fc"} err="failed to get container status \"e7cf80fd50d0e92adc2edf4f43ef751348c712a5651b7181c4d2220ecb87b0fc\": rpc error: code = NotFound desc = could not find container \"e7cf80fd50d0e92adc2edf4f43ef751348c712a5651b7181c4d2220ecb87b0fc\": container with ID starting with e7cf80fd50d0e92adc2edf4f43ef751348c712a5651b7181c4d2220ecb87b0fc not found: ID does not exist" Nov 23 08:57:07 crc kubenswrapper[4681]: I1123 08:57:07.048801 4681 scope.go:117] "RemoveContainer" containerID="4aa024782e7e903f7d5567f4b943c9b009786f6c08044176bdfcafd67884b5fd" Nov 23 08:57:07 crc kubenswrapper[4681]: E1123 08:57:07.049148 4681 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4aa024782e7e903f7d5567f4b943c9b009786f6c08044176bdfcafd67884b5fd\": container with ID starting with 4aa024782e7e903f7d5567f4b943c9b009786f6c08044176bdfcafd67884b5fd not found: ID does not exist" containerID="4aa024782e7e903f7d5567f4b943c9b009786f6c08044176bdfcafd67884b5fd" Nov 23 08:57:07 crc kubenswrapper[4681]: I1123 08:57:07.049173 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aa024782e7e903f7d5567f4b943c9b009786f6c08044176bdfcafd67884b5fd"} err="failed to get container status \"4aa024782e7e903f7d5567f4b943c9b009786f6c08044176bdfcafd67884b5fd\": rpc error: code = NotFound desc = could not find container \"4aa024782e7e903f7d5567f4b943c9b009786f6c08044176bdfcafd67884b5fd\": container with ID starting with 4aa024782e7e903f7d5567f4b943c9b009786f6c08044176bdfcafd67884b5fd not found: ID does not exist" Nov 23 08:57:07 crc kubenswrapper[4681]: I1123 08:57:07.262381 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b2c2ae0-9f81-4722-9c37-086c5650c50b" path="/var/lib/kubelet/pods/9b2c2ae0-9f81-4722-9c37-086c5650c50b/volumes" Nov 23 08:57:13 crc kubenswrapper[4681]: I1123 08:57:13.273389 4681 scope.go:117] "RemoveContainer" containerID="242fd53f2116708cfc71c2bb5b4eb1e469be7484d3d04afc304838e27ee8db90" Nov 23 08:57:13 crc kubenswrapper[4681]: E1123 08:57:13.275400 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:57:26 crc kubenswrapper[4681]: I1123 08:57:26.252707 4681 scope.go:117] "RemoveContainer" containerID="242fd53f2116708cfc71c2bb5b4eb1e469be7484d3d04afc304838e27ee8db90" Nov 23 08:57:26 crc kubenswrapper[4681]: E1123 08:57:26.253719 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:57:41 crc kubenswrapper[4681]: I1123 08:57:41.251739 4681 scope.go:117] "RemoveContainer" containerID="242fd53f2116708cfc71c2bb5b4eb1e469be7484d3d04afc304838e27ee8db90" Nov 23 08:57:41 crc kubenswrapper[4681]: E1123 08:57:41.252616 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:57:53 crc kubenswrapper[4681]: I1123 08:57:53.258247 4681 scope.go:117] "RemoveContainer" containerID="242fd53f2116708cfc71c2bb5b4eb1e469be7484d3d04afc304838e27ee8db90" Nov 23 08:57:53 crc kubenswrapper[4681]: E1123 08:57:53.258953 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:58:08 crc kubenswrapper[4681]: I1123 08:58:08.251607 4681 scope.go:117] "RemoveContainer" containerID="242fd53f2116708cfc71c2bb5b4eb1e469be7484d3d04afc304838e27ee8db90" Nov 23 08:58:08 crc kubenswrapper[4681]: E1123 08:58:08.252428 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wh4gt_openshift-machine-config-operator(539dc58c-e752-43c8-bdef-af87528b76f3)\"" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" Nov 23 08:58:20 crc kubenswrapper[4681]: I1123 08:58:20.252029 4681 scope.go:117] "RemoveContainer" containerID="242fd53f2116708cfc71c2bb5b4eb1e469be7484d3d04afc304838e27ee8db90" Nov 23 08:58:20 crc kubenswrapper[4681]: I1123 08:58:20.637814 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" event={"ID":"539dc58c-e752-43c8-bdef-af87528b76f3","Type":"ContainerStarted","Data":"fbc4385ddd34f77d196c287e8ca9a1092a53b495739864eaa43a9f59d2ccda1c"} Nov 23 08:58:54 crc kubenswrapper[4681]: I1123 08:58:54.279382 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m5vjx"] Nov 23 08:58:54 crc kubenswrapper[4681]: E1123 08:58:54.280450 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b2c2ae0-9f81-4722-9c37-086c5650c50b" containerName="extract-content" Nov 23 08:58:54 crc kubenswrapper[4681]: I1123 08:58:54.280480 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b2c2ae0-9f81-4722-9c37-086c5650c50b" containerName="extract-content" Nov 23 08:58:54 crc kubenswrapper[4681]: E1123 08:58:54.280492 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b2c2ae0-9f81-4722-9c37-086c5650c50b" containerName="registry-server" Nov 23 08:58:54 crc kubenswrapper[4681]: I1123 08:58:54.280498 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b2c2ae0-9f81-4722-9c37-086c5650c50b" containerName="registry-server" Nov 23 08:58:54 crc kubenswrapper[4681]: E1123 08:58:54.280516 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b2c2ae0-9f81-4722-9c37-086c5650c50b" containerName="extract-utilities" Nov 23 08:58:54 crc kubenswrapper[4681]: I1123 08:58:54.280521 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b2c2ae0-9f81-4722-9c37-086c5650c50b" containerName="extract-utilities" Nov 23 08:58:54 crc kubenswrapper[4681]: I1123 08:58:54.280736 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b2c2ae0-9f81-4722-9c37-086c5650c50b" containerName="registry-server" Nov 23 08:58:54 crc kubenswrapper[4681]: I1123 08:58:54.282098 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m5vjx" Nov 23 08:58:54 crc kubenswrapper[4681]: I1123 08:58:54.292214 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m5vjx"] Nov 23 08:58:54 crc kubenswrapper[4681]: I1123 08:58:54.339554 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35e06887-a3a7-471b-a072-d55a0bfbca74-utilities\") pod \"redhat-operators-m5vjx\" (UID: \"35e06887-a3a7-471b-a072-d55a0bfbca74\") " pod="openshift-marketplace/redhat-operators-m5vjx" Nov 23 08:58:54 crc kubenswrapper[4681]: I1123 08:58:54.339785 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmlz7\" (UniqueName: \"kubernetes.io/projected/35e06887-a3a7-471b-a072-d55a0bfbca74-kube-api-access-dmlz7\") pod \"redhat-operators-m5vjx\" (UID: \"35e06887-a3a7-471b-a072-d55a0bfbca74\") " pod="openshift-marketplace/redhat-operators-m5vjx" Nov 23 08:58:54 crc kubenswrapper[4681]: I1123 08:58:54.340212 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35e06887-a3a7-471b-a072-d55a0bfbca74-catalog-content\") pod \"redhat-operators-m5vjx\" (UID: \"35e06887-a3a7-471b-a072-d55a0bfbca74\") " pod="openshift-marketplace/redhat-operators-m5vjx" Nov 23 08:58:54 crc kubenswrapper[4681]: I1123 08:58:54.442399 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35e06887-a3a7-471b-a072-d55a0bfbca74-catalog-content\") pod \"redhat-operators-m5vjx\" (UID: \"35e06887-a3a7-471b-a072-d55a0bfbca74\") " pod="openshift-marketplace/redhat-operators-m5vjx" Nov 23 08:58:54 crc kubenswrapper[4681]: I1123 08:58:54.442762 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35e06887-a3a7-471b-a072-d55a0bfbca74-utilities\") pod \"redhat-operators-m5vjx\" (UID: \"35e06887-a3a7-471b-a072-d55a0bfbca74\") " pod="openshift-marketplace/redhat-operators-m5vjx" Nov 23 08:58:54 crc kubenswrapper[4681]: I1123 08:58:54.442875 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35e06887-a3a7-471b-a072-d55a0bfbca74-catalog-content\") pod \"redhat-operators-m5vjx\" (UID: \"35e06887-a3a7-471b-a072-d55a0bfbca74\") " pod="openshift-marketplace/redhat-operators-m5vjx" Nov 23 08:58:54 crc kubenswrapper[4681]: I1123 08:58:54.442882 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmlz7\" (UniqueName: \"kubernetes.io/projected/35e06887-a3a7-471b-a072-d55a0bfbca74-kube-api-access-dmlz7\") pod \"redhat-operators-m5vjx\" (UID: \"35e06887-a3a7-471b-a072-d55a0bfbca74\") " pod="openshift-marketplace/redhat-operators-m5vjx" Nov 23 08:58:54 crc kubenswrapper[4681]: I1123 08:58:54.443268 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35e06887-a3a7-471b-a072-d55a0bfbca74-utilities\") pod \"redhat-operators-m5vjx\" (UID: \"35e06887-a3a7-471b-a072-d55a0bfbca74\") " pod="openshift-marketplace/redhat-operators-m5vjx" Nov 23 08:58:54 crc kubenswrapper[4681]: I1123 08:58:54.464173 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dmlz7\" (UniqueName: \"kubernetes.io/projected/35e06887-a3a7-471b-a072-d55a0bfbca74-kube-api-access-dmlz7\") pod \"redhat-operators-m5vjx\" (UID: \"35e06887-a3a7-471b-a072-d55a0bfbca74\") " pod="openshift-marketplace/redhat-operators-m5vjx" Nov 23 08:58:54 crc kubenswrapper[4681]: I1123 08:58:54.482177 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fcd88"] Nov 23 08:58:54 crc kubenswrapper[4681]: I1123 08:58:54.491530 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fcd88" Nov 23 08:58:54 crc kubenswrapper[4681]: I1123 08:58:54.495835 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fcd88"] Nov 23 08:58:54 crc kubenswrapper[4681]: I1123 08:58:54.545023 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/983bf4c1-2076-491d-b173-8b0869bba32a-utilities\") pod \"certified-operators-fcd88\" (UID: \"983bf4c1-2076-491d-b173-8b0869bba32a\") " pod="openshift-marketplace/certified-operators-fcd88" Nov 23 08:58:54 crc kubenswrapper[4681]: I1123 08:58:54.545182 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv2jg\" (UniqueName: \"kubernetes.io/projected/983bf4c1-2076-491d-b173-8b0869bba32a-kube-api-access-lv2jg\") pod \"certified-operators-fcd88\" (UID: \"983bf4c1-2076-491d-b173-8b0869bba32a\") " pod="openshift-marketplace/certified-operators-fcd88" Nov 23 08:58:54 crc kubenswrapper[4681]: I1123 08:58:54.545350 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/983bf4c1-2076-491d-b173-8b0869bba32a-catalog-content\") pod \"certified-operators-fcd88\" (UID: \"983bf4c1-2076-491d-b173-8b0869bba32a\") " pod="openshift-marketplace/certified-operators-fcd88" Nov 23 08:58:54 crc kubenswrapper[4681]: I1123 08:58:54.601178 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m5vjx" Nov 23 08:58:54 crc kubenswrapper[4681]: I1123 08:58:54.646965 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/983bf4c1-2076-491d-b173-8b0869bba32a-utilities\") pod \"certified-operators-fcd88\" (UID: \"983bf4c1-2076-491d-b173-8b0869bba32a\") " pod="openshift-marketplace/certified-operators-fcd88" Nov 23 08:58:54 crc kubenswrapper[4681]: I1123 08:58:54.647237 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv2jg\" (UniqueName: \"kubernetes.io/projected/983bf4c1-2076-491d-b173-8b0869bba32a-kube-api-access-lv2jg\") pod \"certified-operators-fcd88\" (UID: \"983bf4c1-2076-491d-b173-8b0869bba32a\") " pod="openshift-marketplace/certified-operators-fcd88" Nov 23 08:58:54 crc kubenswrapper[4681]: I1123 08:58:54.647334 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/983bf4c1-2076-491d-b173-8b0869bba32a-catalog-content\") pod \"certified-operators-fcd88\" (UID: \"983bf4c1-2076-491d-b173-8b0869bba32a\") " pod="openshift-marketplace/certified-operators-fcd88" Nov 23 08:58:54 crc kubenswrapper[4681]: I1123 08:58:54.647425 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/983bf4c1-2076-491d-b173-8b0869bba32a-utilities\") pod \"certified-operators-fcd88\" (UID: \"983bf4c1-2076-491d-b173-8b0869bba32a\") " pod="openshift-marketplace/certified-operators-fcd88" Nov 23 08:58:54 crc kubenswrapper[4681]: I1123 08:58:54.647724 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/983bf4c1-2076-491d-b173-8b0869bba32a-catalog-content\") pod \"certified-operators-fcd88\" (UID: \"983bf4c1-2076-491d-b173-8b0869bba32a\") " pod="openshift-marketplace/certified-operators-fcd88" Nov 23 08:58:54 crc kubenswrapper[4681]: I1123 08:58:54.665982 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv2jg\" (UniqueName: \"kubernetes.io/projected/983bf4c1-2076-491d-b173-8b0869bba32a-kube-api-access-lv2jg\") pod \"certified-operators-fcd88\" (UID: \"983bf4c1-2076-491d-b173-8b0869bba32a\") " pod="openshift-marketplace/certified-operators-fcd88" Nov 23 08:58:54 crc kubenswrapper[4681]: I1123 08:58:54.815933 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fcd88"
Nov 23 08:58:55 crc kubenswrapper[4681]: I1123 08:58:55.247353 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m5vjx"]
Nov 23 08:58:55 crc kubenswrapper[4681]: W1123 08:58:55.261436 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35e06887_a3a7_471b_a072_d55a0bfbca74.slice/crio-1469e5a50473a82c7b2258758d362c52ee7ae53980523f506007dec8b3b3a780 WatchSource:0}: Error finding container 1469e5a50473a82c7b2258758d362c52ee7ae53980523f506007dec8b3b3a780: Status 404 returned error can't find the container with id 1469e5a50473a82c7b2258758d362c52ee7ae53980523f506007dec8b3b3a780
Nov 23 08:58:55 crc kubenswrapper[4681]: W1123 08:58:55.382984 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod983bf4c1_2076_491d_b173_8b0869bba32a.slice/crio-06ea67064ad4eca9e4869eaba4cac054477a4e1e8768bf4c86e24bf5020d7f0c WatchSource:0}: Error finding container 06ea67064ad4eca9e4869eaba4cac054477a4e1e8768bf4c86e24bf5020d7f0c: Status 404 returned error can't find the container with id 06ea67064ad4eca9e4869eaba4cac054477a4e1e8768bf4c86e24bf5020d7f0c
Nov 23 08:58:55 crc kubenswrapper[4681]: I1123 08:58:55.383999 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fcd88"]
Nov 23 08:58:55 crc kubenswrapper[4681]: I1123 08:58:55.951224 4681 generic.go:334] "Generic (PLEG): container finished" podID="983bf4c1-2076-491d-b173-8b0869bba32a" containerID="fa0bb09f03b5beb8c4b22fcec47db89662a46ed2cbf8b8c810db18e84862668f" exitCode=0
Nov 23 08:58:55 crc kubenswrapper[4681]: I1123 08:58:55.951321 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fcd88" event={"ID":"983bf4c1-2076-491d-b173-8b0869bba32a","Type":"ContainerDied","Data":"fa0bb09f03b5beb8c4b22fcec47db89662a46ed2cbf8b8c810db18e84862668f"}
Nov 23 08:58:55 crc kubenswrapper[4681]: I1123 08:58:55.951714 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fcd88" event={"ID":"983bf4c1-2076-491d-b173-8b0869bba32a","Type":"ContainerStarted","Data":"06ea67064ad4eca9e4869eaba4cac054477a4e1e8768bf4c86e24bf5020d7f0c"}
Nov 23 08:58:55 crc kubenswrapper[4681]: I1123 08:58:55.954411 4681 generic.go:334] "Generic (PLEG): container finished" podID="35e06887-a3a7-471b-a072-d55a0bfbca74" containerID="f0baf443214d21891566810d0d879bb0d238376e342e5dd9328866aae532c322" exitCode=0
Nov 23 08:58:55 crc kubenswrapper[4681]: I1123 08:58:55.954521 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m5vjx" event={"ID":"35e06887-a3a7-471b-a072-d55a0bfbca74","Type":"ContainerDied","Data":"f0baf443214d21891566810d0d879bb0d238376e342e5dd9328866aae532c322"}
Nov 23 08:58:55 crc kubenswrapper[4681]: I1123 08:58:55.954602 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m5vjx" event={"ID":"35e06887-a3a7-471b-a072-d55a0bfbca74","Type":"ContainerStarted","Data":"1469e5a50473a82c7b2258758d362c52ee7ae53980523f506007dec8b3b3a780"}
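Two things worth noting above. The cadvisor warnings ("Failed to process watch event ... 404") are a benign race: the cgroup directory appears the instant CRI-O creates the sandbox, sometimes before the container can be looked up, and a later relist picks it up anyway. And the "Generic (PLEG): container finished" / "SyncLoop (PLEG): event for pod" pairs come from the Pod Lifecycle Event Generator, which periodically relists container states and turns differences into events. A toy relist, with a boolean running flag standing in for PLEG's richer state machine:

package sketch

// snapshot maps containerID -> running? (the real PLEG tracks more states).
type snapshot map[string]bool

// relist diffs two snapshots into the ContainerStarted / ContainerDied
// events that the kubelet sync loop consumes in the entries above.
func relist(old, cur snapshot) []string {
	var events []string
	for id, running := range cur {
		if running && !old[id] {
			events = append(events, "ContainerStarted "+id)
		}
	}
	for id, running := range old {
		if running && !cur[id] {
			events = append(events, "ContainerDied "+id)
		}
	}
	return events
}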
Nov 23 08:58:56 crc kubenswrapper[4681]: I1123 08:58:56.901645 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ksfv2"]
Nov 23 08:58:56 crc kubenswrapper[4681]: I1123 08:58:56.906677 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ksfv2"
Nov 23 08:58:56 crc kubenswrapper[4681]: I1123 08:58:56.932428 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ksfv2"]
Nov 23 08:58:56 crc kubenswrapper[4681]: I1123 08:58:56.962173 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7txsj\" (UniqueName: \"kubernetes.io/projected/59c3f833-7952-4074-a531-15e4bd1a3966-kube-api-access-7txsj\") pod \"community-operators-ksfv2\" (UID: \"59c3f833-7952-4074-a531-15e4bd1a3966\") " pod="openshift-marketplace/community-operators-ksfv2"
Nov 23 08:58:56 crc kubenswrapper[4681]: I1123 08:58:56.969301 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59c3f833-7952-4074-a531-15e4bd1a3966-utilities\") pod \"community-operators-ksfv2\" (UID: \"59c3f833-7952-4074-a531-15e4bd1a3966\") " pod="openshift-marketplace/community-operators-ksfv2"
Nov 23 08:58:56 crc kubenswrapper[4681]: I1123 08:58:56.969607 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59c3f833-7952-4074-a531-15e4bd1a3966-catalog-content\") pod \"community-operators-ksfv2\" (UID: \"59c3f833-7952-4074-a531-15e4bd1a3966\") " pod="openshift-marketplace/community-operators-ksfv2"
Nov 23 08:58:56 crc kubenswrapper[4681]: I1123 08:58:56.977749 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fcd88" event={"ID":"983bf4c1-2076-491d-b173-8b0869bba32a","Type":"ContainerStarted","Data":"62e42d510f669d04834ecf5a10e16203fd86a2eecefe7e02b6fb8949d0509b3a"}
Nov 23 08:58:56 crc kubenswrapper[4681]: I1123 08:58:56.981278 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m5vjx" event={"ID":"35e06887-a3a7-471b-a072-d55a0bfbca74","Type":"ContainerStarted","Data":"58ff582dfd0c25007be301992b3fc39121bb29f361f922809cbe7c9c387f5ba0"}
Nov 23 08:58:57 crc kubenswrapper[4681]: I1123 08:58:57.077995 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59c3f833-7952-4074-a531-15e4bd1a3966-utilities\") pod \"community-operators-ksfv2\" (UID: \"59c3f833-7952-4074-a531-15e4bd1a3966\") " pod="openshift-marketplace/community-operators-ksfv2"
Nov 23 08:58:57 crc kubenswrapper[4681]: I1123 08:58:57.078125 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59c3f833-7952-4074-a531-15e4bd1a3966-catalog-content\") pod \"community-operators-ksfv2\" (UID: \"59c3f833-7952-4074-a531-15e4bd1a3966\") " pod="openshift-marketplace/community-operators-ksfv2"
Nov 23 08:58:57 crc kubenswrapper[4681]: I1123 08:58:57.078296 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7txsj\" (UniqueName: \"kubernetes.io/projected/59c3f833-7952-4074-a531-15e4bd1a3966-kube-api-access-7txsj\") pod \"community-operators-ksfv2\" (UID: \"59c3f833-7952-4074-a531-15e4bd1a3966\") " pod="openshift-marketplace/community-operators-ksfv2"
Nov 23 08:58:57 crc kubenswrapper[4681]: I1123 08:58:57.078714 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/59c3f833-7952-4074-a531-15e4bd1a3966-utilities\") pod \"community-operators-ksfv2\" (UID: \"59c3f833-7952-4074-a531-15e4bd1a3966\") " pod="openshift-marketplace/community-operators-ksfv2" Nov 23 08:58:57 crc kubenswrapper[4681]: I1123 08:58:57.078927 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59c3f833-7952-4074-a531-15e4bd1a3966-catalog-content\") pod \"community-operators-ksfv2\" (UID: \"59c3f833-7952-4074-a531-15e4bd1a3966\") " pod="openshift-marketplace/community-operators-ksfv2" Nov 23 08:58:57 crc kubenswrapper[4681]: I1123 08:58:57.102498 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7txsj\" (UniqueName: \"kubernetes.io/projected/59c3f833-7952-4074-a531-15e4bd1a3966-kube-api-access-7txsj\") pod \"community-operators-ksfv2\" (UID: \"59c3f833-7952-4074-a531-15e4bd1a3966\") " pod="openshift-marketplace/community-operators-ksfv2" Nov 23 08:58:57 crc kubenswrapper[4681]: I1123 08:58:57.223294 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ksfv2" Nov 23 08:58:57 crc kubenswrapper[4681]: I1123 08:58:57.804960 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ksfv2"] Nov 23 08:58:57 crc kubenswrapper[4681]: W1123 08:58:57.810698 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59c3f833_7952_4074_a531_15e4bd1a3966.slice/crio-7924ec0e49a625142d880427c97678be50cc1481ba52b51a7362c2c7cf92d796 WatchSource:0}: Error finding container 7924ec0e49a625142d880427c97678be50cc1481ba52b51a7362c2c7cf92d796: Status 404 returned error can't find the container with id 7924ec0e49a625142d880427c97678be50cc1481ba52b51a7362c2c7cf92d796 Nov 23 08:58:57 crc kubenswrapper[4681]: I1123 08:58:57.990210 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksfv2" event={"ID":"59c3f833-7952-4074-a531-15e4bd1a3966","Type":"ContainerStarted","Data":"7924ec0e49a625142d880427c97678be50cc1481ba52b51a7362c2c7cf92d796"} Nov 23 08:58:59 crc kubenswrapper[4681]: I1123 08:58:59.000195 4681 generic.go:334] "Generic (PLEG): container finished" podID="59c3f833-7952-4074-a531-15e4bd1a3966" containerID="c46f4125d26ec01fac117a3bce4ae8fc4a4abec34abc219a3bb57a321f5adde4" exitCode=0 Nov 23 08:58:59 crc kubenswrapper[4681]: I1123 08:58:59.001160 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksfv2" event={"ID":"59c3f833-7952-4074-a531-15e4bd1a3966","Type":"ContainerDied","Data":"c46f4125d26ec01fac117a3bce4ae8fc4a4abec34abc219a3bb57a321f5adde4"} Nov 23 08:58:59 crc kubenswrapper[4681]: I1123 08:58:59.002577 4681 generic.go:334] "Generic (PLEG): container finished" podID="983bf4c1-2076-491d-b173-8b0869bba32a" containerID="62e42d510f669d04834ecf5a10e16203fd86a2eecefe7e02b6fb8949d0509b3a" exitCode=0 Nov 23 08:58:59 crc kubenswrapper[4681]: I1123 08:58:59.002614 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fcd88" event={"ID":"983bf4c1-2076-491d-b173-8b0869bba32a","Type":"ContainerDied","Data":"62e42d510f669d04834ecf5a10e16203fd86a2eecefe7e02b6fb8949d0509b3a"} Nov 23 08:59:00 crc kubenswrapper[4681]: I1123 08:59:00.014120 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-fcd88" event={"ID":"983bf4c1-2076-491d-b173-8b0869bba32a","Type":"ContainerStarted","Data":"1d3e6663f357072b159b1ec0413aac8ec990816c288eacd4f9a635b833d9eba8"} Nov 23 08:59:00 crc kubenswrapper[4681]: I1123 08:59:00.018451 4681 generic.go:334] "Generic (PLEG): container finished" podID="35e06887-a3a7-471b-a072-d55a0bfbca74" containerID="58ff582dfd0c25007be301992b3fc39121bb29f361f922809cbe7c9c387f5ba0" exitCode=0 Nov 23 08:59:00 crc kubenswrapper[4681]: I1123 08:59:00.018523 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m5vjx" event={"ID":"35e06887-a3a7-471b-a072-d55a0bfbca74","Type":"ContainerDied","Data":"58ff582dfd0c25007be301992b3fc39121bb29f361f922809cbe7c9c387f5ba0"} Nov 23 08:59:00 crc kubenswrapper[4681]: I1123 08:59:00.023781 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksfv2" event={"ID":"59c3f833-7952-4074-a531-15e4bd1a3966","Type":"ContainerStarted","Data":"e9291b81d0ddbfa1211a275a1f175b310c13fbc61d9ff5fa6f1975d07c68e95f"} Nov 23 08:59:00 crc kubenswrapper[4681]: I1123 08:59:00.043331 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fcd88" podStartSLOduration=2.505249485 podStartE2EDuration="6.043311045s" podCreationTimestamp="2025-11-23 08:58:54 +0000 UTC" firstStartedPulling="2025-11-23 08:58:55.954033045 +0000 UTC m=+8073.023542273" lastFinishedPulling="2025-11-23 08:58:59.492094606 +0000 UTC m=+8076.561603833" observedRunningTime="2025-11-23 08:59:00.039452174 +0000 UTC m=+8077.108961411" watchObservedRunningTime="2025-11-23 08:59:00.043311045 +0000 UTC m=+8077.112820282" Nov 23 08:59:01 crc kubenswrapper[4681]: I1123 08:59:01.034578 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m5vjx" event={"ID":"35e06887-a3a7-471b-a072-d55a0bfbca74","Type":"ContainerStarted","Data":"fc26773ad8ade7e4df48de173f1098d9810b7fcee3ae6641e41d47ce7c9b9e65"} Nov 23 08:59:01 crc kubenswrapper[4681]: I1123 08:59:01.055353 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m5vjx" podStartSLOduration=2.526549053 podStartE2EDuration="7.055332415s" podCreationTimestamp="2025-11-23 08:58:54 +0000 UTC" firstStartedPulling="2025-11-23 08:58:55.95779274 +0000 UTC m=+8073.027301976" lastFinishedPulling="2025-11-23 08:59:00.486576102 +0000 UTC m=+8077.556085338" observedRunningTime="2025-11-23 08:59:01.051809797 +0000 UTC m=+8078.121319035" watchObservedRunningTime="2025-11-23 08:59:01.055332415 +0000 UTC m=+8078.124841652" Nov 23 08:59:02 crc kubenswrapper[4681]: I1123 08:59:02.048296 4681 generic.go:334] "Generic (PLEG): container finished" podID="59c3f833-7952-4074-a531-15e4bd1a3966" containerID="e9291b81d0ddbfa1211a275a1f175b310c13fbc61d9ff5fa6f1975d07c68e95f" exitCode=0 Nov 23 08:59:02 crc kubenswrapper[4681]: I1123 08:59:02.048385 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksfv2" event={"ID":"59c3f833-7952-4074-a531-15e4bd1a3966","Type":"ContainerDied","Data":"e9291b81d0ddbfa1211a275a1f175b310c13fbc61d9ff5fa6f1975d07c68e95f"} Nov 23 08:59:03 crc kubenswrapper[4681]: I1123 08:59:03.060976 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksfv2" 
event={"ID":"59c3f833-7952-4074-a531-15e4bd1a3966","Type":"ContainerStarted","Data":"ba78406eb4b8476f69259726422e7c2bbb5199fecade2f75dbe75a2cbc3d685a"} Nov 23 08:59:04 crc kubenswrapper[4681]: I1123 08:59:04.601920 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m5vjx" Nov 23 08:59:04 crc kubenswrapper[4681]: I1123 08:59:04.602387 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m5vjx" Nov 23 08:59:04 crc kubenswrapper[4681]: I1123 08:59:04.816654 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fcd88" Nov 23 08:59:04 crc kubenswrapper[4681]: I1123 08:59:04.816710 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fcd88" Nov 23 08:59:05 crc kubenswrapper[4681]: I1123 08:59:05.685612 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m5vjx" podUID="35e06887-a3a7-471b-a072-d55a0bfbca74" containerName="registry-server" probeResult="failure" output=< Nov 23 08:59:05 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Nov 23 08:59:05 crc kubenswrapper[4681]: > Nov 23 08:59:05 crc kubenswrapper[4681]: I1123 08:59:05.862987 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fcd88" podUID="983bf4c1-2076-491d-b173-8b0869bba32a" containerName="registry-server" probeResult="failure" output=< Nov 23 08:59:05 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Nov 23 08:59:05 crc kubenswrapper[4681]: > Nov 23 08:59:07 crc kubenswrapper[4681]: I1123 08:59:07.224661 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ksfv2" Nov 23 08:59:07 crc kubenswrapper[4681]: I1123 08:59:07.228010 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ksfv2" Nov 23 08:59:08 crc kubenswrapper[4681]: I1123 08:59:08.266847 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-ksfv2" podUID="59c3f833-7952-4074-a531-15e4bd1a3966" containerName="registry-server" probeResult="failure" output=< Nov 23 08:59:08 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Nov 23 08:59:08 crc kubenswrapper[4681]: > Nov 23 08:59:14 crc kubenswrapper[4681]: I1123 08:59:14.860730 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fcd88" Nov 23 08:59:14 crc kubenswrapper[4681]: I1123 08:59:14.911713 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ksfv2" podStartSLOduration=15.385667379 podStartE2EDuration="18.910909907s" podCreationTimestamp="2025-11-23 08:58:56 +0000 UTC" firstStartedPulling="2025-11-23 08:58:59.003242453 +0000 UTC m=+8076.072751690" lastFinishedPulling="2025-11-23 08:59:02.528484991 +0000 UTC m=+8079.597994218" observedRunningTime="2025-11-23 08:59:03.086646194 +0000 UTC m=+8080.156155431" watchObservedRunningTime="2025-11-23 08:59:14.910909907 +0000 UTC m=+8091.980419145" Nov 23 08:59:14 crc kubenswrapper[4681]: I1123 08:59:14.936554 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-fcd88" Nov 23 08:59:15 crc kubenswrapper[4681]: I1123 08:59:15.131547 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fcd88"] Nov 23 08:59:15 crc kubenswrapper[4681]: I1123 08:59:15.647002 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m5vjx" podUID="35e06887-a3a7-471b-a072-d55a0bfbca74" containerName="registry-server" probeResult="failure" output=< Nov 23 08:59:15 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Nov 23 08:59:15 crc kubenswrapper[4681]: > Nov 23 08:59:16 crc kubenswrapper[4681]: I1123 08:59:16.195112 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fcd88" podUID="983bf4c1-2076-491d-b173-8b0869bba32a" containerName="registry-server" containerID="cri-o://1d3e6663f357072b159b1ec0413aac8ec990816c288eacd4f9a635b833d9eba8" gracePeriod=2 Nov 23 08:59:16 crc kubenswrapper[4681]: I1123 08:59:16.907191 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fcd88" Nov 23 08:59:17 crc kubenswrapper[4681]: I1123 08:59:17.107220 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/983bf4c1-2076-491d-b173-8b0869bba32a-utilities\") pod \"983bf4c1-2076-491d-b173-8b0869bba32a\" (UID: \"983bf4c1-2076-491d-b173-8b0869bba32a\") " Nov 23 08:59:17 crc kubenswrapper[4681]: I1123 08:59:17.107304 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lv2jg\" (UniqueName: \"kubernetes.io/projected/983bf4c1-2076-491d-b173-8b0869bba32a-kube-api-access-lv2jg\") pod \"983bf4c1-2076-491d-b173-8b0869bba32a\" (UID: \"983bf4c1-2076-491d-b173-8b0869bba32a\") " Nov 23 08:59:17 crc kubenswrapper[4681]: I1123 08:59:17.107747 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/983bf4c1-2076-491d-b173-8b0869bba32a-catalog-content\") pod \"983bf4c1-2076-491d-b173-8b0869bba32a\" (UID: \"983bf4c1-2076-491d-b173-8b0869bba32a\") " Nov 23 08:59:17 crc kubenswrapper[4681]: I1123 08:59:17.110310 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/983bf4c1-2076-491d-b173-8b0869bba32a-utilities" (OuterVolumeSpecName: "utilities") pod "983bf4c1-2076-491d-b173-8b0869bba32a" (UID: "983bf4c1-2076-491d-b173-8b0869bba32a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:59:17 crc kubenswrapper[4681]: I1123 08:59:17.129646 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/983bf4c1-2076-491d-b173-8b0869bba32a-kube-api-access-lv2jg" (OuterVolumeSpecName: "kube-api-access-lv2jg") pod "983bf4c1-2076-491d-b173-8b0869bba32a" (UID: "983bf4c1-2076-491d-b173-8b0869bba32a"). InnerVolumeSpecName "kube-api-access-lv2jg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:59:17 crc kubenswrapper[4681]: I1123 08:59:17.144307 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/983bf4c1-2076-491d-b173-8b0869bba32a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "983bf4c1-2076-491d-b173-8b0869bba32a" (UID: "983bf4c1-2076-491d-b173-8b0869bba32a"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:59:17 crc kubenswrapper[4681]: I1123 08:59:17.205721 4681 generic.go:334] "Generic (PLEG): container finished" podID="983bf4c1-2076-491d-b173-8b0869bba32a" containerID="1d3e6663f357072b159b1ec0413aac8ec990816c288eacd4f9a635b833d9eba8" exitCode=0 Nov 23 08:59:17 crc kubenswrapper[4681]: I1123 08:59:17.205772 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fcd88" event={"ID":"983bf4c1-2076-491d-b173-8b0869bba32a","Type":"ContainerDied","Data":"1d3e6663f357072b159b1ec0413aac8ec990816c288eacd4f9a635b833d9eba8"} Nov 23 08:59:17 crc kubenswrapper[4681]: I1123 08:59:17.205792 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fcd88" Nov 23 08:59:17 crc kubenswrapper[4681]: I1123 08:59:17.205838 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fcd88" event={"ID":"983bf4c1-2076-491d-b173-8b0869bba32a","Type":"ContainerDied","Data":"06ea67064ad4eca9e4869eaba4cac054477a4e1e8768bf4c86e24bf5020d7f0c"} Nov 23 08:59:17 crc kubenswrapper[4681]: I1123 08:59:17.206798 4681 scope.go:117] "RemoveContainer" containerID="1d3e6663f357072b159b1ec0413aac8ec990816c288eacd4f9a635b833d9eba8" Nov 23 08:59:17 crc kubenswrapper[4681]: I1123 08:59:17.209248 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/983bf4c1-2076-491d-b173-8b0869bba32a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:59:17 crc kubenswrapper[4681]: I1123 08:59:17.209288 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/983bf4c1-2076-491d-b173-8b0869bba32a-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:59:17 crc kubenswrapper[4681]: I1123 08:59:17.209303 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lv2jg\" (UniqueName: \"kubernetes.io/projected/983bf4c1-2076-491d-b173-8b0869bba32a-kube-api-access-lv2jg\") on node \"crc\" DevicePath \"\"" Nov 23 08:59:17 crc kubenswrapper[4681]: I1123 08:59:17.256364 4681 scope.go:117] "RemoveContainer" containerID="62e42d510f669d04834ecf5a10e16203fd86a2eecefe7e02b6fb8949d0509b3a" Nov 23 08:59:17 crc kubenswrapper[4681]: I1123 08:59:17.267918 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fcd88"] Nov 23 08:59:17 crc kubenswrapper[4681]: I1123 08:59:17.269864 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fcd88"] Nov 23 08:59:17 crc kubenswrapper[4681]: I1123 08:59:17.278232 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ksfv2" Nov 23 08:59:17 crc kubenswrapper[4681]: I1123 08:59:17.290673 4681 scope.go:117] "RemoveContainer" containerID="fa0bb09f03b5beb8c4b22fcec47db89662a46ed2cbf8b8c810db18e84862668f" Nov 23 08:59:17 crc kubenswrapper[4681]: I1123 08:59:17.334300 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ksfv2" Nov 23 08:59:17 crc kubenswrapper[4681]: I1123 08:59:17.334609 4681 scope.go:117] "RemoveContainer" containerID="1d3e6663f357072b159b1ec0413aac8ec990816c288eacd4f9a635b833d9eba8" Nov 23 08:59:17 crc kubenswrapper[4681]: E1123 08:59:17.337310 4681 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"1d3e6663f357072b159b1ec0413aac8ec990816c288eacd4f9a635b833d9eba8\": container with ID starting with 1d3e6663f357072b159b1ec0413aac8ec990816c288eacd4f9a635b833d9eba8 not found: ID does not exist" containerID="1d3e6663f357072b159b1ec0413aac8ec990816c288eacd4f9a635b833d9eba8" Nov 23 08:59:17 crc kubenswrapper[4681]: I1123 08:59:17.337666 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d3e6663f357072b159b1ec0413aac8ec990816c288eacd4f9a635b833d9eba8"} err="failed to get container status \"1d3e6663f357072b159b1ec0413aac8ec990816c288eacd4f9a635b833d9eba8\": rpc error: code = NotFound desc = could not find container \"1d3e6663f357072b159b1ec0413aac8ec990816c288eacd4f9a635b833d9eba8\": container with ID starting with 1d3e6663f357072b159b1ec0413aac8ec990816c288eacd4f9a635b833d9eba8 not found: ID does not exist" Nov 23 08:59:17 crc kubenswrapper[4681]: I1123 08:59:17.337702 4681 scope.go:117] "RemoveContainer" containerID="62e42d510f669d04834ecf5a10e16203fd86a2eecefe7e02b6fb8949d0509b3a" Nov 23 08:59:17 crc kubenswrapper[4681]: E1123 08:59:17.337964 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62e42d510f669d04834ecf5a10e16203fd86a2eecefe7e02b6fb8949d0509b3a\": container with ID starting with 62e42d510f669d04834ecf5a10e16203fd86a2eecefe7e02b6fb8949d0509b3a not found: ID does not exist" containerID="62e42d510f669d04834ecf5a10e16203fd86a2eecefe7e02b6fb8949d0509b3a" Nov 23 08:59:17 crc kubenswrapper[4681]: I1123 08:59:17.337986 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62e42d510f669d04834ecf5a10e16203fd86a2eecefe7e02b6fb8949d0509b3a"} err="failed to get container status \"62e42d510f669d04834ecf5a10e16203fd86a2eecefe7e02b6fb8949d0509b3a\": rpc error: code = NotFound desc = could not find container \"62e42d510f669d04834ecf5a10e16203fd86a2eecefe7e02b6fb8949d0509b3a\": container with ID starting with 62e42d510f669d04834ecf5a10e16203fd86a2eecefe7e02b6fb8949d0509b3a not found: ID does not exist" Nov 23 08:59:17 crc kubenswrapper[4681]: I1123 08:59:17.338000 4681 scope.go:117] "RemoveContainer" containerID="fa0bb09f03b5beb8c4b22fcec47db89662a46ed2cbf8b8c810db18e84862668f" Nov 23 08:59:17 crc kubenswrapper[4681]: E1123 08:59:17.338222 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa0bb09f03b5beb8c4b22fcec47db89662a46ed2cbf8b8c810db18e84862668f\": container with ID starting with fa0bb09f03b5beb8c4b22fcec47db89662a46ed2cbf8b8c810db18e84862668f not found: ID does not exist" containerID="fa0bb09f03b5beb8c4b22fcec47db89662a46ed2cbf8b8c810db18e84862668f" Nov 23 08:59:17 crc kubenswrapper[4681]: I1123 08:59:17.338241 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa0bb09f03b5beb8c4b22fcec47db89662a46ed2cbf8b8c810db18e84862668f"} err="failed to get container status \"fa0bb09f03b5beb8c4b22fcec47db89662a46ed2cbf8b8c810db18e84862668f\": rpc error: code = NotFound desc = could not find container \"fa0bb09f03b5beb8c4b22fcec47db89662a46ed2cbf8b8c810db18e84862668f\": container with ID starting with fa0bb09f03b5beb8c4b22fcec47db89662a46ed2cbf8b8c810db18e84862668f not found: ID does not exist" Nov 23 08:59:18 crc kubenswrapper[4681]: I1123 08:59:18.217744 4681 generic.go:334] "Generic (PLEG): container finished" 
podID="6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd" containerID="294aa70a74517a306ac8b61199a69cacee09849adaa18f28534c95c2210129cb" exitCode=0 Nov 23 08:59:18 crc kubenswrapper[4681]: I1123 08:59:18.217812 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd","Type":"ContainerDied","Data":"294aa70a74517a306ac8b61199a69cacee09849adaa18f28534c95c2210129cb"} Nov 23 08:59:19 crc kubenswrapper[4681]: I1123 08:59:19.263175 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="983bf4c1-2076-491d-b173-8b0869bba32a" path="/var/lib/kubelet/pods/983bf4c1-2076-491d-b173-8b0869bba32a/volumes" Nov 23 08:59:19 crc kubenswrapper[4681]: I1123 08:59:19.525207 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ksfv2"] Nov 23 08:59:19 crc kubenswrapper[4681]: I1123 08:59:19.527960 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ksfv2" podUID="59c3f833-7952-4074-a531-15e4bd1a3966" containerName="registry-server" containerID="cri-o://ba78406eb4b8476f69259726422e7c2bbb5199fecade2f75dbe75a2cbc3d685a" gracePeriod=2 Nov 23 08:59:19 crc kubenswrapper[4681]: I1123 08:59:19.741413 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 23 08:59:19 crc kubenswrapper[4681]: I1123 08:59:19.867235 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-openstack-config\") pod \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\" (UID: \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\") " Nov 23 08:59:19 crc kubenswrapper[4681]: I1123 08:59:19.867280 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-openstack-config-secret\") pod \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\" (UID: \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\") " Nov 23 08:59:19 crc kubenswrapper[4681]: I1123 08:59:19.867663 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\" (UID: \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\") " Nov 23 08:59:19 crc kubenswrapper[4681]: I1123 08:59:19.867737 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-test-operator-ephemeral-workdir\") pod \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\" (UID: \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\") " Nov 23 08:59:19 crc kubenswrapper[4681]: I1123 08:59:19.867785 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-config-data\") pod \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\" (UID: \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\") " Nov 23 08:59:19 crc kubenswrapper[4681]: I1123 08:59:19.867840 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-test-operator-ephemeral-temporary\") pod \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\" (UID: \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\") " Nov 23 08:59:19 crc kubenswrapper[4681]: I1123 08:59:19.867908 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-ssh-key\") pod \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\" (UID: \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\") " Nov 23 08:59:19 crc kubenswrapper[4681]: I1123 08:59:19.867985 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bh7ps\" (UniqueName: \"kubernetes.io/projected/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-kube-api-access-bh7ps\") pod \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\" (UID: \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\") " Nov 23 08:59:19 crc kubenswrapper[4681]: I1123 08:59:19.868097 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-ca-certs\") pod \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\" (UID: \"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd\") " Nov 23 08:59:19 crc kubenswrapper[4681]: I1123 08:59:19.870528 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-config-data" (OuterVolumeSpecName: "config-data") pod "6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd" (UID: "6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:59:19 crc kubenswrapper[4681]: I1123 08:59:19.871240 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:59:19 crc kubenswrapper[4681]: I1123 08:59:19.871925 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd" (UID: "6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:59:19 crc kubenswrapper[4681]: I1123 08:59:19.880661 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-kube-api-access-bh7ps" (OuterVolumeSpecName: "kube-api-access-bh7ps") pod "6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd" (UID: "6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd"). InnerVolumeSpecName "kube-api-access-bh7ps". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:59:19 crc kubenswrapper[4681]: I1123 08:59:19.889485 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd" (UID: "6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:59:19 crc kubenswrapper[4681]: I1123 08:59:19.902775 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "test-operator-logs") pod "6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd" (UID: "6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 08:59:19 crc kubenswrapper[4681]: I1123 08:59:19.954745 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd" (UID: "6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:59:19 crc kubenswrapper[4681]: I1123 08:59:19.959507 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd" (UID: "6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:59:19 crc kubenswrapper[4681]: I1123 08:59:19.974246 4681 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Nov 23 08:59:19 crc kubenswrapper[4681]: I1123 08:59:19.974289 4681 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 23 08:59:19 crc kubenswrapper[4681]: I1123 08:59:19.974309 4681 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 23 08:59:19 crc kubenswrapper[4681]: I1123 08:59:19.974322 4681 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 08:59:19 crc kubenswrapper[4681]: I1123 08:59:19.974336 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bh7ps\" (UniqueName: \"kubernetes.io/projected/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-kube-api-access-bh7ps\") on node \"crc\" DevicePath \"\"" Nov 23 08:59:19 crc kubenswrapper[4681]: I1123 08:59:19.974349 4681 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 23 08:59:19 crc kubenswrapper[4681]: I1123 08:59:19.987345 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd" (UID: "6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:59:19 crc kubenswrapper[4681]: I1123 08:59:19.995623 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd" (UID: "6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:59:19 crc kubenswrapper[4681]: I1123 08:59:19.995903 4681 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Nov 23 08:59:20 crc kubenswrapper[4681]: I1123 08:59:20.029271 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ksfv2" Nov 23 08:59:20 crc kubenswrapper[4681]: I1123 08:59:20.077370 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59c3f833-7952-4074-a531-15e4bd1a3966-utilities\") pod \"59c3f833-7952-4074-a531-15e4bd1a3966\" (UID: \"59c3f833-7952-4074-a531-15e4bd1a3966\") " Nov 23 08:59:20 crc kubenswrapper[4681]: I1123 08:59:20.077818 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59c3f833-7952-4074-a531-15e4bd1a3966-catalog-content\") pod \"59c3f833-7952-4074-a531-15e4bd1a3966\" (UID: \"59c3f833-7952-4074-a531-15e4bd1a3966\") " Nov 23 08:59:20 crc kubenswrapper[4681]: I1123 08:59:20.077865 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59c3f833-7952-4074-a531-15e4bd1a3966-utilities" (OuterVolumeSpecName: "utilities") pod "59c3f833-7952-4074-a531-15e4bd1a3966" (UID: "59c3f833-7952-4074-a531-15e4bd1a3966"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:59:20 crc kubenswrapper[4681]: I1123 08:59:20.077906 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7txsj\" (UniqueName: \"kubernetes.io/projected/59c3f833-7952-4074-a531-15e4bd1a3966-kube-api-access-7txsj\") pod \"59c3f833-7952-4074-a531-15e4bd1a3966\" (UID: \"59c3f833-7952-4074-a531-15e4bd1a3966\") " Nov 23 08:59:20 crc kubenswrapper[4681]: I1123 08:59:20.078686 4681 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:59:20 crc kubenswrapper[4681]: I1123 08:59:20.078711 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59c3f833-7952-4074-a531-15e4bd1a3966-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:59:20 crc kubenswrapper[4681]: I1123 08:59:20.078724 4681 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Nov 23 08:59:20 crc kubenswrapper[4681]: I1123 08:59:20.078735 4681 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 23 08:59:20 crc kubenswrapper[4681]: I1123 08:59:20.081567 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59c3f833-7952-4074-a531-15e4bd1a3966-kube-api-access-7txsj" (OuterVolumeSpecName: "kube-api-access-7txsj") pod "59c3f833-7952-4074-a531-15e4bd1a3966" (UID: "59c3f833-7952-4074-a531-15e4bd1a3966"). InnerVolumeSpecName "kube-api-access-7txsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:59:20 crc kubenswrapper[4681]: I1123 08:59:20.123229 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59c3f833-7952-4074-a531-15e4bd1a3966-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "59c3f833-7952-4074-a531-15e4bd1a3966" (UID: "59c3f833-7952-4074-a531-15e4bd1a3966"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:59:20 crc kubenswrapper[4681]: I1123 08:59:20.180268 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59c3f833-7952-4074-a531-15e4bd1a3966-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:59:20 crc kubenswrapper[4681]: I1123 08:59:20.180302 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7txsj\" (UniqueName: \"kubernetes.io/projected/59c3f833-7952-4074-a531-15e4bd1a3966-kube-api-access-7txsj\") on node \"crc\" DevicePath \"\"" Nov 23 08:59:20 crc kubenswrapper[4681]: I1123 08:59:20.241413 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd","Type":"ContainerDied","Data":"adfef6c45cfefbdbc10642126bc4bb146feaafd826976882c1780a7007548609"} Nov 23 08:59:20 crc kubenswrapper[4681]: I1123 08:59:20.241493 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 23 08:59:20 crc kubenswrapper[4681]: I1123 08:59:20.241491 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adfef6c45cfefbdbc10642126bc4bb146feaafd826976882c1780a7007548609" Nov 23 08:59:20 crc kubenswrapper[4681]: I1123 08:59:20.245134 4681 generic.go:334] "Generic (PLEG): container finished" podID="59c3f833-7952-4074-a531-15e4bd1a3966" containerID="ba78406eb4b8476f69259726422e7c2bbb5199fecade2f75dbe75a2cbc3d685a" exitCode=0 Nov 23 08:59:20 crc kubenswrapper[4681]: I1123 08:59:20.245315 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ksfv2" Nov 23 08:59:20 crc kubenswrapper[4681]: I1123 08:59:20.245302 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksfv2" event={"ID":"59c3f833-7952-4074-a531-15e4bd1a3966","Type":"ContainerDied","Data":"ba78406eb4b8476f69259726422e7c2bbb5199fecade2f75dbe75a2cbc3d685a"} Nov 23 08:59:20 crc kubenswrapper[4681]: I1123 08:59:20.245572 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksfv2" event={"ID":"59c3f833-7952-4074-a531-15e4bd1a3966","Type":"ContainerDied","Data":"7924ec0e49a625142d880427c97678be50cc1481ba52b51a7362c2c7cf92d796"} Nov 23 08:59:20 crc kubenswrapper[4681]: I1123 08:59:20.245613 4681 scope.go:117] "RemoveContainer" containerID="ba78406eb4b8476f69259726422e7c2bbb5199fecade2f75dbe75a2cbc3d685a" Nov 23 08:59:20 crc kubenswrapper[4681]: I1123 08:59:20.269908 4681 scope.go:117] "RemoveContainer" containerID="e9291b81d0ddbfa1211a275a1f175b310c13fbc61d9ff5fa6f1975d07c68e95f" Nov 23 08:59:20 crc kubenswrapper[4681]: I1123 08:59:20.327413 4681 scope.go:117] "RemoveContainer" containerID="c46f4125d26ec01fac117a3bce4ae8fc4a4abec34abc219a3bb57a321f5adde4" Nov 23 08:59:20 crc kubenswrapper[4681]: I1123 08:59:20.338191 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ksfv2"] Nov 23 08:59:20 crc kubenswrapper[4681]: I1123 08:59:20.350896 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ksfv2"] Nov 23 08:59:20 crc kubenswrapper[4681]: I1123 08:59:20.365251 4681 scope.go:117] "RemoveContainer" containerID="ba78406eb4b8476f69259726422e7c2bbb5199fecade2f75dbe75a2cbc3d685a" Nov 23 08:59:20 crc kubenswrapper[4681]: E1123 08:59:20.365796 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba78406eb4b8476f69259726422e7c2bbb5199fecade2f75dbe75a2cbc3d685a\": container with ID starting with ba78406eb4b8476f69259726422e7c2bbb5199fecade2f75dbe75a2cbc3d685a not found: ID does not exist" containerID="ba78406eb4b8476f69259726422e7c2bbb5199fecade2f75dbe75a2cbc3d685a" Nov 23 08:59:20 crc kubenswrapper[4681]: I1123 08:59:20.365850 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba78406eb4b8476f69259726422e7c2bbb5199fecade2f75dbe75a2cbc3d685a"} err="failed to get container status \"ba78406eb4b8476f69259726422e7c2bbb5199fecade2f75dbe75a2cbc3d685a\": rpc error: code = NotFound desc = could not find container \"ba78406eb4b8476f69259726422e7c2bbb5199fecade2f75dbe75a2cbc3d685a\": container with ID starting with ba78406eb4b8476f69259726422e7c2bbb5199fecade2f75dbe75a2cbc3d685a not found: ID does not exist" Nov 23 08:59:20 crc 
kubenswrapper[4681]: I1123 08:59:20.365887 4681 scope.go:117] "RemoveContainer" containerID="e9291b81d0ddbfa1211a275a1f175b310c13fbc61d9ff5fa6f1975d07c68e95f" Nov 23 08:59:20 crc kubenswrapper[4681]: E1123 08:59:20.366546 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9291b81d0ddbfa1211a275a1f175b310c13fbc61d9ff5fa6f1975d07c68e95f\": container with ID starting with e9291b81d0ddbfa1211a275a1f175b310c13fbc61d9ff5fa6f1975d07c68e95f not found: ID does not exist" containerID="e9291b81d0ddbfa1211a275a1f175b310c13fbc61d9ff5fa6f1975d07c68e95f" Nov 23 08:59:20 crc kubenswrapper[4681]: I1123 08:59:20.366608 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9291b81d0ddbfa1211a275a1f175b310c13fbc61d9ff5fa6f1975d07c68e95f"} err="failed to get container status \"e9291b81d0ddbfa1211a275a1f175b310c13fbc61d9ff5fa6f1975d07c68e95f\": rpc error: code = NotFound desc = could not find container \"e9291b81d0ddbfa1211a275a1f175b310c13fbc61d9ff5fa6f1975d07c68e95f\": container with ID starting with e9291b81d0ddbfa1211a275a1f175b310c13fbc61d9ff5fa6f1975d07c68e95f not found: ID does not exist" Nov 23 08:59:20 crc kubenswrapper[4681]: I1123 08:59:20.366653 4681 scope.go:117] "RemoveContainer" containerID="c46f4125d26ec01fac117a3bce4ae8fc4a4abec34abc219a3bb57a321f5adde4" Nov 23 08:59:20 crc kubenswrapper[4681]: E1123 08:59:20.368438 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c46f4125d26ec01fac117a3bce4ae8fc4a4abec34abc219a3bb57a321f5adde4\": container with ID starting with c46f4125d26ec01fac117a3bce4ae8fc4a4abec34abc219a3bb57a321f5adde4 not found: ID does not exist" containerID="c46f4125d26ec01fac117a3bce4ae8fc4a4abec34abc219a3bb57a321f5adde4" Nov 23 08:59:20 crc kubenswrapper[4681]: I1123 08:59:20.368489 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c46f4125d26ec01fac117a3bce4ae8fc4a4abec34abc219a3bb57a321f5adde4"} err="failed to get container status \"c46f4125d26ec01fac117a3bce4ae8fc4a4abec34abc219a3bb57a321f5adde4\": rpc error: code = NotFound desc = could not find container \"c46f4125d26ec01fac117a3bce4ae8fc4a4abec34abc219a3bb57a321f5adde4\": container with ID starting with c46f4125d26ec01fac117a3bce4ae8fc4a4abec34abc219a3bb57a321f5adde4 not found: ID does not exist" Nov 23 08:59:21 crc kubenswrapper[4681]: I1123 08:59:21.262392 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59c3f833-7952-4074-a531-15e4bd1a3966" path="/var/lib/kubelet/pods/59c3f833-7952-4074-a531-15e4bd1a3966/volumes" Nov 23 08:59:24 crc kubenswrapper[4681]: I1123 08:59:24.643020 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m5vjx" Nov 23 08:59:24 crc kubenswrapper[4681]: I1123 08:59:24.687231 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m5vjx" Nov 23 08:59:24 crc kubenswrapper[4681]: I1123 08:59:24.905328 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 23 08:59:24 crc kubenswrapper[4681]: E1123 08:59:24.907077 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59c3f833-7952-4074-a531-15e4bd1a3966" containerName="extract-content" Nov 23 08:59:24 crc kubenswrapper[4681]: I1123 08:59:24.907104 4681 
state_mem.go:107] "Deleted CPUSet assignment" podUID="59c3f833-7952-4074-a531-15e4bd1a3966" containerName="extract-content" Nov 23 08:59:24 crc kubenswrapper[4681]: E1123 08:59:24.907123 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="983bf4c1-2076-491d-b173-8b0869bba32a" containerName="registry-server" Nov 23 08:59:24 crc kubenswrapper[4681]: I1123 08:59:24.907129 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="983bf4c1-2076-491d-b173-8b0869bba32a" containerName="registry-server" Nov 23 08:59:24 crc kubenswrapper[4681]: E1123 08:59:24.907153 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd" containerName="tempest-tests-tempest-tests-runner" Nov 23 08:59:24 crc kubenswrapper[4681]: I1123 08:59:24.907158 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd" containerName="tempest-tests-tempest-tests-runner" Nov 23 08:59:24 crc kubenswrapper[4681]: E1123 08:59:24.907171 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="983bf4c1-2076-491d-b173-8b0869bba32a" containerName="extract-content" Nov 23 08:59:24 crc kubenswrapper[4681]: I1123 08:59:24.907176 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="983bf4c1-2076-491d-b173-8b0869bba32a" containerName="extract-content" Nov 23 08:59:24 crc kubenswrapper[4681]: E1123 08:59:24.907190 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59c3f833-7952-4074-a531-15e4bd1a3966" containerName="registry-server" Nov 23 08:59:24 crc kubenswrapper[4681]: I1123 08:59:24.907195 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="59c3f833-7952-4074-a531-15e4bd1a3966" containerName="registry-server" Nov 23 08:59:24 crc kubenswrapper[4681]: E1123 08:59:24.907222 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="983bf4c1-2076-491d-b173-8b0869bba32a" containerName="extract-utilities" Nov 23 08:59:24 crc kubenswrapper[4681]: I1123 08:59:24.907228 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="983bf4c1-2076-491d-b173-8b0869bba32a" containerName="extract-utilities" Nov 23 08:59:24 crc kubenswrapper[4681]: E1123 08:59:24.907239 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59c3f833-7952-4074-a531-15e4bd1a3966" containerName="extract-utilities" Nov 23 08:59:24 crc kubenswrapper[4681]: I1123 08:59:24.907244 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="59c3f833-7952-4074-a531-15e4bd1a3966" containerName="extract-utilities" Nov 23 08:59:24 crc kubenswrapper[4681]: I1123 08:59:24.907486 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="983bf4c1-2076-491d-b173-8b0869bba32a" containerName="registry-server" Nov 23 08:59:24 crc kubenswrapper[4681]: I1123 08:59:24.907511 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="59c3f833-7952-4074-a531-15e4bd1a3966" containerName="registry-server" Nov 23 08:59:24 crc kubenswrapper[4681]: I1123 08:59:24.907543 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ff8f0d7-23a9-4d32-bc42-5d2e1e4e6efd" containerName="tempest-tests-tempest-tests-runner" Nov 23 08:59:24 crc kubenswrapper[4681]: I1123 08:59:24.910013 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 23 08:59:24 crc kubenswrapper[4681]: I1123 08:59:24.914862 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-9sd4j" Nov 23 08:59:24 crc kubenswrapper[4681]: I1123 08:59:24.959773 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 23 08:59:25 crc kubenswrapper[4681]: I1123 08:59:25.078018 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"df9f17dc-8036-4c17-8420-89a70f81ed6b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 23 08:59:25 crc kubenswrapper[4681]: I1123 08:59:25.078201 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtkn8\" (UniqueName: \"kubernetes.io/projected/df9f17dc-8036-4c17-8420-89a70f81ed6b-kube-api-access-dtkn8\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"df9f17dc-8036-4c17-8420-89a70f81ed6b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 23 08:59:25 crc kubenswrapper[4681]: I1123 08:59:25.181533 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"df9f17dc-8036-4c17-8420-89a70f81ed6b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 23 08:59:25 crc kubenswrapper[4681]: I1123 08:59:25.181630 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtkn8\" (UniqueName: \"kubernetes.io/projected/df9f17dc-8036-4c17-8420-89a70f81ed6b-kube-api-access-dtkn8\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"df9f17dc-8036-4c17-8420-89a70f81ed6b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 23 08:59:25 crc kubenswrapper[4681]: I1123 08:59:25.183983 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"df9f17dc-8036-4c17-8420-89a70f81ed6b\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 23 08:59:25 crc kubenswrapper[4681]: I1123 08:59:25.209768 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtkn8\" (UniqueName: \"kubernetes.io/projected/df9f17dc-8036-4c17-8420-89a70f81ed6b-kube-api-access-dtkn8\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"df9f17dc-8036-4c17-8420-89a70f81ed6b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 23 08:59:25 crc kubenswrapper[4681]: I1123 08:59:25.212455 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"df9f17dc-8036-4c17-8420-89a70f81ed6b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 23 08:59:25 crc 
kubenswrapper[4681]: I1123 08:59:25.227986 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 23 08:59:25 crc kubenswrapper[4681]: I1123 08:59:25.484304 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m5vjx"] Nov 23 08:59:25 crc kubenswrapper[4681]: I1123 08:59:25.697351 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 23 08:59:25 crc kubenswrapper[4681]: W1123 08:59:25.700359 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf9f17dc_8036_4c17_8420_89a70f81ed6b.slice/crio-13f062677d4d4f74c8e0ead9ca67b6c2de370c74072b8a104882303d713135b4 WatchSource:0}: Error finding container 13f062677d4d4f74c8e0ead9ca67b6c2de370c74072b8a104882303d713135b4: Status 404 returned error can't find the container with id 13f062677d4d4f74c8e0ead9ca67b6c2de370c74072b8a104882303d713135b4 Nov 23 08:59:26 crc kubenswrapper[4681]: I1123 08:59:26.310096 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"df9f17dc-8036-4c17-8420-89a70f81ed6b","Type":"ContainerStarted","Data":"13f062677d4d4f74c8e0ead9ca67b6c2de370c74072b8a104882303d713135b4"} Nov 23 08:59:26 crc kubenswrapper[4681]: I1123 08:59:26.310470 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m5vjx" podUID="35e06887-a3a7-471b-a072-d55a0bfbca74" containerName="registry-server" containerID="cri-o://fc26773ad8ade7e4df48de173f1098d9810b7fcee3ae6641e41d47ce7c9b9e65" gracePeriod=2 Nov 23 08:59:26 crc kubenswrapper[4681]: I1123 08:59:26.883915 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m5vjx" Nov 23 08:59:26 crc kubenswrapper[4681]: I1123 08:59:26.924972 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35e06887-a3a7-471b-a072-d55a0bfbca74-utilities\") pod \"35e06887-a3a7-471b-a072-d55a0bfbca74\" (UID: \"35e06887-a3a7-471b-a072-d55a0bfbca74\") " Nov 23 08:59:26 crc kubenswrapper[4681]: I1123 08:59:26.925059 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmlz7\" (UniqueName: \"kubernetes.io/projected/35e06887-a3a7-471b-a072-d55a0bfbca74-kube-api-access-dmlz7\") pod \"35e06887-a3a7-471b-a072-d55a0bfbca74\" (UID: \"35e06887-a3a7-471b-a072-d55a0bfbca74\") " Nov 23 08:59:26 crc kubenswrapper[4681]: I1123 08:59:26.925120 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35e06887-a3a7-471b-a072-d55a0bfbca74-catalog-content\") pod \"35e06887-a3a7-471b-a072-d55a0bfbca74\" (UID: \"35e06887-a3a7-471b-a072-d55a0bfbca74\") " Nov 23 08:59:26 crc kubenswrapper[4681]: I1123 08:59:26.926653 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35e06887-a3a7-471b-a072-d55a0bfbca74-utilities" (OuterVolumeSpecName: "utilities") pod "35e06887-a3a7-471b-a072-d55a0bfbca74" (UID: "35e06887-a3a7-471b-a072-d55a0bfbca74"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:59:26 crc kubenswrapper[4681]: I1123 08:59:26.931394 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35e06887-a3a7-471b-a072-d55a0bfbca74-kube-api-access-dmlz7" (OuterVolumeSpecName: "kube-api-access-dmlz7") pod "35e06887-a3a7-471b-a072-d55a0bfbca74" (UID: "35e06887-a3a7-471b-a072-d55a0bfbca74"). InnerVolumeSpecName "kube-api-access-dmlz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:59:27 crc kubenswrapper[4681]: I1123 08:59:27.017280 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35e06887-a3a7-471b-a072-d55a0bfbca74-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35e06887-a3a7-471b-a072-d55a0bfbca74" (UID: "35e06887-a3a7-471b-a072-d55a0bfbca74"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:59:27 crc kubenswrapper[4681]: I1123 08:59:27.027747 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35e06887-a3a7-471b-a072-d55a0bfbca74-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:59:27 crc kubenswrapper[4681]: I1123 08:59:27.027779 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmlz7\" (UniqueName: \"kubernetes.io/projected/35e06887-a3a7-471b-a072-d55a0bfbca74-kube-api-access-dmlz7\") on node \"crc\" DevicePath \"\"" Nov 23 08:59:27 crc kubenswrapper[4681]: I1123 08:59:27.027791 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35e06887-a3a7-471b-a072-d55a0bfbca74-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:59:27 crc kubenswrapper[4681]: I1123 08:59:27.329172 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"df9f17dc-8036-4c17-8420-89a70f81ed6b","Type":"ContainerStarted","Data":"f34b4c7b55e3d64d403acbcff9834b34f9a8b5d12a73a67c98c260ff43accce7"} Nov 23 08:59:27 crc kubenswrapper[4681]: I1123 08:59:27.331412 4681 generic.go:334] "Generic (PLEG): container finished" podID="35e06887-a3a7-471b-a072-d55a0bfbca74" containerID="fc26773ad8ade7e4df48de173f1098d9810b7fcee3ae6641e41d47ce7c9b9e65" exitCode=0 Nov 23 08:59:27 crc kubenswrapper[4681]: I1123 08:59:27.331509 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m5vjx" event={"ID":"35e06887-a3a7-471b-a072-d55a0bfbca74","Type":"ContainerDied","Data":"fc26773ad8ade7e4df48de173f1098d9810b7fcee3ae6641e41d47ce7c9b9e65"} Nov 23 08:59:27 crc kubenswrapper[4681]: I1123 08:59:27.331571 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m5vjx" event={"ID":"35e06887-a3a7-471b-a072-d55a0bfbca74","Type":"ContainerDied","Data":"1469e5a50473a82c7b2258758d362c52ee7ae53980523f506007dec8b3b3a780"} Nov 23 08:59:27 crc kubenswrapper[4681]: I1123 08:59:27.331594 4681 scope.go:117] "RemoveContainer" containerID="fc26773ad8ade7e4df48de173f1098d9810b7fcee3ae6641e41d47ce7c9b9e65" Nov 23 08:59:27 crc kubenswrapper[4681]: I1123 08:59:27.331522 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m5vjx" Nov 23 08:59:27 crc kubenswrapper[4681]: I1123 08:59:27.343084 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.263784028 podStartE2EDuration="3.34306733s" podCreationTimestamp="2025-11-23 08:59:24 +0000 UTC" firstStartedPulling="2025-11-23 08:59:25.70716993 +0000 UTC m=+8102.776679167" lastFinishedPulling="2025-11-23 08:59:26.786453232 +0000 UTC m=+8103.855962469" observedRunningTime="2025-11-23 08:59:27.34171404 +0000 UTC m=+8104.411223277" watchObservedRunningTime="2025-11-23 08:59:27.34306733 +0000 UTC m=+8104.412576566" Nov 23 08:59:27 crc kubenswrapper[4681]: I1123 08:59:27.362643 4681 scope.go:117] "RemoveContainer" containerID="58ff582dfd0c25007be301992b3fc39121bb29f361f922809cbe7c9c387f5ba0" Nov 23 08:59:27 crc kubenswrapper[4681]: I1123 08:59:27.371050 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m5vjx"] Nov 23 08:59:27 crc kubenswrapper[4681]: I1123 08:59:27.377419 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m5vjx"] Nov 23 08:59:27 crc kubenswrapper[4681]: I1123 08:59:27.380619 4681 scope.go:117] "RemoveContainer" containerID="f0baf443214d21891566810d0d879bb0d238376e342e5dd9328866aae532c322" Nov 23 08:59:27 crc kubenswrapper[4681]: I1123 08:59:27.403842 4681 scope.go:117] "RemoveContainer" containerID="fc26773ad8ade7e4df48de173f1098d9810b7fcee3ae6641e41d47ce7c9b9e65" Nov 23 08:59:27 crc kubenswrapper[4681]: E1123 08:59:27.404936 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc26773ad8ade7e4df48de173f1098d9810b7fcee3ae6641e41d47ce7c9b9e65\": container with ID starting with fc26773ad8ade7e4df48de173f1098d9810b7fcee3ae6641e41d47ce7c9b9e65 not found: ID does not exist" containerID="fc26773ad8ade7e4df48de173f1098d9810b7fcee3ae6641e41d47ce7c9b9e65" Nov 23 08:59:27 crc kubenswrapper[4681]: I1123 08:59:27.405016 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc26773ad8ade7e4df48de173f1098d9810b7fcee3ae6641e41d47ce7c9b9e65"} err="failed to get container status \"fc26773ad8ade7e4df48de173f1098d9810b7fcee3ae6641e41d47ce7c9b9e65\": rpc error: code = NotFound desc = could not find container \"fc26773ad8ade7e4df48de173f1098d9810b7fcee3ae6641e41d47ce7c9b9e65\": container with ID starting with fc26773ad8ade7e4df48de173f1098d9810b7fcee3ae6641e41d47ce7c9b9e65 not found: ID does not exist" Nov 23 08:59:27 crc kubenswrapper[4681]: I1123 08:59:27.405047 4681 scope.go:117] "RemoveContainer" containerID="58ff582dfd0c25007be301992b3fc39121bb29f361f922809cbe7c9c387f5ba0" Nov 23 08:59:27 crc kubenswrapper[4681]: E1123 08:59:27.405523 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58ff582dfd0c25007be301992b3fc39121bb29f361f922809cbe7c9c387f5ba0\": container with ID starting with 58ff582dfd0c25007be301992b3fc39121bb29f361f922809cbe7c9c387f5ba0 not found: ID does not exist" containerID="58ff582dfd0c25007be301992b3fc39121bb29f361f922809cbe7c9c387f5ba0" Nov 23 08:59:27 crc kubenswrapper[4681]: I1123 08:59:27.405573 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58ff582dfd0c25007be301992b3fc39121bb29f361f922809cbe7c9c387f5ba0"} err="failed to get 
container status \"58ff582dfd0c25007be301992b3fc39121bb29f361f922809cbe7c9c387f5ba0\": rpc error: code = NotFound desc = could not find container \"58ff582dfd0c25007be301992b3fc39121bb29f361f922809cbe7c9c387f5ba0\": container with ID starting with 58ff582dfd0c25007be301992b3fc39121bb29f361f922809cbe7c9c387f5ba0 not found: ID does not exist" Nov 23 08:59:27 crc kubenswrapper[4681]: I1123 08:59:27.405595 4681 scope.go:117] "RemoveContainer" containerID="f0baf443214d21891566810d0d879bb0d238376e342e5dd9328866aae532c322" Nov 23 08:59:27 crc kubenswrapper[4681]: E1123 08:59:27.405883 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0baf443214d21891566810d0d879bb0d238376e342e5dd9328866aae532c322\": container with ID starting with f0baf443214d21891566810d0d879bb0d238376e342e5dd9328866aae532c322 not found: ID does not exist" containerID="f0baf443214d21891566810d0d879bb0d238376e342e5dd9328866aae532c322" Nov 23 08:59:27 crc kubenswrapper[4681]: I1123 08:59:27.405975 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0baf443214d21891566810d0d879bb0d238376e342e5dd9328866aae532c322"} err="failed to get container status \"f0baf443214d21891566810d0d879bb0d238376e342e5dd9328866aae532c322\": rpc error: code = NotFound desc = could not find container \"f0baf443214d21891566810d0d879bb0d238376e342e5dd9328866aae532c322\": container with ID starting with f0baf443214d21891566810d0d879bb0d238376e342e5dd9328866aae532c322 not found: ID does not exist" Nov 23 08:59:29 crc kubenswrapper[4681]: I1123 08:59:29.262117 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35e06887-a3a7-471b-a072-d55a0bfbca74" path="/var/lib/kubelet/pods/35e06887-a3a7-471b-a072-d55a0bfbca74/volumes" Nov 23 09:00:00 crc kubenswrapper[4681]: I1123 09:00:00.203284 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398140-vks7c"] Nov 23 09:00:00 crc kubenswrapper[4681]: E1123 09:00:00.204334 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35e06887-a3a7-471b-a072-d55a0bfbca74" containerName="registry-server" Nov 23 09:00:00 crc kubenswrapper[4681]: I1123 09:00:00.204361 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="35e06887-a3a7-471b-a072-d55a0bfbca74" containerName="registry-server" Nov 23 09:00:00 crc kubenswrapper[4681]: E1123 09:00:00.204371 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35e06887-a3a7-471b-a072-d55a0bfbca74" containerName="extract-utilities" Nov 23 09:00:00 crc kubenswrapper[4681]: I1123 09:00:00.204377 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="35e06887-a3a7-471b-a072-d55a0bfbca74" containerName="extract-utilities" Nov 23 09:00:00 crc kubenswrapper[4681]: E1123 09:00:00.204389 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35e06887-a3a7-471b-a072-d55a0bfbca74" containerName="extract-content" Nov 23 09:00:00 crc kubenswrapper[4681]: I1123 09:00:00.204395 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="35e06887-a3a7-471b-a072-d55a0bfbca74" containerName="extract-content" Nov 23 09:00:00 crc kubenswrapper[4681]: I1123 09:00:00.204705 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="35e06887-a3a7-471b-a072-d55a0bfbca74" containerName="registry-server" Nov 23 09:00:00 crc kubenswrapper[4681]: I1123 09:00:00.205393 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-vks7c" Nov 23 09:00:00 crc kubenswrapper[4681]: I1123 09:00:00.208707 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 09:00:00 crc kubenswrapper[4681]: I1123 09:00:00.208754 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 09:00:00 crc kubenswrapper[4681]: I1123 09:00:00.210155 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e-secret-volume\") pod \"collect-profiles-29398140-vks7c\" (UID: \"61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-vks7c" Nov 23 09:00:00 crc kubenswrapper[4681]: I1123 09:00:00.210214 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e-config-volume\") pod \"collect-profiles-29398140-vks7c\" (UID: \"61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-vks7c" Nov 23 09:00:00 crc kubenswrapper[4681]: I1123 09:00:00.210301 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvzfv\" (UniqueName: \"kubernetes.io/projected/61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e-kube-api-access-mvzfv\") pod \"collect-profiles-29398140-vks7c\" (UID: \"61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-vks7c" Nov 23 09:00:00 crc kubenswrapper[4681]: I1123 09:00:00.308354 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398140-vks7c"] Nov 23 09:00:00 crc kubenswrapper[4681]: I1123 09:00:00.314154 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e-secret-volume\") pod \"collect-profiles-29398140-vks7c\" (UID: \"61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-vks7c" Nov 23 09:00:00 crc kubenswrapper[4681]: I1123 09:00:00.314258 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e-config-volume\") pod \"collect-profiles-29398140-vks7c\" (UID: \"61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-vks7c" Nov 23 09:00:00 crc kubenswrapper[4681]: I1123 09:00:00.314309 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvzfv\" (UniqueName: \"kubernetes.io/projected/61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e-kube-api-access-mvzfv\") pod \"collect-profiles-29398140-vks7c\" (UID: \"61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-vks7c" Nov 23 09:00:00 crc kubenswrapper[4681]: I1123 09:00:00.323395 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e-config-volume\") pod 
\"collect-profiles-29398140-vks7c\" (UID: \"61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-vks7c" Nov 23 09:00:00 crc kubenswrapper[4681]: I1123 09:00:00.337959 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e-secret-volume\") pod \"collect-profiles-29398140-vks7c\" (UID: \"61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-vks7c" Nov 23 09:00:00 crc kubenswrapper[4681]: I1123 09:00:00.338602 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvzfv\" (UniqueName: \"kubernetes.io/projected/61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e-kube-api-access-mvzfv\") pod \"collect-profiles-29398140-vks7c\" (UID: \"61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-vks7c" Nov 23 09:00:00 crc kubenswrapper[4681]: I1123 09:00:00.522084 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-vks7c" Nov 23 09:00:00 crc kubenswrapper[4681]: I1123 09:00:00.949703 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398140-vks7c"] Nov 23 09:00:01 crc kubenswrapper[4681]: I1123 09:00:01.689362 4681 generic.go:334] "Generic (PLEG): container finished" podID="61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e" containerID="924d81ff86302476ac5cbabec9a25a1cb558ff7e394436cbe6c8e35f85a1e54f" exitCode=0 Nov 23 09:00:01 crc kubenswrapper[4681]: I1123 09:00:01.689508 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-vks7c" event={"ID":"61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e","Type":"ContainerDied","Data":"924d81ff86302476ac5cbabec9a25a1cb558ff7e394436cbe6c8e35f85a1e54f"} Nov 23 09:00:01 crc kubenswrapper[4681]: I1123 09:00:01.689766 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-vks7c" event={"ID":"61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e","Type":"ContainerStarted","Data":"5794074d85a917b411a1decc994c9aa35cf685e37059066fef0dd181345858fe"} Nov 23 09:00:02 crc kubenswrapper[4681]: I1123 09:00:02.990886 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-vks7c" Nov 23 09:00:03 crc kubenswrapper[4681]: I1123 09:00:03.093296 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e-config-volume\") pod \"61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e\" (UID: \"61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e\") " Nov 23 09:00:03 crc kubenswrapper[4681]: I1123 09:00:03.093618 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvzfv\" (UniqueName: \"kubernetes.io/projected/61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e-kube-api-access-mvzfv\") pod \"61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e\" (UID: \"61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e\") " Nov 23 09:00:03 crc kubenswrapper[4681]: I1123 09:00:03.093696 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e-secret-volume\") pod \"61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e\" (UID: \"61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e\") " Nov 23 09:00:03 crc kubenswrapper[4681]: I1123 09:00:03.094028 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e-config-volume" (OuterVolumeSpecName: "config-volume") pod "61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e" (UID: "61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 09:00:03 crc kubenswrapper[4681]: I1123 09:00:03.094486 4681 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 09:00:03 crc kubenswrapper[4681]: I1123 09:00:03.098888 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e-kube-api-access-mvzfv" (OuterVolumeSpecName: "kube-api-access-mvzfv") pod "61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e" (UID: "61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e"). InnerVolumeSpecName "kube-api-access-mvzfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:00:03 crc kubenswrapper[4681]: I1123 09:00:03.101797 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e" (UID: "61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:00:03 crc kubenswrapper[4681]: I1123 09:00:03.196875 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvzfv\" (UniqueName: \"kubernetes.io/projected/61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e-kube-api-access-mvzfv\") on node \"crc\" DevicePath \"\"" Nov 23 09:00:03 crc kubenswrapper[4681]: I1123 09:00:03.196907 4681 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 09:00:03 crc kubenswrapper[4681]: I1123 09:00:03.711703 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-vks7c" event={"ID":"61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e","Type":"ContainerDied","Data":"5794074d85a917b411a1decc994c9aa35cf685e37059066fef0dd181345858fe"} Nov 23 09:00:03 crc kubenswrapper[4681]: I1123 09:00:03.711762 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5794074d85a917b411a1decc994c9aa35cf685e37059066fef0dd181345858fe" Nov 23 09:00:03 crc kubenswrapper[4681]: I1123 09:00:03.712128 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-vks7c" Nov 23 09:00:04 crc kubenswrapper[4681]: I1123 09:00:04.073509 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398095-22h92"] Nov 23 09:00:04 crc kubenswrapper[4681]: I1123 09:00:04.078983 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398095-22h92"] Nov 23 09:00:05 crc kubenswrapper[4681]: I1123 09:00:05.273284 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9" path="/var/lib/kubelet/pods/8ad329d9-f38a-4cd1-a0ea-f6f88771b0d9/volumes" Nov 23 09:00:42 crc kubenswrapper[4681]: I1123 09:00:42.296053 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:00:42 crc kubenswrapper[4681]: I1123 09:00:42.296529 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:00:49 crc kubenswrapper[4681]: I1123 09:00:49.237981 4681 scope.go:117] "RemoveContainer" containerID="3a39aaffcacd41f77be272896e12dc21e06439ed6613f2b2903580ca0b67ff24" Nov 23 09:01:00 crc kubenswrapper[4681]: I1123 09:01:00.157936 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29398141-vkxhx"] Nov 23 09:01:00 crc kubenswrapper[4681]: E1123 09:01:00.158828 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e" containerName="collect-profiles" Nov 23 09:01:00 crc kubenswrapper[4681]: I1123 09:01:00.158843 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="61bc1ae1-d2a6-47a2-93b6-d522e8c17b4e" containerName="collect-profiles" Nov 23 09:01:00 crc kubenswrapper[4681]: I1123 
Nov 23 09:01:00 crc kubenswrapper[4681]: I1123 09:01:00.159647 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29398141-vkxhx"
Nov 23 09:01:00 crc kubenswrapper[4681]: I1123 09:01:00.177052 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29398141-vkxhx"]
Nov 23 09:01:00 crc kubenswrapper[4681]: I1123 09:01:00.202483 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43e20c13-b110-4732-9ebf-a9857afdad9a-combined-ca-bundle\") pod \"keystone-cron-29398141-vkxhx\" (UID: \"43e20c13-b110-4732-9ebf-a9857afdad9a\") " pod="openstack/keystone-cron-29398141-vkxhx"
Nov 23 09:01:00 crc kubenswrapper[4681]: I1123 09:01:00.202592 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43e20c13-b110-4732-9ebf-a9857afdad9a-config-data\") pod \"keystone-cron-29398141-vkxhx\" (UID: \"43e20c13-b110-4732-9ebf-a9857afdad9a\") " pod="openstack/keystone-cron-29398141-vkxhx"
Nov 23 09:01:00 crc kubenswrapper[4681]: I1123 09:01:00.202677 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c48dc\" (UniqueName: \"kubernetes.io/projected/43e20c13-b110-4732-9ebf-a9857afdad9a-kube-api-access-c48dc\") pod \"keystone-cron-29398141-vkxhx\" (UID: \"43e20c13-b110-4732-9ebf-a9857afdad9a\") " pod="openstack/keystone-cron-29398141-vkxhx"
Nov 23 09:01:00 crc kubenswrapper[4681]: I1123 09:01:00.202761 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/43e20c13-b110-4732-9ebf-a9857afdad9a-fernet-keys\") pod \"keystone-cron-29398141-vkxhx\" (UID: \"43e20c13-b110-4732-9ebf-a9857afdad9a\") " pod="openstack/keystone-cron-29398141-vkxhx"
Nov 23 09:01:00 crc kubenswrapper[4681]: I1123 09:01:00.304548 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/43e20c13-b110-4732-9ebf-a9857afdad9a-fernet-keys\") pod \"keystone-cron-29398141-vkxhx\" (UID: \"43e20c13-b110-4732-9ebf-a9857afdad9a\") " pod="openstack/keystone-cron-29398141-vkxhx"
Nov 23 09:01:00 crc kubenswrapper[4681]: I1123 09:01:00.304664 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43e20c13-b110-4732-9ebf-a9857afdad9a-combined-ca-bundle\") pod \"keystone-cron-29398141-vkxhx\" (UID: \"43e20c13-b110-4732-9ebf-a9857afdad9a\") " pod="openstack/keystone-cron-29398141-vkxhx"
Nov 23 09:01:00 crc kubenswrapper[4681]: I1123 09:01:00.304714 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43e20c13-b110-4732-9ebf-a9857afdad9a-config-data\") pod \"keystone-cron-29398141-vkxhx\" (UID: \"43e20c13-b110-4732-9ebf-a9857afdad9a\") " pod="openstack/keystone-cron-29398141-vkxhx"
Nov 23 09:01:00 crc kubenswrapper[4681]: I1123 09:01:00.304752 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c48dc\" (UniqueName: \"kubernetes.io/projected/43e20c13-b110-4732-9ebf-a9857afdad9a-kube-api-access-c48dc\") pod \"keystone-cron-29398141-vkxhx\" (UID: \"43e20c13-b110-4732-9ebf-a9857afdad9a\") " pod="openstack/keystone-cron-29398141-vkxhx"
Nov 23 09:01:00 crc kubenswrapper[4681]: I1123 09:01:00.312873 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43e20c13-b110-4732-9ebf-a9857afdad9a-config-data\") pod \"keystone-cron-29398141-vkxhx\" (UID: \"43e20c13-b110-4732-9ebf-a9857afdad9a\") " pod="openstack/keystone-cron-29398141-vkxhx"
Nov 23 09:01:00 crc kubenswrapper[4681]: I1123 09:01:00.314359 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43e20c13-b110-4732-9ebf-a9857afdad9a-combined-ca-bundle\") pod \"keystone-cron-29398141-vkxhx\" (UID: \"43e20c13-b110-4732-9ebf-a9857afdad9a\") " pod="openstack/keystone-cron-29398141-vkxhx"
Nov 23 09:01:00 crc kubenswrapper[4681]: I1123 09:01:00.315007 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/43e20c13-b110-4732-9ebf-a9857afdad9a-fernet-keys\") pod \"keystone-cron-29398141-vkxhx\" (UID: \"43e20c13-b110-4732-9ebf-a9857afdad9a\") " pod="openstack/keystone-cron-29398141-vkxhx"
Nov 23 09:01:00 crc kubenswrapper[4681]: I1123 09:01:00.321392 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c48dc\" (UniqueName: \"kubernetes.io/projected/43e20c13-b110-4732-9ebf-a9857afdad9a-kube-api-access-c48dc\") pod \"keystone-cron-29398141-vkxhx\" (UID: \"43e20c13-b110-4732-9ebf-a9857afdad9a\") " pod="openstack/keystone-cron-29398141-vkxhx"
Nov 23 09:01:00 crc kubenswrapper[4681]: I1123 09:01:00.473955 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29398141-vkxhx"
Need to start a new one" pod="openstack/keystone-cron-29398141-vkxhx" Nov 23 09:01:00 crc kubenswrapper[4681]: I1123 09:01:00.916500 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29398141-vkxhx"] Nov 23 09:01:01 crc kubenswrapper[4681]: I1123 09:01:01.261861 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29398141-vkxhx" event={"ID":"43e20c13-b110-4732-9ebf-a9857afdad9a","Type":"ContainerStarted","Data":"1588bdcad16591566aeaed1d948d38ca749473d480b272f6e4ca3acfd19da9d4"} Nov 23 09:01:01 crc kubenswrapper[4681]: I1123 09:01:01.263001 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29398141-vkxhx" event={"ID":"43e20c13-b110-4732-9ebf-a9857afdad9a","Type":"ContainerStarted","Data":"39eeeafffe4a939be33fadcc17e9251203ba340fda88725312627ea694b9913f"} Nov 23 09:01:01 crc kubenswrapper[4681]: I1123 09:01:01.273684 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29398141-vkxhx" podStartSLOduration=1.273663654 podStartE2EDuration="1.273663654s" podCreationTimestamp="2025-11-23 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:01:01.2719525 +0000 UTC m=+8198.341461736" watchObservedRunningTime="2025-11-23 09:01:01.273663654 +0000 UTC m=+8198.343172891" Nov 23 09:01:04 crc kubenswrapper[4681]: I1123 09:01:04.294221 4681 generic.go:334] "Generic (PLEG): container finished" podID="43e20c13-b110-4732-9ebf-a9857afdad9a" containerID="1588bdcad16591566aeaed1d948d38ca749473d480b272f6e4ca3acfd19da9d4" exitCode=0 Nov 23 09:01:04 crc kubenswrapper[4681]: I1123 09:01:04.294292 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29398141-vkxhx" event={"ID":"43e20c13-b110-4732-9ebf-a9857afdad9a","Type":"ContainerDied","Data":"1588bdcad16591566aeaed1d948d38ca749473d480b272f6e4ca3acfd19da9d4"} Nov 23 09:01:05 crc kubenswrapper[4681]: I1123 09:01:05.610279 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29398141-vkxhx" Nov 23 09:01:05 crc kubenswrapper[4681]: I1123 09:01:05.741758 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c48dc\" (UniqueName: \"kubernetes.io/projected/43e20c13-b110-4732-9ebf-a9857afdad9a-kube-api-access-c48dc\") pod \"43e20c13-b110-4732-9ebf-a9857afdad9a\" (UID: \"43e20c13-b110-4732-9ebf-a9857afdad9a\") " Nov 23 09:01:05 crc kubenswrapper[4681]: I1123 09:01:05.742183 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/43e20c13-b110-4732-9ebf-a9857afdad9a-fernet-keys\") pod \"43e20c13-b110-4732-9ebf-a9857afdad9a\" (UID: \"43e20c13-b110-4732-9ebf-a9857afdad9a\") " Nov 23 09:01:05 crc kubenswrapper[4681]: I1123 09:01:05.742235 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43e20c13-b110-4732-9ebf-a9857afdad9a-config-data\") pod \"43e20c13-b110-4732-9ebf-a9857afdad9a\" (UID: \"43e20c13-b110-4732-9ebf-a9857afdad9a\") " Nov 23 09:01:05 crc kubenswrapper[4681]: I1123 09:01:05.742323 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43e20c13-b110-4732-9ebf-a9857afdad9a-combined-ca-bundle\") pod \"43e20c13-b110-4732-9ebf-a9857afdad9a\" (UID: \"43e20c13-b110-4732-9ebf-a9857afdad9a\") " Nov 23 09:01:05 crc kubenswrapper[4681]: I1123 09:01:05.751652 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43e20c13-b110-4732-9ebf-a9857afdad9a-kube-api-access-c48dc" (OuterVolumeSpecName: "kube-api-access-c48dc") pod "43e20c13-b110-4732-9ebf-a9857afdad9a" (UID: "43e20c13-b110-4732-9ebf-a9857afdad9a"). InnerVolumeSpecName "kube-api-access-c48dc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:01:05 crc kubenswrapper[4681]: I1123 09:01:05.751916 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43e20c13-b110-4732-9ebf-a9857afdad9a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "43e20c13-b110-4732-9ebf-a9857afdad9a" (UID: "43e20c13-b110-4732-9ebf-a9857afdad9a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:01:05 crc kubenswrapper[4681]: I1123 09:01:05.778665 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43e20c13-b110-4732-9ebf-a9857afdad9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43e20c13-b110-4732-9ebf-a9857afdad9a" (UID: "43e20c13-b110-4732-9ebf-a9857afdad9a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:01:05 crc kubenswrapper[4681]: I1123 09:01:05.791653 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43e20c13-b110-4732-9ebf-a9857afdad9a-config-data" (OuterVolumeSpecName: "config-data") pod "43e20c13-b110-4732-9ebf-a9857afdad9a" (UID: "43e20c13-b110-4732-9ebf-a9857afdad9a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:01:05 crc kubenswrapper[4681]: I1123 09:01:05.845541 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43e20c13-b110-4732-9ebf-a9857afdad9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:01:05 crc kubenswrapper[4681]: I1123 09:01:05.845576 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c48dc\" (UniqueName: \"kubernetes.io/projected/43e20c13-b110-4732-9ebf-a9857afdad9a-kube-api-access-c48dc\") on node \"crc\" DevicePath \"\"" Nov 23 09:01:05 crc kubenswrapper[4681]: I1123 09:01:05.845596 4681 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/43e20c13-b110-4732-9ebf-a9857afdad9a-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 23 09:01:05 crc kubenswrapper[4681]: I1123 09:01:05.845604 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43e20c13-b110-4732-9ebf-a9857afdad9a-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 09:01:06 crc kubenswrapper[4681]: I1123 09:01:06.312946 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29398141-vkxhx" event={"ID":"43e20c13-b110-4732-9ebf-a9857afdad9a","Type":"ContainerDied","Data":"39eeeafffe4a939be33fadcc17e9251203ba340fda88725312627ea694b9913f"} Nov 23 09:01:06 crc kubenswrapper[4681]: I1123 09:01:06.313012 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39eeeafffe4a939be33fadcc17e9251203ba340fda88725312627ea694b9913f" Nov 23 09:01:06 crc kubenswrapper[4681]: I1123 09:01:06.313084 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29398141-vkxhx" Nov 23 09:01:12 crc kubenswrapper[4681]: I1123 09:01:12.297274 4681 patch_prober.go:28] interesting pod/machine-config-daemon-wh4gt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:01:12 crc kubenswrapper[4681]: I1123 09:01:12.297914 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wh4gt" podUID="539dc58c-e752-43c8-bdef-af87528b76f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"